
Virtual effect classes #4549

Draft · wants to merge 4 commits into base: main

Conversation

@TripleWhy (Contributor) commented Feb 12, 2025

Ignore some of the changes in this PR...

What I am trying to figure out with this PR is whether you'd be interested in an overhaul of how effects work. I created a virtual base Effect class and implemented two effects on top of it:

wled00/effects/Effect.h
wled00/effects/StaticEffect.h
wled00/effects/PaletteEffect.h

Note that this effect class does not handle a segment as a whole like current effects do, but instead handles one pixel at a time. See how this is called in wled00/FX_fcn.cpp WS2812FX::service().
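The per-pixel idea can be sketched roughly like this (all names here, such as `Effect::nextColor` and `serviceSegment`, are illustrative assumptions, not the PR's actual declarations):

```cpp
#include <cstdint>

// Hypothetical sketch of a per-pixel effect base class.
struct Effect {
  virtual ~Effect() = default;
  // Called once per pixel per frame: returns the new color for (x, y).
  // oldColor is the pixel's current color, for effects that need it.
  virtual uint32_t nextColor(int x, int y, uint32_t oldColor, uint32_t nowMs) = 0;
};

// A trivial "static color" effect under this interface.
struct StaticEffect : Effect {
  uint32_t color;
  explicit StaticEffect(uint32_t c) : color(c) {}
  uint32_t nextColor(int, int, uint32_t, uint32_t) override { return color; }
};

// The service loop then drives every effect the same way, so segment
// iteration logic lives in one place instead of in every effect.
void serviceSegment(Effect &fx, uint32_t *pixels, int w, int h, uint32_t nowMs) {
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
      pixels[y * w + x] = fx.nextColor(x, y, pixels[y * w + x], nowMs);
}
```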

Advantages of this approach:

  • not every effect needs to understand the inner workings of how segments work
  • when using sparse setups, such as matrices where not all pixels map to physical LEDs, this could potentially save some work. (Or at least, implementing this kind of saving would require a single change to service() rather than changes to every effect.)
  • theoretically, this design should work well with potential future sparse 3D setups (where instead of building dense LED cubes, you throw a bunch of LEDs into space and then map their 3D positions)
  • features like the 2D expansion, which is currently used only in the palette effect, could be implemented globally for all effects rather than once per effect
  • EDIT: Another potential advantage is that this design would allow effects to be layered on top of each other without losing information due to brightness being applied immediately (however, I don't think this would work for effects that use color values from previous frames)

Disadvantages:

  • theoretically one virtual function call per pixel. (A good compiler should be able to resolve the virtual function once per service() call instead of once per pixel iteration, but C++ provides no way of enforcing that.)
  • currently, this code relies on C++17 and therefore only compiles on ESP8266

Outlook: so far I have only implemented fading transitions (untested). Theoretically, an extended version of this Effect class could also handle transitions, so that there would be only one code base for effects and transitions.

What do you think? Are you interested in this kind of work? Should I invest some time into advancing it?

coderabbitai bot commented Feb 12, 2025

Review skipped: draft detected.
@DedeHai (Collaborator) commented Feb 13, 2025

I like the base idea of only calculating pixels that actually exist, as you say, big advantage for mapped setups.

  • how does this impact code size and execution speed for normal setups?
  • how would you apply blurring which relies on neighbouring pixels?

@TripleWhy (Contributor, Author) commented Feb 13, 2025

I didn't produce a clean version that only contains the effect changes, so it's a bit hard to say. But the FPS are indeed a bit lower than on main, which is what I would expect due to the virtual functions and the loss of per-effect loops that can be optimized as a whole. I believe it opens up more room for other optimizations though.

Test: 16x16 dense 2D matrix, palette effect.

This PR:
38 FPS
RAM: [====== ] 56.7% (used 46464 bytes from 81920 bytes)
Flash: [======= ] 73.7% (used 770055 bytes from 1044464 bytes)

Main 2d6ad41:
47 FPS
RAM: [====== ] 57.3% (used 46964 bytes from 81920 bytes)
Flash: [======== ] 84.2% (used 879751 bytes from 1044464 bytes)

Blurring:
What is blurring, how is it currently implemented? Where can I look at the code to see some?

If it is just part of some effects, I guess it would continue working the way it did before? If it requires reading the current state of the pixels, that is already part of this PR: the effect function takes x, y, and the current color as inputs and outputs one new color.

@DedeHai (Collaborator) commented Feb 13, 2025

That is a significant drop in FPS; unless this can be optimized, it's a no-go.

Look at void Segment::blur2D() for example. There are other pixel-manipulation functions in that file that also need to keep working.

@blazoncek (Collaborator)

Thanks.
This is a hefty change that I'm afraid nobody would want to undertake (reimplementing all existing effects, if that is the goal).

Regarding "segment workings awareness": I've had a lengthy discussion in the past about the pros and cons of the approach PixelBlaze has taken (and why I chose not to go that way with WLED). Some things are easy to implement the way PixelBlaze does it, some are nearly impossible. So the best would be to extend the segment API to allow both approaches. My first attempt was to create a setPixelColor() that takes relative coordinates (range 0-1, though it could also be -1 to 1), which would then require an appropriate mapping to turn a coordinate into actual pixel drawing. This would also allow sparse set-ups. I tried, but in the end abandoned the attempt, as it would have required writing a new effect to use it.
The issue (as I see it) is how you would mix regular/legacy and new/coordinate-agnostic effects, or, to put it the other way around, how you would mix mappings.
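The relative-coordinate setPixelColor() idea could be sketched as follows (hypothetical names; the mapping shown is a plain dense-grid clamp, whereas a real implementation would consult the configured mapping table):

```cpp
#include <cmath>
#include <cstdint>

// Effects address the canvas in [0, 1] on each axis; a mapping layer
// resolves the physical pixel. Sparse set-ups would plug a different
// mapping in here.
struct Canvas {
  uint32_t *pixels;
  int width, height;

  // rx, ry in [0, 1]; out-of-range requests are clamped to the edge.
  void setPixelColorRel(float rx, float ry, uint32_t color) {
    int x = static_cast<int>(std::lround(rx * (width - 1)));
    int y = static_cast<int>(std::lround(ry * (height - 1)));
    if (x < 0) x = 0;
    if (x >= width) x = width - 1;
    if (y < 0) y = 0;
    if (y >= height) y = height - 1;
    pixels[y * width + x] = color;
  }
};
```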

If you are unaware of PixelBlaze, please Google it. It has some nice approaches (including sparse set-ups) but is unfortunately a closed source project.

@TripleWhy force-pushed the virtualEffectClasses branch from 91a7e7d to 23cafab on February 25, 2025
@TripleWhy (Contributor, Author) commented Feb 25, 2025

Here is a new version that

  • only contains the effect class changes.
  • avoids repeated virtual function resolution by emulating virtual functions with function pointers and static_casts. This slightly improved the FPS.
  • calls getPixelColorXY lazily, so it only runs when needed; most effects don't need it at all, or not for all pixels. For the palette effect it is currently never called. This improved the FPS a lot.
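The function-pointer emulation and the lazy old-color lookup described above might look roughly like this (hypothetical names; the PR's actual code may differ):

```cpp
#include <cstdint>

struct EffectBase;
// The old color is supplied as a callback: effects that don't need the
// previous frame's color simply never invoke it, so the (expensive)
// getPixelColorXY-style lookup is skipped entirely.
using OldColorFn = uint32_t (*)(void *ctx, int x, int y);
using PixelFn = uint32_t (*)(EffectBase *, int x, int y, OldColorFn, void *ctx);

struct EffectBase {
  PixelFn pixel;  // set once by the concrete effect ("emulated vtable")
};

struct SolidEffect : EffectBase {
  uint32_t color;
  static uint32_t pixelImpl(EffectBase *self, int, int, OldColorFn, void *) {
    // static_cast recovers the concrete type, emulating the virtual call
    return static_cast<SolidEffect *>(self)->color;
  }
  explicit SolidEffect(uint32_t c) { pixel = &SolidEffect::pixelImpl; color = c; }
};

void service(EffectBase &fx, uint32_t *px, int w, int h, OldColorFn old, void *ctx) {
  PixelFn fn = fx.pixel;  // resolved once per service(), not per pixel
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
      px[y * w + x] = fn(&fx, x, y, old, ctx);
}
```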

In any case, calling getPixelColorXY was intended only for effects that use the previous frame's pixel colors. For stacked effects, this PR's approach would be much faster than the current implementation, since it avoids those calls entirely.

Considering @blazoncek's segment blending however, both stacking approaches should become obsolete anyway.
Are there effects that use the pixel color from the previous frame? What do they do with it?
If we could get rid of getPixelColorXY, that would simplify this code quite a bit.

Using these changes, I managed to match the performance of the main branch I developed against (47 FPS). The current push is rebased onto current main, so here are some numbers for that. For simplicity, I compiled both with WLED_DISABLE_MODE_BLEND. Same test as before: 16x16 dense matrix with rotating palette effect on ESP8266:

fa2949a main WLED_DISABLE_MODE_BLEND
RAM: [====== ] 57.1% (used 46812 bytes from 81920 bytes)
Flash: [======== ] 84.6% (used 883107 bytes from 1044464 bytes)
43 FPS

23cafab virtualEffectClasses WLED_DISABLE_MODE_BLEND
RAM: [====== ] 56.6% (used 46348 bytes from 81920 bytes)
Flash: [======= ] 74.4% (used 777139 bytes from 1044464 bytes)
42 FPS

@blazoncek (Collaborator)

Considering @blazoncek's segment blending

According to @DedeHai the speed increase with the blending approach is about 10%, as it does away with traversing the bus logic.
This (segment layering/blending) alone does not prohibit changes in how effect functions are created. However, encapsulating them into classes may make them challenging to use in usermods or other custom creations (like overlays, etc.), though I am not well versed enough in C++ to know for sure (see the work by @willmmiles on converting usermods into libraries).

As mentioned earlier, I'm afraid of who will take on the effort of changing each and every one of the 200+ effects currently available.

But if we put that aside, my main question is: "what was the driving force for this change"? If I understand the implementation correctly, an effect no longer renders the entire "canvas" in a single call but only returns a single pixel color for the supplied coordinates (with WS2812FX::service() being responsible for calling the effect function for each pixel in the segment). Correct?
Except for the (now redundant) calls to SEGMENT.virtualXXXXX() I do not see any real difference. The effect will still need to do something with X and Y without knowing what max X and max Y are. Hence it will not be able to scale an image, for example, or draw a rectangle on the border of the frame.

I considered the PixelBlaze approach back in 2022; it is similar to this PR for some of its "patterns", but I ultimately decided it would be too much hassle to modify every effect and the benefit would be minor (as it currently stands). If someone could figure out an approach that allows the two different APIs to coexist (i.e. the old one, where an effect draws the entire canvas in a single call, and the new one, where it has no knowledge of the canvas but responds to virtual (X,Y) coordinates), that would be the best IMO.

Are there effects that use the pixel color from the previous frame? What do they do with it?

There are plenty. Especially 2D effects that do a lot of pixel moving (i.e. shift all pixels left, etc.). Some use old pixel values to evolve pixels over time. If you want to omit that, the effect will need to store such information elsewhere, making the rewrite even more difficult.

@TripleWhy (Contributor, Author) commented Feb 26, 2025

According to @DedeHai the speed increase with blending approach is about 10%. As it does away with traversing bus logic.

Expect another uplift for every call to getPixelColor that blending can eliminate. In my version here where I went from one call per pixel to zero calls, the improvement was ~28% in FPS.

However encapsulating them into class(es) may prove challenging to use them in usermods or other custom creations (like overlays, etc)

Well, that should be about the same effect API change as it is for the main effects. IIRC usermods are unmaintained though...
Anyway, without breaking things, options for progress are limited. I waited to introduce this concept until after 0.15 was released; I don't see an issue with introducing major changes when you start developing a new version.

As mentioned earlier I'm afraid who will take on the effort of changing each and every effect out of about 200+ currently available.

Don't worry about that, I'd do it if you would merge such a change. But it's a bit of work, which is why I'm asking whether you want this at all before doing it.

If I understand implementation correctly effect no longer renders entire "canvas" in a single call but only returns a single pixel color for supplied coordinates (WS2812FX::service() being responsible to call effect function for each pixel in segment). Correct?

Correct.

Except for the (now redundant) calls to SEGMENT.virtualXXXXX() I do not see any real difference.

Well, first of all it avoids some code repetition. It also enables features to be implemented globally rather than once per effect, and removes the need to check and adjust every single effect when you make a change (like making indexes unsigned, implementing effect layering, or changing the API for setting and getting pixel colors). I'm sure you remember the last time you had to do that, but you shouldn't have to.

In the long run, it would make a lot of sense to iterate over physical LEDs and map segments onto them, rather than the other way round as it is now.

  • That would avoid unnecessary work for sparse setups.
  • That would enable non-grid-mapped and 3D setups. Instead of iterating an unbounded coordinate space, you iterate a fixed number of LEDs.
  • It would allow information to be passed from one layered effect to the next without loss due to already-applied brightness, etc. (this is what oldColor was intended for).
  • Segment blending would be something like 1-5 lines of code (plus blend math functions/lambdas and configuration code).
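Iterating physical LEDs instead of segment space could be sketched like this (illustrative types and signature, not the PR's code; the point is that sparse gaps simply never appear in the LED list):

```cpp
#include <cstdint>
#include <vector>

// Each physical LED carries its mapped position in (possibly 3D)
// space; unmapped grid cells don't exist as entries at all.
struct MappedLed {
  float x, y, z;    // mapped position
  uint32_t color;
};

// Hypothetical per-pixel effect signature: position in, color out.
using EffectFn = uint32_t (*)(float x, float y, float z, uint32_t oldColor);

// Only real LEDs are visited, so sparse setups do no wasted work.
void serviceLeds(std::vector<MappedLed> &leds, EffectFn fx) {
  for (MappedLed &led : leds)
    led.color = fx(led.x, led.y, led.z, led.color);
}
```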

Effect will still need to do something with X and Y but without knowing what max X and max Y are.

Some effects need that, yes. There are solutions to that, for example:

  • Keep everything as it is: effects currently read the dimensions from globals, and keeping that wouldn't break anything.
  • Pass the max values to the effect as well, in addition to x and y.
  • Use relative coordinates.
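The second option (passing the dimensions alongside x and y) could be sketched as follows, using the border-rectangle case from earlier in the thread as the example effect (hypothetical names and signature):

```cpp
#include <cstdint>

// Everything an effect needs to render one pixel, including the
// segment dimensions for effects that must scale or find edges.
struct PixelContext {
  int x, y;          // pixel being rendered
  int maxX, maxY;    // segment dimensions
  uint32_t oldColor; // current color, for effects that need it
};

// An effect drawing a one-pixel white border, which is exactly the
// kind of effect that needs to know the dimensions.
uint32_t borderEffect(const PixelContext &ctx) {
  bool edge = ctx.x == 0 || ctx.y == 0 ||
              ctx.x == ctx.maxX - 1 || ctx.y == ctx.maxY - 1;
  return edge ? 0xFFFFFFu : 0x000000u;
}
```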

If someone could figure out the approach to allow two different APIs to coexist (i.e. old where effect draws entire canvas in a single call and new where it has no knowledge of canvas but responds to virtual (X,Y) coordinates) that would be the best IMO.

That's simple: keep everything as it is in main and add a new (full-canvas) effect function that does what I do in this POC. However, that would lose pretty much all of the advantages, as it would not enable most (any?) of the refactoring possibilities discussed in this thread.

Are there effects that use the pixel color from the previous frame? What do they do with it?

There are plenty. Especially 2D effects that do a lot of pixel moving (i.e. shift all pixels left, etc). Some use old pixel values to evolve pixels over time.

Well that doesn't sound safe... especially when layering or transitions are involved.

"what was the driving force for this change"?

Well, to be perfectly honest, it's just that the current state of the code base is so poor and I would like to improve it, which would also make working on it easier. I'm an idealist; don't question my motives 😅

I listed a few advantages in the PR's description, but here are a few more :)
(I wrote this before I wrote most of the above, so there are some repetitions)

  • As you identified correctly, relative coordinates (be they floats or scaled integers) would be possible with this, allowing for dimension-agnostic effects.
  • Dimension-agnostic effects could be used to implement the anamorphic mode switch of the palette effect globally instead of per effect, in addition to the other mentioned advantages (sparse setups, 3D setups, 2D expansion (OK, that one is enabled by the API change in general)).
  • getPixelColor would be unambiguous and return the previous frame's color, not the previous effect's color. It would still only return the final color after blending, not the current effect's previous color though.
  • Avoids information loss due to segment brightness. (see above)
  • Effect transitions (the ones disabled with WLED_DISABLE_MODE_BLEND) would be much simpler to implement
  • Segment layering (the one Blaž is(?) working on) would be easy to implement

@DedeHai (Collaborator) commented Feb 26, 2025

Don't worry about that, I'd do it if you would merge such a change. But its a bit of work, so that's why I'm asking if you want this at all before doing it.

I have updated/migrated quite a few effects. I must admit I have not yet looked at what your code does exactly (it's a bit above my C++ understanding), but I can tell you that migrating an effect can take several hours, so you need to be aware that you are volunteering for potentially hundreds of hours of tedious work in migrating and testing. If you think it's worth it, also consider that the particle system is now part of WLED and needs handling as well.

@blazoncek (Collaborator)

There are two nice examples that may prove difficult to implement with the new system: Tetrix and Bouncing Balls.
In the same category are all temporal effects (those that rely on time instead of place).
