removing many dynamic schedules at once is not performant #677
Actually, it appears that we'd only have to update the …
Perhaps a different suggestion. It seems like the root issue here is that there is no story for migrating from dynamic schedules back to a static schedule if you make the same mistake we did. Instead of the idea of removing many schedules at once, what about purging the entire dynamic schedule?

```ruby
def purge_dynamic_schedule!
  if !dynamic
    redis.del(:persistent_schedules)
    redis.del(:schedules_changed)
    reload_schedule!
  end
end
```

This way it's "opt-in": users may wish to drain their dynamic schedules naturally after moving off, but they could optionally just kill the entire …
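A hypothetical usage sketch to show the opt-in flow (assuming the proposed method above were exposed as `Resque.purge_dynamic_schedule!`; that name and the method itself are part of the proposal, not an existing API):

```ruby
# Back on a static schedule: dynamic scheduling is switched off first, so the
# guard in the proposed method passes, then one call clears whatever the old
# dynamic schedules persisted in Redis.
Resque::Scheduler.dynamic = false
Resque.purge_dynamic_schedule!   # proposed method, not yet part of resque-scheduler
```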
@ledhed2222 Bulk removal is welcome! Purging as well. Good call.
Howdy. I'm working on this PR but I'm having trouble passing the test suite; could you provide some insight?
There isn't any documentation on your test suite, so even just a few pointers would probably help me here :) Thank you! I would love to help get these features in.
Bump 😊
bump ^
I have a situation where ~80K dynamic jobs were created before we realized that this approach was not going to work (we're using a static schedule now, which is much saner). Cleaning up all of the persisted schedules using the public interface is straightforward.
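A minimal sketch of that cleanup, assuming resque-scheduler's public `Resque.schedule` and `Resque.remove_schedule` helpers (not the exact snippet from the report):

```ruby
# Remove every persisted schedule one name at a time through the public API.
# Each call issues its own HDEL and then reloads the whole schedule, which is
# exactly what makes this painful at ~80K entries.
Resque.schedule.keys.each do |name|
  Resque.remove_schedule(name)
end
```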
However, this will not only make one `hdel` call to Redis for each key, but will also reload the entire schedule each time! I can imagine a simple change to the `remove_schedule` method that would allow a variable number of schedules to be removed at once; a rough sketch of what I mean follows below. What do you guys think about that? Happy to make a PR if that makes sense.
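A rough sketch of that change, reusing the internals mentioned elsewhere in this thread (`:persistent_schedules`, `:schedules_changed`, `reload_schedule!`); the splat signature and the single-`hdel` call are illustrative, not a shipped patch:

```ruby
# Accept any number of schedule names, delete them from the persisted hash
# in one HDEL round trip, mark each as changed, and reload the schedule
# only once at the end instead of once per name.
def remove_schedule(*names)
  return if names.empty?

  redis.hdel(:persistent_schedules, *names)
  names.each { |name| redis.sadd(:schedules_changed, name) }
  reload_schedule!
end
```

Calling it as `Resque.remove_schedule(*stale_names)` would then replace tens of thousands of Redis round trips and schedule reloads with a single pass.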