fix: do not create many redis subscriptions per token - fast yielding background #4419
Conversation
@masenf any idea why fast yielding background tasks seem to lock up after some state updates?
@masenf I accidentally pushed unfinished changes and forgot to mention that the test needs to be run with redis.
bonus: properly clean up StateManager connections on disconnect
I pushed 389f4c7, which fixes the issue. However, I would merge this as-is, because it works and does not change the behavior much. I would then try implementing the other idea in a follow-up.
state_is_locked = await self._try_get_lock(lock_key, lock_id)

@override
async def disconnect(self, token: str):
This is probably wrong. Clients can reconnect. Need to check if there are any unexpired redis tokens.
edit: oops, I thought this was the StateManagerMemory.disconnect func; either way, we need to be careful about reconnects, which can be triggered by a simple page refresh.
I had a draft comment on my in-progress review calling out this function, but I suppose your testing found it first. We need to keep the in-memory states around. We do have an internal bug to add an expiry for the memory state manager, but it just hasn't been high priority because we assume no one is using the memory state manager in production.
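A minimal sketch of the shape discussed here, assuming a hypothetical in-memory state manager: disconnect leaves the state in place so a page refresh can reconnect with the same token, and actual cleanup happens in a separate expiry sweep. All names and attributes below are illustrative, not Reflex's actual API.

```python
import time


class InMemoryStateManagerSketch:
    """Illustrative only: names and structure are assumptions, not Reflex's API."""

    def __init__(self, expiry_seconds: float = 3600.0) -> None:
        self.states: dict[str, object] = {}    # token -> state instance
        self.last_seen: dict[str, float] = {}  # token -> time of last disconnect
        self.expiry_seconds = expiry_seconds

    async def disconnect(self, token: str) -> None:
        # Do NOT drop self.states[token] here: a page refresh disconnects and
        # immediately reconnects with the same token, so the state must survive.
        # Just record when the client was last seen so a separate sweep can
        # expire genuinely abandoned states later.
        self.last_seen[token] = time.monotonic()

    async def sweep_expired(self) -> None:
        # The "expiry for the memory state manager" mentioned above: drop only
        # states whose clients have been gone longer than the configured TTL.
        cutoff = time.monotonic() - self.expiry_seconds
        for token, seen in list(self.last_seen.items()):
            if seen < cutoff:
                self.states.pop(token, None)
                self.last_seen.pop(token, None)
```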
Maybe one asyncio Lock is enough; then we won't need to garbage collect.
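A rough sketch of that idea: instead of keeping a per-token asyncio lock (which later needs garbage collection), all tasks in a worker wait on a single asyncio.Condition and simply re-try their own Redis lock whenever any release comes in. Everything below, including the `try_acquire_redis_lock` callable, is hypothetical and not existing Reflex code.

```python
import asyncio


class SingleConditionLockWaiter:
    """Hypothetical sketch: one shared asyncio.Condition per worker, no per-token locks."""

    def __init__(self) -> None:
        self._releases = asyncio.Condition()

    async def on_any_lock_release(self) -> None:
        # Called by the worker's single pubsub listener whenever *any* lock
        # release message arrives; every waiting task re-checks its own lock.
        async with self._releases:
            self._releases.notify_all()

    async def wait_for_lock(self, try_acquire_redis_lock, timeout: float = 10.0) -> None:
        # try_acquire_redis_lock is a hypothetical async callable that attempts
        # a SET NX on this token's lock key and returns True on success.
        async def _wait() -> None:
            async with self._releases:
                while not await try_acquire_redis_lock():
                    await self._releases.wait()

        await asyncio.wait_for(_wait(), timeout=timeout)
```

The trade-off is that lock attempts within one worker are serialized by the Condition, but there is no per-token object left to garbage collect.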
Note to self: subscribe to redis expire events to clean up locks; this may be easier with #4459.
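For reference, listening for expired-key events with redis-py's asyncio client looks roughly like the sketch below. It assumes the server has keyspace notifications enabled (`notify-keyspace-events` including `Ex`), and the `local_locks` dict and `_lock` key suffix are made up for illustration, not taken from this PR.

```python
import asyncio

import redis.asyncio as redis

local_locks: dict[str, asyncio.Lock] = {}  # hypothetical per-token bookkeeping


async def reap_expired_locks(url: str = "redis://localhost:6379/0") -> None:
    client = redis.from_url(url)
    pubsub = client.pubsub()
    # Requires the server to have keyspace notifications enabled, e.g.
    # CONFIG SET notify-keyspace-events Ex
    await pubsub.psubscribe("__keyevent@0__:expired")
    async for message in pubsub.listen():
        if message["type"] != "pmessage":
            continue
        key = message["data"]
        if isinstance(key, bytes):
            key = key.decode()
        # The "_lock" suffix is an assumption about the lock key naming.
        if key.endswith("_lock"):
            local_locks.pop(key, None)
```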
@masenf is it true that the current state locking does not guarantee FIFO? Imo, the longest-waiting task should get the lock next.
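For context on the FIFO point: a retry/poll loop over a Redis `SET NX` lock hands the lock to whichever waiter happens to retry first, not to the longest-waiting one. A per-token queue of futures restores FIFO order within a single worker; the sketch below is purely illustrative and not code from this PR (cross-worker exclusion would still rely on the Redis lock).

```python
import asyncio
from collections import defaultdict, deque


class FifoTokenLocks:
    """Illustrative sketch: per-token FIFO lock hand-off within one worker."""

    def __init__(self) -> None:
        self._holders: set[str] = set()
        self._waiters: dict[str, deque[asyncio.Future]] = defaultdict(deque)

    async def acquire(self, token: str) -> None:
        if token not in self._holders:
            self._holders.add(token)
            return
        fut = asyncio.get_running_loop().create_future()
        self._waiters[token].append(fut)  # join the back of the queue
        await fut  # resolved by release() strictly in arrival order

    def release(self, token: str) -> None:
        queue = self._waiters[token]
        while queue:
            fut = queue.popleft()
            if not fut.cancelled():
                fut.set_result(None)  # hand off to the longest-waiting task
                return
        self._holders.discard(token)
        self._waiters.pop(token, None)
```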
I have a local draft where only one pubsub instance is created per reflex worker. All lock releases are distributed to the waiting tasks via asyncio locks.
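That draft is not part of this PR, but the shape described is roughly the following: the worker owns exactly one pubsub subscription for lock-release messages and fans each message out to the local tasks waiting on that token via asyncio events. The class, channel naming, and method names below are assumptions for illustration only.

```python
import asyncio
from collections import defaultdict

import redis.asyncio as redis


class SharedPubSubLockNotifier:
    """Illustrative sketch: one pubsub subscription per worker instead of one per token."""

    def __init__(self, url: str = "redis://localhost:6379/0") -> None:
        self._client = redis.from_url(url)
        self._events: dict[str, asyncio.Event] = defaultdict(asyncio.Event)
        self._listener = None

    async def start(self) -> None:
        # Exactly one subscription for the whole worker, no matter how many
        # tokens currently have waiting tasks.
        self._pubsub = self._client.pubsub()
        await self._pubsub.psubscribe("lock_release:*")  # hypothetical channel scheme
        self._listener = asyncio.create_task(self._listen())

    async def _listen(self) -> None:
        async for message in self._pubsub.listen():
            if message["type"] != "pmessage":
                continue
            channel = message["channel"]
            if isinstance(channel, bytes):
                channel = channel.decode()
            token = channel.removeprefix("lock_release:")
            # Wake every local task waiting on this token; each of them then
            # retries its own Redis lock acquisition.
            self._events[token].set()
            self._events[token] = asyncio.Event()

    async def wait_for_release(self, token: str) -> None:
        await self._events[token].wait()
```

A waiting task would loop: try the Redis lock, and if that fails, `await wait_for_release(token)` before retrying; the Redis lock itself still provides exclusion across workers.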
Closing in favor of #5932, which was inspired by this PR and also incorporates this concept.
@benedikt-bartscher if you have some cycles to test on #5932, I'd appreciate any input you have. I'm working on some test cases for it, but since this isn't my main project at the moment, it might have to wait until next weekend.