fix: re-trigger deferred context frame push on UserStoppedSpeakingFrame #4367
Open
elliottventures wants to merge 2 commits into
**elliottventures** (Author): @markbackman Wanted to make sure you guys saw this one as well
## What

When `LLMAssistantAggregator._handle_function_call_result` runs while `self._user_speaking` is `True`, the outer conditional currently drops the context push silently. The bot-speaking branch already handles the analogous case: it sets `_push_context_on_bot_stopped_speaking = True` and re-triggers on `BotStoppedSpeakingFrame`. This PR mirrors that pattern for the user side.

## Why
A short user trigger utterance whose transcription-driven turn-start races the function-call result hits this window:
1. `[Transcription:user] [Next.]` arrives
2. `_run_function_call` fires (driven by the same transcription)
3. `_on_user_turn_started` sets `self._user_speaking = True`
4. `FunctionCallResultFrame` arrives at the aggregator
5. `if run_llm and not self._user_speaking:` evaluates False → the push is silently dropped
6. `_on_user_turn_stopped` fires a few ms later, unsetting `_user_speaking`, but nothing re-evaluates the dropped push

Result: the tool response never reaches the LLM service. For Gemini Live specifically, this manifests as the model sitting silent, waiting for a response to its function call, until the user speaks again and the interruption path kicks in. TTFB on affected turns can reach minutes.
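The race above can be reproduced with a toy model of the aggregator. The class and method names below are simplified stand-ins for illustration, not the actual pipecat implementation:

```python
# Toy model of the race: the guard drops the push when the user-speaking
# flag happens to be set at the moment the function-call result lands.
from dataclasses import dataclass, field


@dataclass
class ToyAggregator:
    user_speaking: bool = False
    pushed: list = field(default_factory=list)

    def handle_function_call_result(self, run_llm: bool) -> None:
        # Mirrors the buggy guard described above.
        if run_llm and not self.user_speaking:
            self.pushed.append("context")

    def on_user_turn_stopped(self) -> None:
        # Unsets the flag, but nothing re-evaluates the dropped push.
        self.user_speaking = False


agg = ToyAggregator()
agg.user_speaking = True               # _on_user_turn_started wins the race
agg.handle_function_call_result(True)  # FunctionCallResultFrame arrives
agg.on_user_turn_stopped()             # fires a few ms later
print(agg.pushed)                      # [] -> tool response never pushed
```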
The bot-speaking branch has had this covered for a while via `_push_context_on_bot_stopped_speaking` plus the re-trigger in `BotStoppedSpeakingFrame`. The user side was missing the equivalent.

## The fix
Five small edits to `LLMAssistantAggregator`, all mirroring the existing bot-side pattern:

- New `_push_context_on_user_stopped_speaking` flag in `__init__`
- Flag reset in `reset()` and `push_context_frame()` alongside the bot flag
- `run_llm` check changed from `if run_llm and not self._user_speaking:` to `if run_llm:`, with a new `elif self._user_speaking:` branch inside that sets the flag (before the existing `elif self._bot_speaking:`)
- `process_frame`'s `UserStoppedSpeakingFrame` handler flushes the deferred push if the flag is set and the bot isn't speaking

Total +18 / −1 lines.
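The mirrored pattern can be sketched as follows. The method bodies here are illustrative of the deferred-push idea, not the actual pipecat source:

```python
# Sketch of the user-side deferred-push pattern, mirroring the bot side.
class ToyAggregatorFixed:
    def __init__(self):
        self._user_speaking = False
        self._bot_speaking = False
        self._push_context_on_user_stopped_speaking = False  # new flag
        self._push_context_on_bot_stopped_speaking = False
        self.pushed = []

    def push_context_frame(self):
        # Reset both deferred-push flags alongside the actual push.
        self._push_context_on_user_stopped_speaking = False
        self._push_context_on_bot_stopped_speaking = False
        self.pushed.append("context")

    def handle_function_call_result(self, run_llm: bool):
        if run_llm:
            if not self._user_speaking and not self._bot_speaking:
                self.push_context_frame()
            elif self._user_speaking:
                # New branch: defer instead of dropping; re-triggered
                # on UserStoppedSpeakingFrame below.
                self._push_context_on_user_stopped_speaking = True
            elif self._bot_speaking:
                self._push_context_on_bot_stopped_speaking = True

    def on_user_stopped_speaking(self):
        self._user_speaking = False
        # Flush the deferred push if the bot isn't also speaking.
        if self._push_context_on_user_stopped_speaking and not self._bot_speaking:
            self.push_context_frame()


agg = ToyAggregatorFixed()
agg._user_speaking = True
agg.handle_function_call_result(True)  # deferred, not dropped
agg.on_user_stopped_speaking()         # flushed -> LLM gets the tool result
print(agg.pushed)                      # ['context']
```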
## Testing
- `tests/test_context_aggregators_universal.py`: 37/37 pass pre- and post-change.
- Live repro: before the fix, the affected turn (… "Freezer is on top. Next.") stalled silently. After this fix: clean turn completion across 20+ test turns.

Happy to add a targeted unit test for the race if a maintainer has a preferred shape; it needs a bit of plumbing to simulate the specific
`UserStartedSpeakingFrame → FunctionCallResultFrame → UserStoppedSpeakingFrame` ordering.

## Related
Companion to #4366. Both surfaced during the same Gemini Live function-calling diagnosis.
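For reference, one hedged shape for the ordering test mentioned under Testing, sketched against a stand-in aggregator driven by a plain event list rather than pipecat's real pipeline plumbing (every name here besides the frame classes is invented for illustration):

```python
# Hypothetical ordering test: feed the three frames in the racing order
# and assert exactly one context push survives.
class FakeAggregator:
    def __init__(self):
        self._user_speaking = False
        self._deferred = False
        self.pushes = 0

    def feed(self, event: str) -> None:
        if event == "UserStartedSpeakingFrame":
            self._user_speaking = True
        elif event == "FunctionCallResultFrame":
            if self._user_speaking:
                self._deferred = True  # fixed behavior: defer, don't drop
            else:
                self.pushes += 1
        elif event == "UserStoppedSpeakingFrame":
            self._user_speaking = False
            if self._deferred:
                self._deferred = False
                self.pushes += 1


def test_result_during_user_speech_is_flushed_on_stop():
    agg = FakeAggregator()
    for event in ("UserStartedSpeakingFrame",
                  "FunctionCallResultFrame",
                  "UserStoppedSpeakingFrame"):
        agg.feed(event)
    assert agg.pushes == 1  # exactly one context push reaches the LLM


test_result_during_user_speech_is_flushed_on_stop()
```

The real test would drive the actual `LLMAssistantAggregator` through the frame-processing pipeline instead of a string-event stub.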