FEAT: SelectorGroupChat could use streaming for inner select_prompt #6286
Conversation
Can we address this first? #6161. Otherwise the

```python
if self._streaming:
    message: CreateResult | str = ""
    async for _message in self._model_client.create_stream(messages=select_speaker_messages):
        if isinstance(_message, LLMStreamEndEvent):
```
`_message` actually has the type `str` or `CreateResult`.
You’re right, that was my mistake. I misunderstood how `create_stream()` works and thought the log message was being leaked. But actually, the last message is always a `CreateResult`, and the `if` block just skips over `LLMStreamEndEvent`, so it worked by accident. Thanks for pointing it out!
I’ll take a look at #6161. And it would just change `response = await self._model_client.create(messages=select_speaker_messages)` to:

```python
if self._streaming:
    message: CreateResult | str = ""
    async for _message in self._model_client.create_stream(messages=select_speaker_messages):
        message = _message
    if isinstance(message, CreateResult):
        response = message
    else:
        raise ValueError("Model failed to select a speaker.")
else:
    response = await self._model_client.create(messages=select_speaker_messages)
```

So I don't think there would be any problem solving that issue. BTW, what kind of help do you expect for that issue? Do you want me to take the lead on it?
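As a standalone sketch of that drain-the-stream pattern (with `fake_stream` and a stand-in `CreateResult` dataclass as hypothetical substitutes for the real autogen types), the idea is that the final item yielded by the stream is the result, so the loop only keeps the last item:

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Union

@dataclass
class CreateResult:
    # Stand-in for the real autogen CreateResult type.
    content: str

async def fake_stream() -> AsyncIterator[Union[str, CreateResult]]:
    # A model stream yields str chunks, then a final CreateResult.
    for chunk in ["hel", "lo"]:
        yield chunk
    yield CreateResult(content="hello")

async def select() -> CreateResult:
    message: Union[CreateResult, str] = ""
    async for _message in fake_stream():
        message = _message  # keep only the last yielded item
    if isinstance(message, CreateResult):
        return message
    raise ValueError("Model failed to select a speaker.")

result = asyncio.run(select())
print(result.content)  # -> hello
```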
Yes, please address #6161 and then this one.
@ekzhu I resolved the merge conflict.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #6286      +/-   ##
==========================================
- Coverage   77.55%   77.52%   -0.03%
==========================================
  Files         202      202
  Lines       14759    14770      +11
==========================================
+ Hits        11446    11451       +5
- Misses       3313     3319       +6
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
...n/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_selector_group_chat.py (outdated review thread, resolved)
```python
if self._streaming:
    message: CreateResult | str = ""
    async for _message in self._model_client.create_stream(messages=select_speaker_messages):
        message = _message
```
See how emitting an event is done in `BaseGroupChatManager`: use

```python
await self._output_message_queue.put(ModelClientStreamingChunkEvent(content=_message))
```

to emit the chunk.
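A minimal sketch of that emit-as-you-stream pattern, using a plain `asyncio.Queue` and a hypothetical `fake_stream` in place of the real output queue and model client:

```python
import asyncio
from typing import AsyncIterator, List

async def fake_stream() -> AsyncIterator[str]:
    # Stand-in for create_stream(): yields token chunks.
    for chunk in ["hel", "lo"]:
        yield chunk

async def run() -> List[str]:
    queue: asyncio.Queue = asyncio.Queue()
    # Put each chunk on the output queue as it arrives,
    # mirroring the put(...ChunkEvent...) call above.
    async for _message in fake_stream():
        await queue.put(_message)
    items: List[str] = []
    while not queue.empty():
        items.append(queue.get_nowait())
    return items

print(asyncio.run(run()))  # -> ['hel', 'lo']
```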
How about calling `_log_speaker_selection` after the total response is assembled? Now I understand why it is not used. A case like this (adding `[` and `]` to make it easier to read, resembling `BaseGroupChatManager` output) looks weird:

```
---------- user ----------
say hello world
---------- SelectorGroupChatManager ----------
[hello]
---------- hello ----------
Hello, World!
---------- SelectorGroupChatManager ----------
Model failed to select a speaker after 3, using the previous speaker.
[hello][hello][,][ world][!][hello][,][ world][!]
---------- hello ----------
Hello, World!
```

The three real outputs are:

```
[hello] [hello, world!] [hello, world!]
```

So, how about calling `await self._output_message_queue.put(ModelClientStreamingChunkEvent(content=_message))` after `response` is set?
And without that code, the output is like this:

```
---------- user ----------
say hello world
---------- SelectorGroupChatManager ----------
['hello']
---------- hello ----------
Hello, World!
Model failed to select a speaker after 3, using the previous speaker.
---------- SelectorGroupChatManager ----------
['hello']
---------- hello ----------
Hello, World!
```

So with that line, the default `_log_speaker_selection` looks like it does not work. (However, that function is actually called; its output is just not shown.) Clearly I need your help to understand that function.
`_log_speaker_selection` shouldn't be mixed in here. Keep it separate in the base class. In this PR we are addressing the `SelectorGroupChat`-specific issue.

After every sequence of `ModelClientStreamingChunkEvent`, it should be followed with a separate message which has the full content of the streamed tokens. This can be a `SelectorEvent`, for example.
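A sketch of that suggestion, with hypothetical `ChunkEvent` and `SelectorEvent` dataclasses standing in for the real event types: stream each chunk onto the queue, then follow the sequence with a single event carrying the full accumulated content:

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, List, Union

# Hypothetical stand-ins for the event types discussed above.
@dataclass
class ChunkEvent:
    content: str

@dataclass
class SelectorEvent:
    content: str

async def fake_stream() -> AsyncIterator[str]:
    # Stand-in for the model client's token stream.
    for chunk in ["hel", "lo"]:
        yield chunk

async def run() -> List[Union[ChunkEvent, SelectorEvent]]:
    queue: asyncio.Queue = asyncio.Queue()
    chunks: List[str] = []
    async for chunk in fake_stream():
        chunks.append(chunk)
        await queue.put(ChunkEvent(content=chunk))
    # After the chunk sequence, emit one event with the full content.
    await queue.put(SelectorEvent(content="".join(chunks)))
    out: List[Union[ChunkEvent, SelectorEvent]] = []
    while not queue.empty():
        out.append(queue.get_nowait())
    return out

events = asyncio.run(run())
print([type(e).__name__ for e in events])
print(events[-1].content)  # -> hello
```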
(Deleting that comment, because now I understand your direction.)
(Branch force-pushed from ce0ac44 to 29f2c0f.)
@SongChiYoung generally, I think it is more important to emit the inner
@ekzhu Please check this PR.
Why are these changes needed?

This PR updates `SelectorGroupChat` to support streaming mode for `select_speaker`. It introduces a `streaming` argument: when set to `True`, `select_speaker` will use `create_stream()` instead of `create()`.

Additional context

Some models (e.g., QwQ) only work properly in streaming mode. To support them, the prompt selection step in `SelectorGroupChat` must also run with `streaming=True`.

Related issue number

Closes #6145

Checks