feat(ui): add live status updates during agent execution #383
0xhis wants to merge 2 commits into usestrix:main
Conversation
Greptile Summary
This PR adds real-time status messages to the TUI during agent execution, replacing the generic "Initializing" state with granular labels ("Compressing memory…", "Waiting for LLM provider…", "Generating response…", "Executing {tools}…", "Setting up sandbox environment…"). It also fixes two pre-existing bugs: a Rich Text span out-of-range crash when merging renderables (spans are now sanitized) and broken indentation for multi-line thoughts in ThinkRenderer.
Confidence Score: 4/5
Important Files Changed
Prompt To Fix All With AI
This is a comment left during a code review.
Path: strix/interface/tui.py
Line: 1709-1715
Comment:
**Thinking blocks silently dropped on interrupted messages**
When a message has both `thinking_blocks` in its metadata and `metadata["interrupted"] == True`, the thinking block renderables collected in `renderables` are never included in the final output — the early `return` bypasses them entirely. This is a regression introduced by adding the thinking block logic above the `interrupted` check.
```suggestion
if metadata.get("interrupted"):
streaming_result = self._render_streaming_content(content)
interrupted_text = Text()
interrupted_text.append("\n")
interrupted_text.append("⚠ ", style="yellow")
interrupted_text.append("Interrupted by user", style="yellow dim")
return self._merge_renderables([*renderables, streaming_result, interrupted_text])
```
How can I resolve this? If you propose a fix, please make it concise.
Last reviewed commit: "feat(ui): add live s..."
strix/interface/tui.py (Outdated)

```
@@ -1669,7 +1714,18 @@ def _render_chat_content(self, msg_data: dict[str, Any]) -> Any:
    interrupted_text.append("Interrupted by user", style="yellow dim")
    return self._merge_renderables([streaming_result, interrupted_text])
```
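The regression described above is a common early-return pattern. Below is a minimal sketch (plain strings stand in for Rich renderables; the function names are illustrative, not the actual `tui.py` API): the buggy variant drops the accumulated thinking-block renderables, while the fixed variant spreads them into the merged output.

```python
# Sketch of the early-return regression: items accumulated before the
# interrupted check must be included in the interrupted branch too.

def render_buggy(renderables, content, interrupted):
    # Thinking blocks were collected into `renderables` earlier...
    if interrupted:
        # ...but this early return silently drops them.
        return [content, "⚠ Interrupted by user"]
    return [*renderables, content]

def render_fixed(renderables, content, interrupted):
    if interrupted:
        # Spread the collected renderables into the merged output.
        return [*renderables, content, "⚠ Interrupted by user"]
    return [*renderables, content]

thinking = ["[thinking] plan the fix"]
print(render_buggy(thinking, "partial answer", True))   # thinking block lost
print(render_fixed(thinking, "partial answer", True))   # thinking block kept
```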
Pull request overview
Adds real-time agent “what’s happening now” status messages to the TUI during execution, and hardens Rich Text merging to avoid span range crashes while also improving display of model “thinking” blocks.
Changes:
- Propagate per-agent live status/system messages via `Tracer.update_agent_system_message()` and display them in the TUI status line.
- Fix Rich `Text` span out-of-bounds issues by sanitizing spans when merging/embedding renderables.
- Improve chat rendering by supporting "thinking blocks" (and fixing multi-line thought indentation).
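The span-sanitizing fix can be illustrated with a small sketch. This uses plain `(start, end, style)` tuples rather than Rich's `Span` objects, and the function name is hypothetical, not the actual helper in `tui.py`: the idea is to clamp every span to the merged text length and drop spans that become empty.

```python
# Hedged sketch: clamp (start, end, style) spans to the text length so the
# renderer never sees an out-of-range span after merging renderables.

def sanitize_spans(spans, text_len):
    cleaned = []
    for start, end, style in spans:
        start = max(0, min(start, text_len))
        end = max(start, min(end, text_len))
        if end > start:  # drop empty or inverted spans
            cleaned.append((start, end, style))
    return cleaned

spans = [(0, 5, "bold"), (3, 50, "dim"), (40, 60, "red")]
print(sanitize_spans(spans, 10))  # → [(0, 5, 'bold'), (3, 10, 'dim')]
```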
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `strix/agents/base_agent.py` | Emits additional lifecycle system messages (sandbox setup, thinking, tool execution, response processing). |
| `strix/interface/tool_components/thinking_renderer.py` | Adjusts thought rendering to indent multi-line thoughts. |
| `strix/interface/tui.py` | Displays live system messages, sanitizes Text spans to prevent crashes, and renders thinking blocks in history. |
| `strix/llm/llm.py` | Emits LLM lifecycle system messages (memory compression / provider wait / first token). |
| `strix/telemetry/tracer.py` | Stores `system_message` per agent and exposes an update method for it. |
strix/interface/tool_components/thinking_renderer.py

```
if thought:
    text.append(thought, style="italic dim")
    indented_thought = "\n ".join(thought.split("\n"))
```
`thought.split("\n")` won't handle Windows newlines (`\r\n`) cleanly and can leave stray `\r` characters in the output. Using `thought.splitlines()` would normalize newline handling and match patterns used elsewhere in the interface renderers.
```suggestion
indented_thought = "\n ".join(thought.splitlines())
```
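The difference is easy to demonstrate. On Windows-style input, `split("\n")` leaves a trailing `\r` on each segment, while `splitlines()` strips all newline variants before the join re-indents:

```python
# split("\n") vs splitlines() on Windows-style (\r\n) input.
thought = "first line\r\nsecond line"

with_split = "\n  ".join(thought.split("\n"))       # stray \r survives
with_splitlines = "\n  ".join(thought.splitlines())  # clean output

print(repr(with_split))       # 'first line\r\n  second line'
print(repr(with_splitlines))  # 'first line\n  second line'
```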
strix/interface/tui.py

```
if "thinking_blocks" in metadata and metadata["thinking_blocks"]:
    from strix.interface.tool_components.thinking_renderer import ThinkRenderer

    for block in metadata["thinking_blocks"]:
```
This renderer expects `thinking_blocks` under `msg_data["metadata"]`, but `Tracer.log_chat_message(...)` is typically called without metadata for normal assistant responses (e.g., BaseAgent logs assistant messages without attaching `thinking_blocks`). That means these blocks likely won't render in practice. Consider propagating `thinking_blocks` into tracer chat-message metadata when logging assistant messages, or adjust the TUI to also read `thinking_blocks` from the message root if that's where they're stored elsewhere.
```suggestion
# Prefer thinking_blocks from metadata, but fall back to root-level key
thinking_blocks = metadata.get("thinking_blocks") or msg_data.get("thinking_blocks")
if thinking_blocks:
    from strix.interface.tool_components.thinking_renderer import ThinkRenderer
    for block in thinking_blocks:
```
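The fallback lookup suggested above can be sketched in isolation. The `msg_data` shapes here are illustrative examples, not the exact tracer payloads:

```python
# Sketch of the metadata-first, root-fallback lookup for thinking_blocks.

def get_thinking_blocks(msg_data):
    metadata = msg_data.get("metadata") or {}
    # Prefer metadata; fall back to the message root; default to empty.
    return metadata.get("thinking_blocks") or msg_data.get("thinking_blocks") or []

print(get_thinking_blocks({"metadata": {"thinking_blocks": [{"thinking": "a"}]}}))
print(get_thinking_blocks({"thinking_blocks": [{"thinking": "b"}]}))  # root-level
print(get_thinking_blocks({"metadata": {}}))  # → []
```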
Add real-time status messages to the TUI showing what each agent is
doing at any given moment.
Status messages shown:
- 'Compressing memory...' during conversation history preparation
- 'Waiting for LLM provider...' during API call setup
- 'Generating response...' after first chunk received
- 'Executing {tool1}, {tool2} +N more...' during tool execution
- 'Setting up sandbox environment...' during sandbox init
Also renders thinking blocks in chat history from metadata and fixes
indented thought display for multi-line thoughts in ThinkRenderer.
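The "Executing {tool1}, {tool2} +N more..." label from the list above can be sketched as follows; this is a hypothetical helper, and the actual formatting in `base_agent.py` may differ:

```python
# Hedged sketch of the tool-execution status label: show the first few
# tool names and collapse the rest into "+N more".

def tool_status(tools, shown=2):
    if not tools:
        return "Executing tools..."
    head = ", ".join(tools[:shown])
    extra = len(tools) - shown
    if extra > 0:
        return f"Executing {head} +{extra} more..."
    return f"Executing {head}..."

print(tool_status(["bash"]))                         # Executing bash...
print(tool_status(["bash", "grep", "curl", "sed"]))  # Executing bash, grep +2 more...
```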
Force-pushed 62677ca to b9474d5
Force-pushed 2533d7b to 7a4c008
Summary
Add real-time status messages to the TUI showing what each agent is doing at any given moment. Previously agents only showed "Initializing" or a generic sweep animation.
Changes
- Add `update_agent_system_message()` to Tracer for status propagation

Files Changed
- `strix/agents/base_agent.py` (+21)
- `strix/interface/tool_components/thinking_renderer.py` (+2/-1)
- `strix/interface/tui.py` (+63/-13)
- `strix/llm/llm.py` (+19/-2)
- `strix/telemetry/tracer.py` (+6)

Split from #328.