In deep agent workflows, each sub‐agent automatically performs an LLM
step to summarize its tool calls before returning to its parent. This
leads to:
1. Excessive latency: every nested agent invokes the LLM, compounding
delays.
2. Loss of raw tool data: summaries may strip out details the top‐level
agent needs.
We discovered that `Agent.as_tool(...)` already accepts an
(undocumented) `custom_output_extractor` parameter. By providing a
callback, a parent agent can override what the sub-agent returns, e.g. hand back raw tool outputs or a custom slice, so that only the final agent does summarization.
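As a rough illustration of the raw-passthrough case, the extractor below concatenates the sub-agent's tool outputs instead of its LLM summary. This is a minimal sketch, not part of the PR: the `return_raw_tool_outputs` function, the `get_raw_data` tool name, and the `data_agent` definition are illustrative.

```python
from agents import Agent, RunResult, ToolCallOutputItem

# Illustrative sub-agent; in practice this is whatever agent you wrap as a tool.
data_agent = Agent(
    name="Data agent",
    instructions="Fetch data with your tools and summarize it.",
)

async def return_raw_tool_outputs(run_result: RunResult) -> str:
    # Pass through the raw output of every tool call the sub-agent made,
    # skipping the sub-agent's own LLM summary entirely.
    outputs = [
        str(item.output)
        for item in run_result.new_items
        if isinstance(item, ToolCallOutputItem)
    ]
    return "\n\n".join(outputs)

raw_tool = data_agent.as_tool(
    tool_name="get_raw_data",
    tool_description="Run the data agent and return its raw tool outputs",
    custom_output_extractor=return_raw_tool_outputs,
)
```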
---
This PR adds a “Custom output extraction” section to the Markdown docs
under “Agents as tools,” with a minimal code example.
```python
async def extract_json_payload(run_result: RunResult) -> str:
    # Scan the agent's outputs in reverse order until we find a JSON-like message from a tool call.
    for item in reversed(run_result.new_items):
        if isinstance(item, ToolCallOutputItem) and item.output.strip().startswith("{"):
            return item.output.strip()
    # Fallback to an empty JSON object if nothing was found
    return "{}"

json_tool = data_agent.as_tool(
    tool_name="get_data_json",
    tool_description="Run the data agent and return only its JSON payload",
    custom_output_extractor=extract_json_payload,
)
```
## Handling errors in function tools
When you create a function tool via `@function_tool`, you can pass a `failure_error_function`. This is a function that provides an error response to the LLM in case the tool call crashes.
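For example, a minimal sketch of wiring up a custom handler; the `notify_llm_of_failure` function and the `fetch_report` tool are illustrative, not from this PR:

```python
from agents import RunContextWrapper, function_tool

def notify_llm_of_failure(ctx: RunContextWrapper, error: Exception) -> str:
    # The string returned here is what the LLM sees instead of a raised exception.
    return f"fetch_report failed: {error}. Try again with a different report id."

@function_tool(failure_error_function=notify_llm_of_failure)
def fetch_report(report_id: str) -> str:
    """Fetch a report by id; may raise if the id is unknown."""
    raise ValueError(f"no report with id {report_id}")
```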