Context easily exceeds the model's token limit — can the ADK's context management be improved? #3369
Replies: 4 comments
- Can you be more specific about your question? For example, what is the setup, and what models/tools are you using?
- Moving this to discussions, as it's a question.
- @adk-bot to help answer.
- **Response from ADK Answering Agent** (experimental, answer may be inaccurate)

  **TL;DR:** You can manage token usage by controlling agent history, using session state for inter-agent communication, creating specialized stateless agents, and leveraging context caching. ADK also provides tools to track token usage for precise optimization.

  Hi @tangguibiao,

  Thank you for your feedback. Managing context length and token consumption is indeed a critical aspect of building complex multi-agent systems. The ADK provides several mechanisms to help you control and optimize this. Here are some strategies that might help address the issues you're encountering.

  **1. Control agent history.** By default, an `LlmAgent` receives the full conversation history on every turn. For specialized agents that don't need it, set `include_contents="none"` to create a stateless agent:

  ```python
  from google.adk.agents import LlmAgent

  # This agent will not receive the full conversation history
  stateless_agent = LlmAgent(
      name="stateless_agent",
      model="gemini-1.5-flash-001",
      include_contents="none",
      # ... other parameters
  )
  ```

  **2. Use session state for inter-agent communication.** In a multi-agent workflow, instead of passing large amounts of text between agents (which adds to the context window), you can use the shared session state. For example, an agent's `output_key` writes its result to `session.state`, where downstream agents can read it without carrying the full transcript.

  **3. Leverage context caching.** ADK includes a context caching feature that can help reduce token usage on repeated requests. By enabling it, stable prompt prefixes (such as long system instructions) can be reused across requests rather than reprocessed in full each time.

  **4. Track token usage.** To better understand and debug where your tokens are being consumed, you can track token usage for each turn. The ADK exposes `usage_metadata` on response events, including prompt and total token counts. You can iterate over the event stream to access this data:

  ```python
  async def handle_events(events):
      async for event in events:
          # Events that carry usage metadata expose per-turn token counts.
          if event.usage_metadata is not None:
              print(f"Total tokens used: {event.usage_metadata.total_token_count}")
  ```

  We hope these suggestions help you better manage context and token usage in your multi-agent system. We are continuously working on improving context management features and appreciate your valuable feedback.

  [1] https://google.github.io/adk-docs/agents/llm-agents/
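  The session-state idea in point 2 can be sketched in plain Python. This is a hedged, framework-agnostic illustration only: `run_agent`, `state`, and `output_key` below are hypothetical stand-ins for ADK's agent invocation, the shared `session.state` dict, and the `LlmAgent` `output_key` parameter.

  ```python
  # Framework-agnostic sketch of the shared-state pattern.
  # `run_agent`, `state`, and `output_key` are stand-ins, not ADK APIs.

  def run_agent(state: dict, prompt: str, output_key: str) -> None:
      """Stand-in for an LLM agent call: writes its result into shared state."""
      # A real agent would call the model here; we fake a short summary.
      state[output_key] = prompt[:50]

  state: dict = {}
  long_document = "A very long source document. " * 200

  # Agent 1 condenses the document and stores only the summary in state.
  run_agent(state, long_document, output_key="summary")

  # Agent 2's prompt is built from the short summary in state, not the
  # full document, keeping its context window (and token count) small.
  downstream_prompt = f"Analyze this summary: {state['summary']}"
  ```

  The point of the design is that each downstream agent's input grows with the size of the stored summary, not with the size of the full conversation or document.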
- I'm developing a multi-agent system similar to "Manus" and am encountering issues where the model's tool-calling degrades, primarily due to context. We eagerly hope the official team can improve the ADK's context-related methods. Thank you.