
Conversation

@fengju0213
Collaborator

@fengju0213 fengju0213 commented Sep 30, 2025

Description

  • Context creator now preserves the system message and returns the remaining history strictly by
    timestamp, removing all per-message token bookkeeping.

    • Token-limit handling captures the full context, rolls back recent tool-call chains when
      necessary, swaps in an assistant summary, and records summary state (depth, new-record counts,
      last user input). This state blocks back-to-back summaries with negligible progress and caps
      retries at three, preventing summarize→retry loops—even when the immediate overflow comes from a
      tool call.
    • Passing token_limit into ChatAgent now logs a deprecation warning and ignores the value; callers
      should control limits through model backend configuration.

    Pending work:
    1. Clean up the modules related to token_limit and the token counter.
    2. Add unit tests that cover input strings capable of triggering the token-limit path.
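The summary-state guard described above could be sketched roughly as follows; SummaryState, SUMMARY_MAX_DEPTH, and MIN_NEW_RECORDS are illustrative names and thresholds, not the actual CAMEL implementation:

```python
# Rough sketch of the summary-state guard: block back-to-back summaries
# with negligible progress and cap retries at three. All names and
# thresholds here are assumptions for illustration only.
from dataclasses import dataclass

SUMMARY_MAX_DEPTH = 3   # assumed retry cap
MIN_NEW_RECORDS = 2     # assumed "negligible progress" threshold

@dataclass
class SummaryState:
    depth: int = 0
    new_records_since_summary: int = 0

    def may_summarize(self) -> bool:
        if self.depth >= SUMMARY_MAX_DEPTH:
            return False  # retry cap reached: stop summarize->retry loops
        if self.depth > 0 and self.new_records_since_summary < MIN_NEW_RECORDS:
            return False  # barely any new records since the last summary
        return True

    def record_summary(self) -> None:
        self.depth += 1
        self.new_records_since_summary = 0
```

A summarize→retry loop would then be broken either by the depth cap or by the negligible-progress check, even when the overflow comes from a tool call.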

Checklist

Go over all the following points, and put an x in all the boxes that apply.

  • I have read the CONTRIBUTION guide (required)
  • I have linked this PR to an issue using the Development section on the right sidebar or by adding Fixes #issue-number in the PR description (required)
  • I have checked if any dependencies need to be added or updated in pyproject.toml and uv lock
  • I have updated the tests accordingly (required for a bug fix or a new feature)
  • I have updated the documentation if needed:
  • I have added examples if this is a new feature

If you are unsure about any of these, don't hesitate to ask. We are here to help!

@coderabbitai
Contributor

coderabbitai bot commented Sep 30, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@Wendong-Fan Wendong-Fan marked this pull request as draft October 5, 2025 09:07
@Wendong-Fan
Member

Converting to draft as this PR hasn't been updated with the approach we discussed.

@fengju0213 fengju0213 changed the title feat: retry when tokenlimit and enhance chunking logic feat: summarize when tokenlimit Oct 14, 2025

@Wendong-Fan Wendong-Fan marked this pull request as ready for review October 14, 2025 08:32
@hesamsheikh
Collaborator

Thanks for the thorough PR @fengju0213. I wonder if the new implementation would cause inconsistencies and fragmentation of the summarization logic already implemented in two places.

  1. ChatAgent.summarize(), which the user can call explicitly to save the summarization:
summary_result = agent.summarize(filename="meeting_notes")
# Returns: {"summary": str, "file_path": str, "status": str}

This is mostly used to create workflow.md files of a session.

  2. ContextSummarizerToolkit uses "USER" as the role_name, whereas here "ASSISTANT" is used.

@fengju0213
Collaborator Author

Thanks for the thorough PR @fengju0213. I wonder if the new implementation would cause inconsistencies and fragmentation of the summarization logic already implemented in two places.

  1. ChatAgent.summarize(), which the user can call explicitly to save the summarization:
summary_result = agent.summarize(filename="meeting_notes")
# Returns: {"summary": str, "file_path": str, "status": str}

This is mostly used to create workflow.md files of a session.

  2. ContextSummarizerToolkit uses "USER" as the role_name, whereas here "ASSISTANT" is used.

Thanks for reviewing, @hesamsheikh! We can probably unify the role_name naming later. For now, could you help review and summarize the current logic for handling the token limit?

Collaborator

@MuggleJinx MuggleJinx left a comment

Thanks @fengju0213!! Left some comments. Maybe we should also consider adding a test case for when SUMMARY_MAX_DEPTH is reached.
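A minimal test along these lines might look like the following sketch; the guard function here is a hypothetical stand-in for the real depth check, not the actual API:

```python
# Hypothetical stand-in for the depth guard, just to show the shape of
# the suggested SUMMARY_MAX_DEPTH test case.
SUMMARY_MAX_DEPTH = 3  # assumed value

def should_summarize(depth: int) -> bool:
    # Refuse another summarization pass once the depth cap is reached.
    return depth < SUMMARY_MAX_DEPTH

def test_summary_max_depth_reached():
    assert should_summarize(SUMMARY_MAX_DEPTH - 1)
    assert not should_summarize(SUMMARY_MAX_DEPTH)
```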

@fengju0213 fengju0213 force-pushed the enhance_update_memory branch from b1885f3 to bc69be7 Compare October 21, 2025 09:53
@Wendong-Fan
Member

Wendong-Fan commented Oct 21, 2025

token counter

context: 1,000,000 tokens

compress to context/2 -> 500,000 tokens

compress -> summarized memory (100,000 tokens)
....

(1,000,000 - 100,000) / 2 + 100,000 -> 550,000

compress -> summarized memory (100,000 tokens)

(1,000,000 - 100,000 - 100,000) / 2 + 100,000 + 100,000 -> 600,000
....

need to set a minimum value for content compression
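The arithmetic above can be sketched as a short simulation (assuming a 1,000,000-token window and a fixed 100,000-token summary per pass): each compression frees 50,000 fewer tokens than the previous one, which is why a minimum compression target is needed.

```python
# Simulation of repeated half-compression with a fixed summary size.
# Numbers are the assumed ones from the comment above.
LIMIT = 1_000_000   # context window in tokens
SUMMARY = 100_000   # tokens each summary occupies

summaries = 0
sizes = []
for _ in range(5):
    fresh = (LIMIT - summaries) // 2  # half of the non-summary space is kept
    sizes.append(fresh + summaries)   # context size after this compression
    summaries += SUMMARY              # each pass leaves another summary behind

print(sizes)  # -> [500000, 550000, 600000, 650000, 700000]
```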

@fengju0213 fengju0213 force-pushed the enhance_update_memory branch from 97c5376 to bc69be7 Compare October 21, 2025 10:21
@fengju0213 fengju0213 requested a review from MuggleJinx October 22, 2025 04:05
Collaborator

@hesamsheikh hesamsheikh left a comment

Thanks for the PR @fengju0213. I have added a few comments. As @MuggleJinx mentioned, I also think a few more test cases are a good idea.

Comment on lines +1021 to +1023
summary_msg = BaseMessage.make_assistant_message(
    role_name="Assistant", content=content
)
Collaborator

Here Assistant is used as the role_name, while in ContextSummarizerToolkit the USER role is used. Are you sure adding the summary as an Assistant message is the best approach?

Collaborator Author

Actually, I haven't compared the two methods in detail. Do you have any suggestions?

# Add new summary
new_summary_msg = BaseMessage.make_assistant_message(
    role_name="Assistant",
    content=summary_with_prefix + " " + summary_status,
Collaborator

Why is summary_status added directly to the summary? Seems like a bug.

Collaborator Author

Will fix; it should be file_path.

for msg in messages:
    content = msg.get('content', '')
    if isinstance(content, str) and content.startswith(
        '[CAMEL_SUMMARY]'
Collaborator

I don't think [CAMEL_SUMMARY] alone is a good semantic prefix for the agent's summary.

Collaborator Author

How about CONTEXT_SUMMARY?

Collaborator

If we want to actually send this to the LLM, it must be more semantic, explaining what this summary is and how to treat it. For example: "The following is a summary of our conversation from a previous session, ...."
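For illustration, a more self-describing prefix might look like this sketch; the constant name and wording are hypothetical:

```python
# Hypothetical replacement for the bare '[CAMEL_SUMMARY]' marker: tell the
# model what the text is and how it should be treated.
CONTEXT_SUMMARY_PREFIX = (
    "[CONTEXT_SUMMARY] The following is a summary of the earlier part of "
    "this conversation, inserted because the context window was exceeded. "
    "Treat it as established context rather than a new request:"
)

def wrap_summary(summary: str) -> str:
    # Prepend the explanatory prefix to the generated summary text.
    return f"{CONTEXT_SUMMARY_PREFIX}\n{summary}"
```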

f"Performing full compression."
)
# Summarize everything (including summaries)
summary = self.summarize(include_summaries=True)
Collaborator

It's actually best we don't rely on the default prompt, which is too generic. Maybe mentioning why this summarization is happening would be a good idea. Especially when using ASSISTANT as the role, the tone must be adjusted, and the default prompt doesn't do that.

I ran a simple script to see how the summarization turns out.

CONTEXT BEFORE SUMMARIZATION (13 messages, 525 tokens)

Message 1 [SYSTEM]: You are a helpful assistant.
Message 2 [USER]: Hi! I'm working on a Python project and need help with data analysis.
Message 3 [ASSISTANT]: Hello! I'd be happy to help you with data analysis in Python...
Message 4 [USER]: I have a CSV file with customer sales data...
Message 5 [ASSISTANT]: Great! That's a common dataset structure. To analyze this data, we can use pandas...
Message 6 [USER]: Ok loaded it. Now I need to find the top 5 customers by total revenue.
Message 7 [ASSISTANT]: Perfect! To find the top 5 customers by revenue, we need to: 1. Calculate revenue...
Message 8 [USER]: That worked! But I noticed some customer IDs appear multiple times. Is that normal?
Message 9 [ASSISTANT]: Yes, that's completely normal! Each row represents a single transaction...
Message 10 [USER]: Makes sense. Can you also help me create a bar chart of the top 5 customers?
Message 11 [ASSISTANT]: Absolutely! We can use matplotlib for that...
Message 12 [USER]: Perfect! One more thing - how do I save this chart to a file?
Message 13 [ASSISTANT]: You can save the plot using plt.savefig() before plt.show()...

CONTEXT AFTER SUMMARIZATION (3 messages, 228 tokens)

Message 1 [SYSTEM]:
You are a helpful assistant.

Message 2 [ASSISTANT]:
[CAMEL_SUMMARY] - Dataset: CSV file with columns: customer_id, product_name, quantity, price

  • Tools used: pandas for data manipulation, matplotlib for visualization
  • Key actions:
    • Loaded CSV data using pd.read_csv()
    • Calculated revenue per row as quantity * price
    • Grouped data by customer_id and summed revenue to find total revenue per customer
    • Sorted and selected top 5 customers by total revenue
    • Created a bar chart to visualize top 5 customers' revenue using matplotlib
    • Saved the bar chart to a PNG file with high resolution and proper layout
  • Clarification: Multiple entries per customer_id are expected as each row is a separate transaction
  • Next steps / Action items:
    • Use provided code snippets to perform analysis and visualization
    • Save charts as image files for reporting or presentation success
      ^^^^^^^ BUG!

Message 3 [USER]:
Thanks! Can you also show me how to add a legend to the chart?

@Wendong-Fan Wendong-Fan added the Waiting for Update PR has been reviewed, need to be updated based on review comment label Oct 23, 2025
@hesamsheikh
Collaborator

Hey @fengju0213, just as a reference that might be helpful: I ran Claude Code until it summarized the full context, and here is how it looks:

This session is being continued from a previous conversation that ran out of context. The conversation is summarized below:
Analysis:
Let me chronologically analyze this conversation about improving the xxx:

Summary:
1. **Primary Request and Intent**:
   [list]

2. **Key Technical Concepts**:
   [list]

3. **Files and Code Sections**:
[list]

4. **Errors and Fixes**:
[list]

5. **Problem Solving**:
   [list]

6. **All User Messages**:
   - "in the landing page put the brain more to the right, and space out the two bands a bit more to make room for the left card"
   - "the bands and the left card are not symmetric, the bottom band is too close to it"
   - "as the mouse moves, the position of bands changes too. don't do that."
   - etc

7. **Pending Tasks**:
   - None explicitly stated

8. **Current Work**:
   

9. **Optional Next Step**:
   xxxx
Please continue the conversation from where we left it off without asking the user any further questions. Continue with the last task that you were asked to work on.

I think there are helpful patterns here we can learn from. Moreover, if you look at the tone of the summary (like "Continue with the last task..."), you'll notice the role cannot be Assistant here. I think the best approach would be to append the summary to the system message and wipe the memory.
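That approach could look roughly like the following sketch; the function name and wording are hypothetical, not the CAMEL API:

```python
# Hypothetical sketch: fold the summary into the system message and wipe
# the rest of the memory, instead of re-inserting it as an assistant message.
def fold_summary_into_system(system_prompt: str, summary: str) -> str:
    # Return an augmented system prompt carrying the conversation summary.
    return (
        f"{system_prompt}\n\n"
        "This session continues a previous conversation that ran out of "
        "context. Summary of that conversation:\n"
        f"{summary}\n\n"
        "Continue from where the conversation left off without asking the "
        "user any further questions."
    )
```

The memory would then be cleared and re-seeded with only the system message, so the summary never competes with a USER or ASSISTANT role.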
