[Draft] Add Example ChromadbMemory in Extensions #5308
base: main
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #5308      +/-  ##
==========================================
+ Coverage   70.59%   70.65%   +0.05%
==========================================
  Files         180      181       +1
  Lines       11668    11804     +136
==========================================
+ Hits         8237     8340     +103
- Misses       3431     3464      +33
```
@rickyloynd-microsoft and I discussed this PR and PR #5227. As a user I need: We tried the

cc: @ekzhu @jackgerrits
Here is the script we tried. cc @rickyloynd-microsoft

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import (
    ListMemory,
    MemoryContent,
    MemoryMimeType,
    MemoryQueryResult,
    UpdateContextResult,
)
from autogen_core.model_context import ChatCompletionContext
from autogen_core.models import SystemMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


class CustomListMemory(ListMemory):
    async def update_context(
        self,
        model_context: ChatCompletionContext,
    ) -> UpdateContextResult:
        """Update the model context by appending memory content.

        This method mutates the provided model_context by adding all memories
        as a SystemMessage.

        Args:
            model_context: The context to update. Will be mutated if memories exist.

        Returns:
            UpdateContextResult containing the memories that were added to the context.
        """
        if not self._contents:
            return UpdateContextResult(memories=MemoryQueryResult(results=[]))
        # Parenthesize the conditional so the memory content is kept even when
        # a memory has no metadata (without the parentheses, the conditional
        # applies to the whole concatenation and drops the content too).
        memory_strings = [
            f"{i}. {memory.content}{memory.metadata['plan'] if memory.metadata else ''}"
            for i, memory in enumerate(self._contents, 1)
        ]
        if memory_strings:
            memory_context = (
                "\nRelevant memory content (in chronological order):\n"
                + "\n".join(memory_strings)
                + "\n"
            )
            await model_context.add_message(SystemMessage(content=memory_context))
        return UpdateContextResult(memories=MemoryQueryResult(results=self._contents))


async def main():
    # Initialize user memory.
    # user_memory = ListMemory()
    user_memory = CustomListMemory()

    # Add user preferences to memory.
    await user_memory.add(
        MemoryContent(
            content="plan for ordering peripherals from favorite website",
            metadata={"plan": "go to amazon.com > search for peripherals > order"},
            mime_type=MemoryMimeType.TEXT,
        )
    )
    await user_memory.add(
        MemoryContent(
            content="order meal from grubhub",
            metadata={"plan": "go to grubhub.com > order meal"},
            mime_type=MemoryMimeType.TEXT,
        )
    )

    assistant_agent = AssistantAgent(
        name="assistant_agent",
        model_client=OpenAIChatCompletionClient(
            model="gpt-4o-2024-08-06",
        ),
        memory=[user_memory],
    )

    # Run the agent with a task.
    stream = assistant_agent.run_stream(
        task="Create a plan to order a keyboard from my favorite website?"
    )
    await Console(stream)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
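One subtlety in the list comprehension that builds `memory_strings`: a Python conditional expression binds looser than `+`, so writing `str(memory.content) + str(memory.metadata['plan']) if memory.metadata else ''` silently drops the content itself whenever `metadata` is missing. A minimal standalone illustration of the pitfall (plain strings, no autogen needed):

```python
# Standalone illustration of the conditional-expression precedence pitfall
# in the memory_strings comprehension above.
content = "order meal from grubhub"
metadata = None  # a memory with no metadata

# Without parentheses, the conditional covers the whole concatenation,
# so the content is silently dropped when metadata is falsy:
broken = str(content) + str(metadata["plan"]) if metadata else ""

# Parenthesizing only the metadata part keeps the content either way:
fixed = str(content) + (str(metadata["plan"]) if metadata else "")

print(repr(broken))  # ''
print(repr(fixed))   # 'order meal from grubhub'
```

Note that the broken form does not even raise on the `None` metadata, because the true branch is never evaluated; the bug only shows up as missing memory text in the injected SystemMessage.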
@gagb, thanks for the excellent example above - it is indeed in the spirit of the Memory interface. It would be great to figure out what is needed to cover the functionality that is left. Is your question on consistency related to when we use the

What do you think?
Yeah, I agree with you. I think it would be great if Ricky's PR used this interface, and if that is not possible there needs to be a clear reason.
Why are these changes needed?

Shows an example of how to use the `Memory` interface to implement a just-in-time vector memory based on chromadb.

Related issue number

Checks
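The ChromadbMemory implementation itself is not shown in this thread, but the "just-in-time vector memory" pattern the description refers to can be sketched independently: store entries with embeddings, and at context-update time retrieve only the entries most similar to the current task rather than injecting everything. The sketch below is purely illustrative - the class and function names are hypothetical, and a toy bag-of-words embedding with an in-memory list stands in for chromadb and the real `Memory` API:

```python
# Hypothetical sketch of just-in-time vector retrieval. A real implementation
# would use chromadb collections and a sentence-embedding model; here a toy
# bag-of-words embedding and an in-memory list stand in for both.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class ToyVectorMemory:
    def __init__(self, k: int = 1):
        self.k = k
        self.entries: list[tuple[str, Counter]] = []

    def add(self, content: str) -> None:
        self.entries.append((content, embed(content)))

    def query(self, task: str) -> list[str]:
        # Just-in-time retrieval: rank stored memories against the task
        # and return only the top-k, instead of injecting all of them
        # into the model context.
        task_vec = embed(task)
        scored = sorted(
            self.entries,
            key=lambda entry: cosine(task_vec, entry[1]),
            reverse=True,
        )
        return [content for content, _ in scored[: self.k]]


memory = ToyVectorMemory(k=1)
memory.add("go to amazon.com > search for peripherals > order")
memory.add("go to grubhub.com > order meal")

print(memory.query("order a keyboard from amazon.com"))
# ['go to amazon.com > search for peripherals > order']
```

The point of contrast with the `CustomListMemory` in the script above is the `query` step: a list memory appends every stored item to the context on each turn, while a vector memory scores items against the incoming task and injects only the relevant ones.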