
Conversation

@acivan commented Oct 18, 2025

This pull request introduces a new example for using Memori with a custom OpenAI client and improves the handling of internal agent calls by switching from pattern-based detection in message content to a more reliable metadata flag approach. It also consistently tags internal memory processing calls with a specific metadata flag across memory and retrieval agents.

Key changes:

New Example:

  • Added examples/supported_llms/openai_custom_example.py, which demonstrates how to integrate Memori with a custom OpenAI client, including environment variable setup, memory tracking, and interactive chat.
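The example file itself is not reproduced here, but its environment-variable setup can be sketched roughly as follows. The helper name `resolve_openai_config` is illustrative rather than taken from the example; `OPENAI_API_KEY` and `OPENAI_BASE_URL` are the conventional variable names a custom OpenAI client typically reads.

```python
import os


def resolve_openai_config(env=os.environ):
    """Collect custom OpenAI client settings from the environment.

    OPENAI_API_KEY is required; OPENAI_BASE_URL lets the example point
    the client at a proxy or self-hosted endpoint. (Hypothetical helper,
    not the PR's exact code.)
    """
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return {
        "api_key": api_key,
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }
```

The resulting dict would then be unpacked into the client constructor (e.g. `OpenAI(**resolve_openai_config())`), and that client handed to Memori so its conversations are tracked.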

Internal Call Detection Improvements:

  • Updated _is_internal_agent_call in memori/integrations/openai_integration.py to check for internal processing using the metadata field instead of searching for patterns in message content, reducing false positives and improving reliability.
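The difference between the two strategies can be sketched as below. The function names and the `purpose` metadata key are illustrative assumptions; only the "INTERNAL_MEMORY_PROCESSING" flag value comes from the PR.

```python
INTERNAL_FLAG = "INTERNAL_MEMORY_PROCESSING"


def is_internal_call_by_pattern(messages):
    # Old approach: scan message content for marker phrases.
    # Fragile -- a user message that happens to contain the phrase
    # is misclassified as internal (false positive).
    return any(
        "memory processing" in (m.get("content") or "").lower()
        for m in messages
    )


def is_internal_call_by_metadata(kwargs):
    # New approach: check an explicit metadata flag that only Memori's
    # own agents set, so user content can never trigger it.
    metadata = kwargs.get("metadata") or {}
    return metadata.get("purpose") == INTERNAL_FLAG
```

A chat request whose content merely mentions memory processing passes the metadata check untouched, which is the false-positive class the PR eliminates.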

Consistent Internal Metadata Tagging:

  • Added the "INTERNAL_MEMORY_PROCESSING" metadata tag to all internal memory processing calls in both MemoryAgent (memori/agents/memory_agent.py) and RetrievalAgent (memori/agents/retrieval_agent.py), ensuring these calls are properly identified as internal.
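How the agents might attach that tag before each of their own LLM calls can be sketched as follows; the helper name and the `purpose` key are assumptions, and only the flag value is from the PR.

```python
INTERNAL_FLAG = "INTERNAL_MEMORY_PROCESSING"


def tag_internal(request_kwargs):
    """Return a copy of the request kwargs with the internal flag set.

    The agents would apply this to every memory/retrieval completion
    request so the OpenAI integration can recognize and skip recording
    it, breaking the record -> process -> record loop. (Hypothetical
    helper, not the PR's exact code.)
    """
    metadata = dict(request_kwargs.get("metadata") or {})
    metadata["purpose"] = INTERNAL_FLAG
    # Copy rather than mutate, so the caller's kwargs stay untouched.
    return {**request_kwargs, "metadata": metadata}
```

Tagging at the call site (rather than inferring from content later) is what makes the detection in the integration layer deterministic.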

@harshalmore31 (Collaborator)

@acivan Thanks for the contribution, this is a more robust architectural solution to the infinite-loop problem. If possible, can you adapt the test cases from #115 to verify the metadata-based approach in this PR? That would give us both the proper fix and thorough test coverage.
