This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Graphiti is a Python framework for building temporally-aware knowledge graphs designed for AI agents. It enables real-time incremental updates to knowledge graphs without batch recomputation, making it suitable for dynamic environments.
Key features:
- Bi-temporal data model with explicit tracking of event occurrence times
- Hybrid retrieval combining semantic embeddings, keyword search (BM25), and graph traversal
- Support for custom entity definitions via Pydantic models
- Integration with Neo4j and FalkorDB as graph storage backends
- Optional OpenTelemetry distributed tracing support
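Custom entity definitions can be sketched with plain Pydantic models; the `Preference` type and its fields below are hypothetical examples, not part of Graphiti's shipped schema:

```python
from pydantic import BaseModel, Field


class Preference(BaseModel):
    """Hypothetical custom entity type handed to Graphiti's extractor."""

    category: str = Field(..., description='Area the preference applies to, e.g. food or music')
    strength: str = Field(default='moderate', description='How strongly the preference is held')
```

A mapping of such models (e.g. `{'Preference': Preference}`) is typically supplied alongside episodes so the LLM extraction prompts know which attributes to populate; check the Graphiti documentation for the exact parameter name on the ingestion call.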
```bash
# Install dependencies
uv sync --extra dev

# Format code (ruff import sorting + formatting)
make format

# Lint code (ruff + pyright type checking)
make lint

# Run tests
make test

# Run all checks (format, lint, test)
make check
```
```bash
cd server/

# Install server dependencies
uv sync --extra dev

# Run server in development mode
uvicorn graph_service.main:app --reload

# Format, lint, test server code
make format
make lint
make test
```
```bash
cd mcp_server/

# Install MCP server dependencies
uv sync

# Run with Docker Compose
docker-compose up
```

- Main Entry Point: `graphiti.py` - Contains the main `Graphiti` class that orchestrates all functionality
- Graph Storage: `driver/` - Database drivers for Neo4j and FalkorDB
- LLM Integration: `llm_client/` - Clients for OpenAI, Anthropic, Gemini, Groq
- Embeddings: `embedder/` - Embedding clients for various providers
- Graph Elements: `nodes.py`, `edges.py` - Core graph data structures
- Search: `search/` - Hybrid search implementation with configurable strategies
- Prompts: `prompts/` - LLM prompts for entity extraction, deduplication, summarization
- Utilities: `utils/` - Maintenance operations, bulk processing, datetime handling
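The hybrid search in `search/` fuses ranked result lists from the different retrieval methods (semantic embeddings, BM25, graph traversal). One common fusion strategy for this is reciprocal rank fusion; the sketch below is illustrative only and is not Graphiti's actual implementation:

```python
from collections import defaultdict


def rrf(result_lists, k=60):
    """Fuse several ranked result lists by reciprocal rank fusion:
    each item's score is the sum of 1 / (k + rank) over the lists it
    appears in, so items ranked highly by multiple retrievers win."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, item in enumerate(results, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

An item that appears near the top of two lists (e.g. both the semantic and BM25 rankings) outranks an item that tops only one, which is why fusion usually beats any single retriever alone.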
- FastAPI Service: `graph_service/main.py` - REST API server
- Routers: `routers/` - API endpoints for ingestion and retrieval
- DTOs: `dto/` - Data transfer objects for API contracts
- MCP Implementation: `graphiti_mcp_server.py` - Model Context Protocol server for AI assistants
- Docker Support: Containerized deployment with Neo4j

- Unit Tests: `tests/` - Comprehensive test suite using pytest
- Integration Tests: Tests marked with `_int` suffix require database connections
- Evaluation: `tests/evals/` - End-to-end evaluation scripts
- `OPENAI_API_KEY` - Required for LLM inference and embeddings
- `USE_PARALLEL_RUNTIME` - Optional boolean for Neo4j parallel runtime (enterprise only)
- Provider-specific keys: `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `VOYAGE_API_KEY`
- Neo4j: Version 5.26+ required, available via Neo4j Desktop
  - Database name defaults to `neo4j` (hardcoded in `Neo4jDriver`)
  - Override by passing `database` parameter to driver constructor
- FalkorDB: Version 1.1.2+ as alternative backend
  - Database name defaults to `default_db` (hardcoded in `FalkorDriver`)
  - Override by passing `database` parameter to driver constructor
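Overriding the default database name is a small configuration fragment; the import path and constructor arguments below follow the `driver/` layout described above but may differ between releases, and a running Neo4j instance is needed for the driver to actually connect:

```python
# Sketch only: assumes graphiti_core's driver module layout and a
# Neo4jDriver(uri, user, password, database=...) constructor; verify
# both against the installed version before relying on this.
from graphiti_core.driver.neo4j_driver import Neo4jDriver

driver = Neo4jDriver(
    uri='bolt://localhost:7687',
    user='neo4j',
    password='password',
    database='my_graph',  # overrides the hardcoded 'neo4j' default
)
```

The same pattern applies to `FalkorDriver`, whose hardcoded default is `default_db`.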
- Use Ruff for formatting and linting (configured in `pyproject.toml`)
- Line length: 100 characters
- Quote style: single quotes
- Type checking with Pyright is enforced
- Main project uses `typeCheckingMode = "basic"`; server uses `typeCheckingMode = "standard"`
- Run tests with `make test` or `pytest`
- Integration tests require database connections and are marked with `_int` suffix
- Use `pytest-xdist` for parallel test execution
- Run specific test files: `pytest tests/test_specific_file.py`
- Run specific test methods: `pytest tests/test_file.py::test_method_name`
- Run only integration tests: `pytest tests/ -k "_int"`
- Run only unit tests: `pytest tests/ -k "not _int"`
The codebase supports multiple LLM providers but works best with services supporting structured output (OpenAI, Gemini). Other providers may cause schema validation issues, especially with smaller models.
OpenAI Models:
- GPT-5 Family (Reasoning models, require temperature=0):
  - `gpt-5-mini` - Fast reasoning model
  - `gpt-5-nano` - Smallest reasoning model
- GPT-4.1 Family (Standard models):
  - `gpt-4.1` - Full capability model
  - `gpt-4.1-mini` - Efficient model for most tasks
  - `gpt-4.1-nano` - Lightweight model
- Legacy Models (Still supported):
  - `gpt-4o` - Previous generation flagship
  - `gpt-4o-mini` - Previous generation efficient model
Anthropic Models:
- Claude 4.5 Family (Latest):
  - `claude-sonnet-4-5-latest` - Flagship model, auto-updates
  - `claude-sonnet-4-5-20250929` - Pinned Sonnet version from September 2025
  - `claude-haiku-4-5-latest` - Fast model, auto-updates
- Claude 3.7 Family:
  - `claude-3-7-sonnet-latest` - Auto-updates
  - `claude-3-7-sonnet-20250219` - Pinned version from February 2025
- Claude 3.5 Family:
  - `claude-3-5-sonnet-latest` - Auto-updates
  - `claude-3-5-sonnet-20241022` - Pinned version from October 2024
  - `claude-3-5-haiku-latest` - Fast model
Google Gemini Models:
- Gemini 2.5 Family (Latest):
  - `gemini-2.5-pro` - Flagship reasoning and multimodal model
  - `gemini-2.5-flash` - Fast, efficient model
- Gemini 2.0 Family:
  - `gemini-2.0-flash` - Experimental fast model
- Gemini 1.5 Family (Stable):
  - `gemini-1.5-pro` - Production-stable flagship
  - `gemini-1.5-flash` - Production-stable efficient model
Note: Model names like `gpt-5-mini`, `gpt-4.1`, and `gpt-4.1-mini` used in this codebase are valid OpenAI model identifiers. The GPT-5 family are reasoning models that require `temperature=0` (automatically handled in the code).
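That temperature handling can be illustrated with a small helper. The prefix list and function below are hypothetical stand-ins, not the codebase's actual logic, which lives in the `llm_client/` code:

```python
# Hypothetical sketch of forcing temperature=0 for reasoning models;
# the prefix tuple and function name are illustrative assumptions.
REASONING_MODEL_PREFIXES = ('gpt-5',)


def effective_temperature(model: str, requested: float) -> float:
    """Return the temperature actually sent to the API."""
    if model.startswith(REASONING_MODEL_PREFIXES):
        return 0.0  # reasoning models reject non-zero temperatures
    return requested
```

A caller can then pass any temperature and trust that reasoning-model requests are normalized, e.g. `effective_temperature('gpt-5-mini', 0.7)` yields `0.0` while `gpt-4.1-mini` keeps `0.7`.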
When working with the MCP server, follow the patterns established in `mcp_server/cursor_rules.md`:
- Always search for existing knowledge before adding new information
- Use specific entity type filters (`Preference`, `Procedure`, `Requirement`)
- Store new information immediately using `add_memory`
- Follow discovered procedures and respect established preferences