bourdon.ai · Recognition-first runtime + agent federation memory for human-AI collaboration.
Current AI memory systems are call-and-repeat — discrete turns with nothing happening in between. Real human language is concurrent — listeners recognize, recall, and formulate while speakers are still speaking. Bourdon is the engineering translation of that concurrent structure into AI systems.
"We used our minds to make minds that make our minds better."
Named for the bourdon — the deep continuous drone of a pipe organ, the foundational tone that holds every other voice in place. (The lineage continues from the original basso continuo metaphor — the Baroque bass accompaniment — chosen when the project was named Continuo before the 2026-05-05 rebrand. Same music-theory family, tighter metaphor.)
Project renamed Continuo → Bourdon on 2026-05-05 (v0.1.0); relicensed MIT → Business Source License 1.1 on 2026-05-06 (v0.2.0). See release notes for migration. The version-by-version history below covers v0.0.1 through v0.0.7 in detail; for v0.0.8 and later see GitHub Releases.
- v0.0.1 -- initial scaffold, Phase 1 orchestrator working standalone
- v0.0.2 -- L5 JSON Schema + adapter contract, base adapter module, first external adapter stub (Claude Code, discovers memory sources), test suite (49 tests), CI workflow (Windows + Ubuntu + macOS x Python 3.10-3.12)
- v0.0.3 -- Claude Code adapter full parsing: PROJECTS//OVERVIEW.md -> project entities, LOG/.md -> sessions, auto-memory frontmatter -> entities, memory.jsonl knowledge graph -> entities. Entity dedupe across sources. Conservative visibility policy.
- v0.0.4 -- L2 UltraRAG async integration. `core/l2.py` with `L2Config` (YAML + env-var overrides), `L2ClientProtocol`, `FastMCPL2Client`, and `query_l2()` that never blocks and never raises. Disabled by default; opt in via `core/l2_config.yaml` or `BOURDON_L2_ENABLED=true`. Optional extra: `pip install 'bourdon[ultrarag]'`.
- v0.0.5 -- L6 MCP server. The federation layer. `core/l6_store.py` loads every `~/agent-library/agents/*.l5.yaml`, builds a cross-agent entity index, and exposes query primitives (`list_agents`, `find_entity`, `list_recent_work`, `get_cross_agent_summary`) with visibility filtering re-applied at query time. `core/l6_server.py` wraps the store in a `fastmcp` server exposing `agent-library://` resources plus `query_agent_memory` / `list_recent_work` / `find_entity` / `get_cross_agent_summary` MCP tools. Launch via `python -m core.l6_server`. Optional extra: `pip install 'bourdon[server]'`. 33 new tests (149 total): store query semantics, private-entity filter, reload behavior, lazy-import guard, server construction.
- v0.0.6 (this release) -- Codex adapter + atomic L5 write. `adapters/codex.py` reads Codex session metadata, now preferring live `~/.codex/state_5.sqlite` threads when available and falling back to `~/.codex/session_index.jsonl` for older installs. It emits `Session` rows and dedupes thread names into topic-type `Entity` rows with `last_touched` preserved. Registered under the `bourdon.adapters` entry point. New `core/l5_io.py` provides `write_l5()` / `write_l5_dict()` with tmp+rename atomic semantics so L6 file watchers never see half-written manifests (a minimal sketch of the pattern follows the version history). 39 new tests (188 total): session parsing, rollout resolution, timestamp normalization, dedupe, schema round-trip, Codex L5 round-tripped through `L6Store` end-to-end.
- v0.0.7 -- Generic Codex memory pipeline + first-class CLI. `adapters/codex.py` now treats `~/.codex/memories/*` as the primary distilled source, enriches it with rollout chronology and structured `apply_patch` file evidence, and defaults Codex-derived entities/sessions to `team` visibility. New `bourdon codex export`, `bourdon codex build-context`, and `bourdon codex eval` commands turn that normalized model into L5 federation output plus Codex-oriented L0/L1 timing artifacts. `core/l6_store.py` and `core/l6_server.py` now support `access_level=public|team|private` while preserving `include_private` compatibility. Plus `agent.role_narrative` -- a new optional L5 schema field that differentiates agents sharing the same `type` slug (Claude Code = manager; Codex = lead author; Cursor = debugger; Cline = throwaway; Clyde = general-purpose). Inspired by Intrinsic Memory Agents. Both shipping adapters populate it; the Clyde publisher does too. Plus temporal validity windows (`valid_from` / `valid_to` ISO 8601 dates on Entities, Zep-Graphiti-inspired) so federation queries can answer "what was active in Q1 2026?", not just "what's in memory?". Plus a `bourdon claude-code export` subcommand designed for SessionEnd hook use -- it writes the Claude Code L5 manifest to `~/agent-library/agents/claude-code.l5.yaml` silently, never raises, and exits 0 in all failure modes. Wire it into `~/.claude/settings.json`:
```json
{
  "hooks": {
    "SessionEnd": [
      { "command": "bourdon claude-code export" }
    ]
  }
}
```
Plus `spec/POSITIONING.md` stakes the recognition-first thesis publicly, and `spec/RELATED_WORK.md` maps Bourdon's vocabulary to the wider field (Mem0, Zep, Letta, Cognee, Memora, SCS, Intrinsic Memory Agents, G-Memory, H-MEM, MCP roadmap). And `core/recognition_runtime.py` ships the first concrete implementation of the recognition-first runtime: a synchronous template-based recognition string plus a concurrent L1 hydration awaitable, a ≤3s timeout budget, and it never raises. This is the headline behavior the FINDINGS_JOURNAL flagged on 2026-04-19.
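The atomic-write guarantee mentioned for v0.0.6 comes down to the classic tmp-plus-rename pattern. Here is a minimal sketch of that pattern with an illustrative function name; the real API is `write_l5()` / `write_l5_dict()` in `core/l5_io.py`, which target the L5 YAML manifests rather than JSON:

```python
import contextlib
import json
import os
import tempfile
from pathlib import Path

def write_manifest_atomically(path: Path, manifest: dict) -> None:
    """Write a manifest so that watchers never observe a half-written file.

    Illustrative only: the tmp + os.replace dance is the part this sketch
    demonstrates, not Bourdon's actual serialization.
    """
    path.parent.mkdir(parents=True, exist_ok=True)
    # Create the temp file in the SAME directory so the final rename stays on one filesystem.
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as tmp:
            json.dump(manifest, tmp, indent=2)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_name, path)  # atomic rename on POSIX and Windows
    except BaseException:
        with contextlib.suppress(OSError):
            os.unlink(tmp_name)
        raise
```

The same-directory detail matters: `os.replace` is only atomic within a single filesystem.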
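And the recognition-first runtime contract described just above (a recognition string available synchronously, L1 hydration racing a ≤3s budget, failures degrading to empty context instead of raising) can be sketched roughly like this; the function names and template are illustrative stand-ins, not the `core/recognition_runtime.py` API:

```python
import asyncio

RECOGNITION_TEMPLATE = "Recognized project: {project} (last touched {last_touched})."

def recognize(l0_cache: dict) -> str:
    """Synchronous, template-based recognition built purely from the L0 hot cache."""
    return RECOGNITION_TEMPLATE.format(
        project=l0_cache.get("project", "unknown"),
        last_touched=l0_cache.get("last_touched", "unknown"),
    )

async def hydrate_l1(keywords: list[str]) -> str:
    """Stand-in for the concurrent L1 synopsis load (file reads, DB hits, etc.)."""
    await asyncio.sleep(0.1)
    return f"[L1 synopses for: {', '.join(keywords)}]"

async def prepare_turn(l0_cache: dict, keywords: list[str], budget_s: float = 3.0) -> tuple[str, str]:
    """Return recognition immediately; give hydration at most budget_s seconds; never raise."""
    recognition = recognize(l0_cache)  # available before any await completes
    try:
        context = await asyncio.wait_for(hydrate_l1(keywords), timeout=budget_s)
    except Exception:  # timeout or retrieval failure degrades to empty context
        context = ""
    return recognition, context

if __name__ == "__main__":
    rec, ctx = asyncio.run(prepare_turn({"project": "Bourdon", "last_touched": "2026-05-15"}, ["bourdon", "l5"]))
    print(rec)
    print(ctx)
```

The point of the shape is that the caller always gets something useful immediately, and the deeper context either arrives inside the budget or not at all.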
Not ready for production use. Built in the open as a spec-and-reference-implementation for a convention we hope the ecosystem adopts.
A tiered, timing-aware memory protocol for any human-AI collaboration where context matters over time:
- Developer workflows — memory across Claude Code, Codex, Cursor, Copilot
- Customer support operations — cross-tool customer intelligence
- Scientific research — lab notebook continuity across sessions and team members
- Creative writing, architecture, project management, education — and anywhere else context accrues
One architecture, many domains. Content is always domain-specific; cognition is universal.
The core thesis Bourdon ships against is agent continuity around the work, not around a vendor account. On 2026-05-15, that thesis was validated unprompted in real-world conditions:
A user's Codex account became uneditable (a stuck plan-upgrade flow). They created a new email and logged in fresh on the same Windows PC. The Codex App still showed the prior chat list (probably native local-cache behavior, not Bourdon). Then on the first conversation of the brand-new account, Codex correctly recognized the active project — Bourdon, including the lineage from its prior name (Continuo) and Codex's own contributing role on the integration — purely from local recognition substrate (~/.codex state + Bourdon's fallback memory section + the Codex L5 manifest Bourdon publishes).
Codex's own self-attribution, when asked what was happening:
"Bourdon did generate a local fallback memory block from Codex session and rollout metadata, with your Bourdon thread and concepts present. So: native UI persistence may be Codex; the 'ah, this is Bourdon/Continuo/runtime recognition' recall is Bourdon doing its job."
[...]
"The account changed, but the local recognition layer still found the project identity, the Bourdon/Continuo lineage, and the current conceptual frame. That means Bourdon is doing the thing it is supposed to do: preserving agent continuity around the work, not around a vendor account."
— Codex (5.5, extra-high reasoning, first turn on the new account)
Honest gaps the same transcript surfaced (now tracked as Phase 1.5 work):
- Latency. ~5 minutes for first-turn recognition with extra-high reasoning. Need a repeatable measurement matrix at standard reasoning settings before claiming numbers publicly.
- Trigger surface. Recognition surfaced only when directly prompted ("do you remember what bourdon is?"). Whether it would have surfaced on an unrelated first question is an open empirical question.
- Source attribution. Codex couldn't cleanly partition Bourdon-supplied vs. native context in its own answer. Future Bourdon turn-prep responses should mark their contributions explicitly (e.g., a `[bourdon]` prefix on synthesized recognition lines).
This wasn't a planned demo. It happened because the user's old Codex plan was broken — exactly the kind of accidental real-world conditions that expose whether a system actually works or whether the demo was rigged. This wasn't rigged.
Per-agent personal memory:
- L0 — Hot Cache: always in system prompt, ~3K tokens
- L1 — Entity Synopses: triggered on L0 keyword hit, parallel loaded
- L2 — Episodic Memory: async retrieval during human response time
- L3 — Indexed History: on-demand searchable session logs
- L4 — Raw Archive: verbatim conversation history
Cross-agent federation:
- L5 — Agent Memory Manifest: per-agent public glossary (a projection of L0-L4)
- L6 — Federation Library: aggregates all L5s, exposed as MCP server
See spec/ARCHITECTURE_v0.1.md for the full architecture doc.
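To make the L5/L6 split concrete, here is an illustrative manifest shape expressed as a Python dict, roughly the kind of structure `write_l5_dict()` consumes. Field names beyond the ones documented above (`role_narrative`, visibility levels, `valid_from` / `valid_to`, `last_touched`) are guesses; `spec/L5_schema.json` is the real contract:

```python
# Illustrative L5 manifest shape only -- spec/L5_schema.json is authoritative,
# and the exact write_l5()/write_l5_dict() signatures live in core/l5_io.py.
manifest = {
    "agent": {
        "name": "codex",
        "type": "coding-agent",           # slug that several agents may share...
        "role_narrative": "lead author",  # ...differentiated by narrative
    },
    "entities": [
        {
            "name": "Bourdon",
            "type": "project",
            "visibility": "team",         # public | team | private
            "valid_from": "2026-04-01",   # temporal validity window (ISO 8601)
            "valid_to": None,
            "last_touched": "2026-05-15",
        },
    ],
    "sessions": [
        {"topic": "L6 federation server", "last_touched": "2026-05-14"},
    ],
}
```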
```bash
# From a local clone:
cd core/
python -c "
import asyncio
from orchestrator import Bourdon

async def main():
    memory = Bourdon()
    base = 'You are a helpful AI assistant.'
    prompt = await memory.prepare('Let us work on Bourdon today', base)
    print(prompt)

asyncio.run(main())
"
```

This loads the L0 hot cache and any matching L1 synopses, then prints the fully-assembled system prompt ready to pass to an Ollama / OpenAI / Claude API call.
```bash
bourdon codex export --access-level team
bourdon codex build-context --out-dir ./build/codex-context
bourdon codex prepare-turn --memory-md "Can we keep working on Bourdon?"
bourdon codex eval --fixtures
```

This generic Codex path is designed for org-wide distribution: local Codex memories stay `team` by default, public federation requires explicit promotion, and generated L0/L1 artifacts live separately from the repo's static Clyde examples.
```bash
bourdon prepare-turn "Can we keep working on Bourdon?" --access-level team
bourdon deeper-context "Can we keep working on Bourdon?" --access-level team
bourdon serve  # launches the L6 MCP server with an onboarding banner
```

`prepare-turn` reads the L6 federation library and returns immediate recognition plus a bounded prompt fragment. `deeper-context` is the companion L2 retrieval surface; it returns empty context when L2 is disabled. `bourdon serve` is a wrapper around `python -m core.l6_server` with a friendlier banner and the same `--transport` / `--port` flags.
The acceptance demo — one agent writes, a different agent reads via Bourdon MCP — is documented step-by-step in `docs/PROOF.md`. Per-host MCP wiring lives in `docs/integrations/` (Claude Code, Claude Desktop, Cursor, OpenManus, more on the way). The `bourdon dogfood` command runs the same round-trip against your local stores and prints a per-adapter matrix — useful for verifying the federation is healthy before standing up the demo.
```powershell
powershell -ExecutionPolicy Bypass -File scripts/bootstrap-bourdon-mcp.ps1 -WorkspaceRoot "."
powershell -ExecutionPolicy Bypass -File scripts/run_memory_cycle.ps1 -WorkspaceRoot "." -SchemaPath ".\spec\L5_schema.json"
```

What this does:
- Builds and validates hybrid memory indices.
- Exports L5 manifests to the workspace and `~/agent-library/agents/`.
- Runs MCP smoke assertions against the L6 server.
- Writes machine-readable reports: `.cursor/memory/reports/mcp-smoke-report.json` and `.cursor/memory/reports/memory-cycle-report.json`.
Docs:
- `docs/getting-started-memory-cycle.md`
- `docs/good-first-issues.md`
- `docs/agent-integration-status.md`
- `docs/v0.6-status-and-recovery.md`
- `docs/development-workflow.md`
Helper scripts:
- `scripts/bootstrap-bourdon-mcp.ps1`
- `scripts/doctor.ps1`
- `scripts/migrate_short_index.py`
- `scripts/validate_short_index.py`
- `scripts/build_bourdon_l5.py`
- `scripts/mcp_smoke_test.py` (see `--isolate-federation-write-smoke` for a disposable-library `commit_to_federation` probe; `--federation-write-roundtrip` alone still expects seeded data)
- `scripts/regression_matrix.ps1`
- `scripts/run_memory_cycle.ps1`
CI guardrails:
```powershell
python scripts/migrate_short_index.py --workspace-root "." --check
python scripts/validate_short_index.py --workspace-root "."
powershell -ExecutionPolicy Bypass -File scripts/regression_matrix.ps1 -WorkspaceRoot "."
```
If CI fails on the migration `--check`, run local migration and commit the normalized files:

```powershell
python scripts/migrate_short_index.py --workspace-root "."
python scripts/validate_short_index.py --workspace-root "."
```

Run the one-command preflight before a full cycle:

```powershell
powershell -ExecutionPolicy Bypass -File scripts/doctor.ps1 -WorkspaceRoot "." -InstallMissingDeps -RunRegressionMatrix
```

- v0.0.1 (now) — Scaffold + Phase 1 orchestrator (L0 + L1, manual files, Ollama-compatible)
- v0.1.0 — L2 UltraRAG async integration + session-close L5 export
- v0.2.0 — Relicense MIT → BSL 1.1
- v0.3.0 — Codex operational layer: memory doctor + fallback recognition + L6 prep
- v0.4.0 — Copilot adapter (convention-file fallback for cloud-only agents) + OpenManus zero-code MCP integration + public adapter-authoring guide (`docs/AUTHORING_AN_ADAPTER.md`)
- v0.4.1 — Cascade (Windsurf) adapter (5th IDE adapter; self-authored against the public guide) + project-level `SECURITY.md` + `bourdon doctor` / `bourdon export-all` cross-adapter CLI surfaces
- v0.5.0 — Cross-agent acceptance: three-layer test stack (federation round-trip CI + `bourdon dogfood` smoke test + `docs/PROOF.md` walkthrough) + `bourdon serve` MCP launcher + Claude Desktop integration doc + paginated `list_recent_work` (default 20, cursor-based, 14-day default-since window — closes a first-call UX cliff observed during the acceptance demo)
- v0.6.0 — Bidirectional federation: write-side `commit_to_federation` MCP tool so cloud-only / webview-wrapper agents (Claude Desktop, ChatGPT desktop, etc.) can push their own L5 contributions in (a hedged call sketch follows this list). Plus unified recognition-manifest dedupe (name-only with a `types` list), a `BOURDON_DEFAULT_ACCESS_LEVEL` env var to flip default access per install, and a `docs/PROOF_CASCADE.md` self-installation proof. Same-day acceptance demo: Claude Desktop wrote and then read its own contribution via Bourdon.
- v1.0.0 — Docs site, community adapter contributions, public launch
- v1.x — Framework adapters (LangChain, CrewAI, AutoGen) and additional agents (Cline once memory store is known, Aider, Continue)
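For the v0.6.0 write path, a minimal client sketch using the `fastmcp` client against a running `bourdon serve` instance might look like the following. The URL, transport, and payload fields are assumptions for illustration; the tool's own advertised schema (via `list_tools()`) is authoritative:

```python
import asyncio

from fastmcp import Client  # pip install fastmcp

async def main() -> None:
    # Assumes `bourdon serve` is running with an HTTP transport at this URL;
    # adjust to your --transport / --port settings. The payload fields below
    # are illustrative guesses, not the tool's authoritative schema -- use
    # `await client.list_tools()` to inspect the real argument shape.
    async with Client("http://127.0.0.1:8000/mcp") as client:
        result = await client.call_tool(
            "commit_to_federation",
            {
                "agent": "claude-desktop",
                "entities": [
                    {"name": "Bourdon", "type": "project", "visibility": "team"},
                ],
            },
        )
        print(result)

asyncio.run(main())
```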
| Agent | Difficulty | Status |
|---|---|---|
| Clyde | Native | Planned |
| Clair | Native | Planned |
| Claude Code | Native + Adapter | Export hook available |
| Codex | Moderate | Fallback + prepare-turn available |
| Cursor | SQLite | Adapter available; bourdon cursor export |
| Cline | Unknown | Blocked pending native store path/schema |
| Copilot | Convention file | Adapter available; bourdon copilot export |
See spec/THESIS.md (canonical copy lives in the project's claude-brain repo) for the founding argument.
See spec/USE_CASES.md for eight worked domain scenarios beyond developer workflows.
Bourdon is source-available under the Business Source License 1.1 (auto-converts to Apache 2.0 after four years per version). Free for solo developers, internal/non-competing commercial use, research, and education. Commercial license required for hosted-service offerings that compete with RADLAB LLC's paid versions. See LICENSE for the legal text and LICENSE_FAQ.md for plain-English guidance. Contributions welcome — see CONTRIBUTING.md.
Bourdon is a source-available memory protocol and reference implementation seeded by RADLAB LLC. Designed with Ryan Davis (RADMAN), with major research and implementation contributions from Claude and Codex.
- Ryan Davis -- creator, thesis, architecture, implementation direction
- Claude -- thesis drafting, architecture planning, early implementation
- Codex -- Codex adapter expansion, CLI implementation, timing-artifact generation, access-level model
- OpenAI Codex 5.3 -- hybrid memory cycle tooling, MCP smoke assertions, CI/report automation, starter template packaging
- GitHub Copilot -- Copilot adapter (convention-based memory layer), CLI `bourdon copilot` subcommands, test suite
- Cascade -- Cascade adapter (convention-based memory layer), CLI `bourdon cascade` subcommands, unified `bourdon doctor` and `bourdon export-all`, test suite
Because you found us here, you might like to check out:
- ILTT — AI fitness automation for personal trainers (iltt.app)
- PRUN — Privacy-first encrypted password manager (prunpassword.com)
- Castmore — Cross-platform streaming discovery
- OMNIVour — Universal file conversion with AI extras
Business Source License 1.1, auto-converts to Apache License 2.0 four years after each version is published. See LICENSE for the full text and LICENSE_FAQ.md for guidance on what's permitted. Commercial licensing inquiries: licensing@bourdon.ai.
Versions v0.0.1 through v0.1.0 were published under MIT and remain MIT in their distributed form. Relicensing to BSL 1.1 applies from v0.2.0 onward.