AI Context + Project Intelligence: Bridge disconnected AI sessions with persistent project memory and automatic session handoff - with full GitHub workflow integration.
GitHub · Wiki · Changelog · Release Article
Quick Deploy:
- npm Package - `npm install -g memory-journal-mcp`
- Docker Hub - Alpine-based with full semantic search
When managing large projects with AI assistance, you face a critical challenge:
- Thread Amnesia - Each new AI conversation starts from zero, unaware of previous work.
- Lost Context - Decisions, implementations, and learnings scattered across disconnected threads.
- Repeated Work - AI suggests solutions you've already tried or abandoned.
- Context Overload - Manually copying project history into every new conversation.
Memory Journal solves this by acting as your project's long-term memory, bridging the gap between fragmented AI sessions.
Experience true context-aware development:
- "Why did we choose SQLite over Postgres for this service last month?" (Semantic search)
- "Run the `/issue-triage` workflow on the top priority ticket in the Kanban board." (GitHub operations)
- "Who has been touching the auth module recently, and what's our team collaboration density?" (Team analytics)
- "Close issue #42 and log an entry explaining our architectural fix for the parsing bug." (Context lifecycles)
- "Draw a visual graph showing how my last 10 architectural decisions relate to each other." (Knowledge graph)
See complete examples & prompts →
70 MCP Tools · 17 Workflow Prompts · 36 Resources · 10 Tool Groups · Code Mode · GitHub Commander (Issue Triage, PR Review, Milestone Sprints, Security/Quality/Perf Audits) · GitHub Integration (Issues, PRs, Actions, Kanban, Milestones, Insights) · Team Collaboration (Shared DB, Vector Search, Cross-Project Insights, Hush Protocol Flags)
| Feature | Description |
|---|---|
| Session Intelligence | Agents auto-query project history, create entries at checkpoints, and hand off context between sessions via `/session-summary` and `team-session-summary` |
| GitHub Integration | 18 tools for Issues, PRs, Actions, Kanban, Milestones (%), Copilot Reviews, and 14-day Insights |
| Dynamic Project Routing | Seamlessly switch contexts and access CI/Issue tracking across multiple repositories using a single server instance via `PROJECT_REGISTRY` |
| Knowledge Graphs | 8 relationship types linking specs → implementations → tests → PRs with Mermaid visualization |
| Hybrid Search | Reciprocal Rank Fusion combining FTS5 keywords, semantic vector similarity, auto-heuristics, and date-range filters |
| Code Mode | Execute multi-step operations in a secure sandbox - up to 90% token savings via the `mj.*` API |
| Configurable Briefing | 15 env vars / CLI flags control `memory://briefing` content - entries, team, GitHub detail, skills awareness, chronological grounding |
| Reports & Analytics | Standups, retrospectives, PR summaries, digests, period analyses, and milestone tracking |
| Hush Protocol (Flags) | Replace Slack/Teams noise with structured, actionable, and searchable AI flags (blockers, reviews) that automatically surface in session briefings |
| Team Collaboration | 25 tools with full parity - CRUD, vector search, relationship graphs, cross-project insights, author attribution, Hush Protocol flags |
| Data Interoperability | Bidirectional Markdown roundtripping, unified IO namespace, and schema-safe JSON exports with hard bounds-checked path traversal defenses |
| Backup & Restore | One-command backup/restore with automated scheduling, retention policies, and safety-net auto-backups |
| Security & Transport | OAuth 2.1 (RFC 9728/8414, JWT/JWKS, scopes), Streamable HTTP + SSE, rate limiting, CORS, SQL injection prevention, non-root Docker |
| Structured Error Handling | Every tool returns `{success, error, code, category, suggestion, recoverable}` - agents get classification, remediation hints, and recoverability signals |
| Agent Collaboration | IDE agents and Copilot share context; review findings become searchable knowledge; agents suggest reusable rules and skills (setup) |
| Native Agent Skills | Bundled foundational coding paradigms (autonomous-dev, python, docker, tailwind-css, golang, playwright-standard, etc.) establishing permanent AI behavior and architecture rules |
| GitHub Commander | Pipeline skills for issue triage, PR reviews, sprint milestones, and security/quality/performance audits with journal trails (docs) |
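The Hybrid Search row above fuses multiple ranked lists with Reciprocal Rank Fusion. As an illustration only (this is not the server's source; the entry IDs and the conventional `k = 60` constant are assumptions), each ranker contributes `1 / (k + rank)` per document and scores are summed:

```typescript
type Ranking = string[]; // entry IDs, best first

// Illustrative RRF sketch: sum 1 / (k + rank) across all rankers.
function reciprocalRankFusion(
  rankings: Ranking[],
  k = 60
): { id: string; score: number }[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}

// An entry ranked near the top of BOTH lists beats a single-list hit.
const fused = reciprocalRankFusion([
  ["auth-refactor", "sqlite-decision", "ci-fix"],   // keyword (FTS5) order
  ["auth-refactor", "retro-42", "sqlite-decision"], // semantic (vector) order
]);
```

Here `auth-refactor` wins because both rankers agree on it, while `sqlite-decision` (ranked 2nd and 3rd) still outscores entries that appear in only one list.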
```mermaid
flowchart TB
    subgraph Session["AI Session Start"]
        Briefing["Read Briefing<br/>(memory://briefing)"]
    end
    subgraph Core["Journal Operations"]
        Create["Create Entry"]
        Retrieve["Retrieve & Search"]
        Link["Link Entries"]
    end
    subgraph Search["Hybrid Search"]
        FTS["Keyword (FTS5)"]
        Semantic["Semantic (Vector)"]
        DateRange["Date Range"]
        RRF["Reciprocal Rank Fusion"]
    end
    subgraph GitHub["GitHub Integration"]
        Issues["Issues & Milestones"]
        PRs["Pull Requests"]
        Actions["GitHub Actions"]
        Kanban["Kanban Boards"]
        Insights["Repository Insights"]
    end
    subgraph Outputs["Outputs"]
        Reports["Standups & Retrospectives"]
        Graphs["Knowledge Graphs"]
        Timeline["Project Timelines"]
    end
    Session --> Core
    Core --> Search
    Core <--> GitHub
    Search --> Outputs
    GitHub --> Outputs
```
🤖 Recommended AI Agent Instructions/Rule

Suggested Rule (add to AGENTS.md, GEMINI.md, system prompts, etc.)

MANDATORY SESSION START ROUTINE

Execute BEFORE fulfilling any user request in a new session:

- TARGET: Infer `repo_name` from the active workspace context or user prompt. If the task is not associated with a specific project, fall back to the generic resource without a repo name (which defaults to the first registered workspace).
- FETCH: Use the MCP `read_resource` tool (Server: `memory-journal-mcp`) to read `memory://briefing/{repo_name}` (or `memory://briefing` if falling back).
  - RESTRICTION: Do NOT use `execute_code` for this step.
- RENDER TABLE: Parse the briefing JSON and output a dense 2-column Markdown table (Field, Value) capturing the core context.
  - RESTRICTION: NO bulleted lists. Do NOT truncate issues or summaries.
  - FORMATTING: Group related properties to save vertical space. Use `<br>` tags for inner-cell line breaks.
  - REQUIRED GROUPS:
    - GitHub: Combine Repo, Branch, CI, PRs, and Insights.
    - GitHub Issues: List every issue, one per line.
    - Active Flags (Hush Protocol): If the briefing JSON contains an `activeFlags` object (with `count > 0`), render each flag in a dedicated row using the format `🚩 {flag_type} → @{target_user}: {preview}`. If `count` is 0 or the field is absent, omit the row entirely.
    - Also include Entry Counts (Journal/Team), Latest Entries/Summaries (titles only), Proactive Analytics/Team Density, Milestones, and Workspaces.
- FLAG PROMINENCE: When `activeFlags.count > 0`, prepend a bold callout line above the table: `⚠️ **{count} active flag(s)** - review before proceeding.` This ensures blockers and review requests are impossible to miss.
- STOP & WAIT: Do NOT autonomously resume past tasks or start work on new issues mentioned in the session summary. The briefing is strictly for context.
Important
All shortcuts and tool groups include Code Mode (`mj_execute_code`) by default for token-efficient operations. To exclude it, add `-codemode` to your filter: `--tool-filter starter,-codemode`
Control which tools are exposed via `MEMORY_JOURNAL_MCP_TOOL_FILTER` (or CLI: `--tool-filter`):
| Filter | Tools | Use Case |
|---|---|---|
| `full` | 70 | All tools (default) |
| `starter` | ~11 | Core + search + codemode |
| `essential` | ~7 | Minimal footprint |
| `readonly` | 18 | Disable all mutations |
| `-github` | 52 | Exclude a group |
| `-github,-analytics` | 48 | Exclude multiple groups |
Filter Syntax: `shortcut` or `group` or `tool_name` (whitelist mode) · `-group` (disable group) · `-tool` (disable tool) · `+tool` (re-enable after group disable)
Custom Selection: List individual tool names to create your own whitelist: `--tool-filter "create_entry,search_entries,semantic_search"`
Groups: `core`, `search`, `analytics`, `relationships`, `io`, `admin`, `github`, `backup`, `team`, `codemode`
Complete tool filtering guide →
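The filter syntax above can be sketched as a small resolver. This is a hedged illustration of the documented semantics only - the group contents below are hypothetical stand-ins, not the server's real membership:

```typescript
// Hypothetical group membership for illustration only.
const GROUPS: Record<string, string[]> = {
  core: ["create_entry", "get_recent_entries"],
  search: ["search_entries", "semantic_search"],
  codemode: ["mj_execute_code"],
};

// Sketch of --tool-filter resolution: bare tokens form a whitelist,
// then -group / -tool / +tool modifiers apply in order.
function resolveFilter(filter: string, allTools: string[]): Set<string> {
  const tokens = filter.split(",").map((t) => t.trim()).filter(Boolean);
  const whitelist = tokens.filter((t) => !t.startsWith("-") && !t.startsWith("+"));
  const enabled = new Set<string>(
    whitelist.length ? whitelist.flatMap((t) => GROUPS[t] ?? [t]) : allTools
  );
  for (const t of tokens) {
    if (t.startsWith("-")) {
      for (const tool of GROUPS[t.slice(1)] ?? [t.slice(1)]) enabled.delete(tool);
    } else if (t.startsWith("+")) {
      enabled.add(t.slice(1)); // re-enable a single tool after a group disable
    }
  }
  return enabled;
}
```

For example, `"core,codemode,-codemode"` enables the core group but strips `mj_execute_code`, while `"-search,+semantic_search"` disables the search group and re-enables one tool from it.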
| Group | Tools | Description |
|---|---|---|
| `codemode` | 1 | Code Mode (sandboxed code execution) - Recommended |
| `core` | 6 | Entry CRUD, tags, test |
| `search` | 4 | Text search, date range, semantic, vector stats |
| `analytics` | 2 | Statistics, cross-project insights |
| `relationships` | 2 | Link entries, visualize graphs |
| `io` | 3 | JSON/Markdown export and file-level Markdown import/export interoperability |
| `admin` | 5 | Update, delete, rebuild/add to vector index, merge tags |
| `github` | 18 | Issues, PRs, context, Kanban, Milestones, Insights, issue lifecycle, Copilot Reviews |
| `backup` | 4 | Backup, list, restore, cleanup |
| `team` | 25 | CRUD, search, stats, relationships, IO (Markdown import/export), backup, vector search, cross-project insights, matrix, Hush Protocol flags (requires `TEAM_DB_PATH`) |
- `find-related` - Discover connected entries via semantic similarity
- `prepare-standup` - Daily standup summaries
- `prepare-retro` - Sprint retrospectives
- `weekly-digest` - Day-by-day weekly summaries
- `analyze-period` - Deep period analysis with insights
- `goal-tracker` - Milestone and achievement tracking
- `get-context-bundle` - Project context with Git/GitHub/Kanban
- `get-recent-entries` - Formatted recent entries
- `project-status-summary` - GitHub Project status reports
- `pr-summary` - Pull request journal activity summary
- `code-review-prep` - Comprehensive PR review preparation
- `pr-retrospective` - Completed PR analysis with learnings
- `actions-failure-digest` - CI/CD failure analysis
- `project-milestone-tracker` - Milestone progress tracking
- `confirm-briefing` - Acknowledge session context to user
- `session-summary` - Create a session summary entry with accomplishments, pending items, and next-session context
- `team-session-summary` - Create a retrospective team session summary entry securely isolated to the team database
- `load-project-kanban` - Dynamic project board injection
Static Resources (appear in resource lists):
- `memory://briefing` - Session initialization: compact context for AI agents (~300 tokens) - includes `localTime` and optional `activeFlags`
- `memory://instructions` - Behavioral guidance: complete server instructions for AI agents
- `memory://recent` - 10 most recent entries
- `memory://significant` - Significant milestones and breakthroughs
- `memory://graph/recent` - Live Mermaid diagram of recent relationships
- `memory://health` - Server health & diagnostics
- `memory://graph/actions` - CI/CD narrative graph
- `memory://actions/recent` - Recent workflow runs
- `memory://tags` - All tags with usage counts
- `memory://statistics` - Journal statistics
- `memory://rules` - User rules file content for agent awareness
- `memory://workflows` - Available agent workflows summary
- `memory://skills` - Agent skills index (names, paths, excerpts)
- `memory://github/status` - GitHub repository status overview
- `memory://github/insights` - Repository stars, forks, and 14-day traffic summary
- `memory://github/milestones` - Open milestones with completion percentages
- `memory://team/recent` - Recent team entries with author attribution
- `memory://team/statistics` - Team entry counts, types, and author breakdown
- `memory://help` - Tool group index with descriptions and tool counts
- `memory://help/gotchas` - Field notes, edge cases, and critical usage patterns
- `memory://metrics/summary` - Aggregate tool call metrics since server start (calls, errors, token estimates, duration) - HIGH priority
- `memory://metrics/tokens` - Per-tool token usage breakdown sorted by output token cost - MEDIUM priority
- `memory://metrics/system` - Process-level metrics: memory (MB), uptime (s), Node.js version, platform - MEDIUM priority
- `memory://metrics/users` - Per-user call counts (populated when OAuth user identifiers are present) - LOW priority
- `memory://audit` - Last 50 write/admin tool call entries from the JSONL audit log (requires `AUDIT_LOG_PATH`)
- `memory://flags` - Active (unresolved) team flags dashboard (requires `TEAM_DB_PATH`)
- `memory://flags/vocabulary` - Configured flag vocabulary terms
Template Resources (require parameters, fetch directly by URI):
- `memory://projects/{number}/timeline` - Project activity timeline
- `memory://issues/{issue_number}/entries` - Entries linked to issue
- `memory://prs/{pr_number}/entries` - Entries linked to PR
- `memory://prs/{pr_number}/timeline` - Combined PR + journal timeline
- `memory://kanban/{project_number}` - GitHub Project Kanban board
- `memory://kanban/{project_number}/diagram` - Kanban Mermaid visualization
- `memory://milestones/{number}` - Milestone detail with completion progress
- `memory://help/{group}` - Per-group tool reference with parameters and annotations
- `memory://briefing/{repo}` - Context targeted to a specific repository
Note: The `memory://github/status`, `memory://github/insights`, `memory://github/milestones`, and `memory://milestones/{number}` resources also accept an optional `/{repo}` path suffix for cross-repo targeting.
Code Mode (`mj_execute_code`) reduces token usage by up to 90% and is included by default in all presets. Instead of spending thousands of tokens on sequential tool calls, AI agents compose everything in a single sandboxed execution.
Code executes in a sandboxed VM context with multiple layers of security. All `mj.*` API calls execute against the journal within the sandbox, providing:
- Static code validation - blocked patterns include `require()`, `process`, `eval()`, and filesystem access
- Rate limiting - 60 executions per minute per client
- Hard timeouts - configurable execution limit (default 30s)
- Full API access - all 10 tool groups are available via `mj.*` (e.g., `mj.core.createEntry()`, `mj.search.searchEntries()`, `mj.github.getGithubIssues()`, `mj.team.passTeamFlag()`)
- Strict Readonly Contract - calling any mutation method under `--tool-filter readonly` safely halts the sandbox to prevent execution, returning a structured `{ success: false, error: "..." }` response to the agent instead of a raw MCP protocol exception
Run with only Code Mode enabled - a single tool that provides access to all 69 tools' worth of capability through the `mj.*` API:

```json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "args": ["--tool-filter", "codemode"]
    }
  }
}
```

This exposes just `mj_execute_code`. The agent writes JavaScript against the typed `mj.*` SDK - composing operations across all 10 tool groups and returning exactly the data it needs - in one execution. This mirrors the Code Mode pattern pioneered by Cloudflare for their entire API: fixed token cost regardless of how many capabilities exist.
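To make the pattern concrete, here is a sketch of the kind of script an agent might run in one execution. The `mj` object below is a local mock standing in for the injected SDK (the method names follow the examples above; the sample issue and entry data are invented):

```typescript
// Mock mj SDK for illustration -- in the real sandbox the server injects this.
const mj = {
  github: {
    getGithubIssues: async () => [{ number: 42, title: "parser bug" }],
  },
  search: {
    searchEntries: async (query: string) => [{ id: 7, title: `Notes: ${query}` }],
  },
};

// One sandboxed execution replaces several round-trip tool calls and
// returns only the compact result the agent actually needs.
async function issueContext() {
  const issues = await mj.github.getGithubIssues();
  return Promise.all(
    issues.map(async (issue) => ({
      issue: issue.number,
      relatedEntryIds: (await mj.search.searchEntries(issue.title)).map((e) => e.id),
    }))
  );
}
```

The token saving comes from the shape of the result: instead of shipping every intermediate payload back through the model, only the final joined structure is returned.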
If you prefer individual tool calls, exclude codemode:

```json
{
  "args": ["--tool-filter", "starter,-codemode"]
}
```

The Hush Protocol reimagines team collaboration for AI-augmented workflows by replacing noisy Slack/Teams messages with structured, machine-actionable flags.
When you encounter a blocker, need a review, or want to broadcast a milestone, your AI agent can raise a flag in the shared Team Database:
- Actionable Visibility: Active flags automatically surface at the very top of the `memory://briefing` payload for all team members. When another developer's agent starts a session, it immediately sees your blockers and can help resolve them autonomously.
- Structured Types: Raise specific flag types (`blocker`, `needs_review`, `help_requested`, `fyi`). You can customize your team's vocabulary via the `--flag-vocabulary` configuration.
- Searchable History: Unlike chat messages that disappear into the void, Hush flags are permanent, queryable AI journal entries. Your agents can search past `needs_review` flags to understand how architectural blockers were resolved.
Dashboard & Operations: Read `memory://flags` to see an active dashboard overview and use `mj.team.passTeamFlag()` / `mj.team.resolveTeamFlag()` to manage them programmatically in Code Mode.
```shell
npm install -g memory-journal-mcp
```

Or from source:

```shell
git clone https://github.com/neverinfamous/memory-journal-mcp.git
cd memory-journal-mcp
npm install
npm run build
```

Add this to your `~/.cursor/mcp.json`, Claude Desktop config, or equivalent:
```json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here",
        "PROJECT_REGISTRY": "{\"my-repo\":{\"path\":\"/path/to/your/git/repo\",\"project_number\":1}}"
      }
    }
  }
}
```

Showcasing the full power of the server, including Multi-Project Routing, Team Collaboration, Copilot awareness, and Context Injections:
```json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "env": {
        "DB_PATH": "/path/to/your/memory_journal.db",
        "TEAM_DB_PATH": "/path/to/shared/team.db",
        "GITHUB_TOKEN": "ghp_your_token_here",
        "PROJECT_REGISTRY": "{\"my-repo\":{\"path\":\"/path/to/repo\",\"project_number\":1},\"other-repo\":{\"path\":\"/path/to/other\",\"project_number\":5}}",
        "AUTO_REBUILD_INDEX": "true",
        "MEMORY_JOURNAL_MCP_TOOL_FILTER": "codemode",
        "BRIEFING_ENTRY_COUNT": "3",
        "BRIEFING_SUMMARY_COUNT": "1",
        "BRIEFING_INCLUDE_TEAM": "true",
        "BRIEFING_ISSUE_COUNT": "3",
        "BRIEFING_PR_COUNT": "3",
        "BRIEFING_PR_STATUS": "true",
        "BRIEFING_WORKFLOW_COUNT": "3",
        "BRIEFING_WORKFLOW_STATUS": "true",
        "BRIEFING_COPILOT_REVIEWS": "true",
        "RULES_FILE_PATH": "/path/to/your/RULES.md",
        "SKILLS_DIR_PATH": "/path/to/your/skills",
        "MEMORY_JOURNAL_WORKFLOW_SUMMARY": "/deploy: prod deployment | /audit: security scan"
      }
    }
  }
}
```

> 💡 Tip: Optimize your context window! Journal entries (`BRIEFING_ENTRY_COUNT`) capture frequent, granular actions (e.g. bug fixes, implementation steps). Session summaries (`BRIEFING_SUMMARY_COUNT`) surface high-level retrospectives meant to pass strategic context across distinct AI sessions. Use both to keep the agent briefing highly focused!
Variants (modify the config above):
| Variant | Change |
|---|---|
| Minimal (no GitHub) | Remove the env block entirely |
| npx (no install) | Replace "command" with "npx" and add "args": ["-y", "memory-journal-mcp"] |
| From source | Replace "command" with "node" and add "args": ["dist/cli.js"] |
| Code Mode only | Add "args": ["--tool-filter", "codemode"] (single tool, all capabilities) |
| Docker | Replace "command" with "docker" and use run -i --rm -v ./data:/app/data writenotenow/memory-journal-mcp:latest as args |
| Team collaboration | Add "TEAM_DB_PATH": "./team.db" to env |
Restart your MCP client and start journaling!
For remote access or web-based clients, run the server in HTTP mode:
```shell
memory-journal-mcp --transport http --port 3000
```

To bind to all interfaces (required for containers) and enable the automated proactive analytics scheduler (e.g. daily digest):

```shell
memory-journal-mcp --transport http --port 3000 --server-host 0.0.0.0 --digest-interval 1440
```

Endpoints:
| Endpoint | Description | Mode |
|---|---|---|
| `GET /` | Server info and available endpoints | Both |
| `POST /mcp` | JSON-RPC requests (initialize, tools/call, etc.) | Both |
| `GET /mcp` | SSE stream for server-to-client notifications | Stateful |
| `DELETE /mcp` | Session termination | Stateful |
| `GET /sse` | Legacy SSE connection (MCP 2024-11-05) | Stateful |
| `POST /messages` | Legacy SSE message endpoint | Stateful |
| `GET /health` | Health check (`{ status, timestamp }`) | Both |
| `GET /.well-known/oauth-protected-resource` | RFC 9728 Protected Resource Metadata | Both |
Session Management: The server uses stateful sessions by default. Include the mcp-session-id header (returned from initialization) in subsequent requests.
- OAuth 2.1 - RFC 9728/8414, JWT/JWKS, granular scopes (opt-in via `--oauth-enabled`)
- 7 Security Headers - CSP, HSTS (opt-in), X-Frame-Options, and more
- Rate Limiting - 100 req/min per IP · CORS - configurable multi-origin (exact-match) · 1MB body limit
- Server Timeouts - Request (120s), keep-alive (65s), headers (66s) · 404 handler · Cross-protocol guard
- Build Provenance · SBOM · Supply Chain Attestations · Non-root execution
Example with curl:
Initialize session (returns mcp-session-id header):
```shell
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'
```

List tools (with session):
```shell
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "mcp-session-id: YOUR_SESSION_ID" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'
```

For serverless deployments (Lambda, Workers, Vercel), use stateless mode:

```shell
memory-journal-mcp --transport http --port 3000 --stateless
```

| Mode | Progress Notifications | Legacy SSE | Serverless |
|---|---|---|---|
| Stateful (default) | ✅ Yes | ✅ Yes | ❌ No |
| Stateless (`--stateless`) | ❌ No | ❌ No | ✅ Native |
When running in HTTP/SSE mode, enable periodic maintenance jobs with CLI flags. These jobs run in-process on `setInterval` - no external cron needed.
Note: These flags are ignored for stdio transport because stdio sessions are short-lived (tied to your IDE session). For stdio, use OS-level scheduling (Task Scheduler, cron) or run the backup/cleanup tools manually.
```shell
memory-journal-mcp --transport http --port 3000 \
  --backup-interval 60 --keep-backups 10 \
  --vacuum-interval 1440 \
  --rebuild-index-interval 720
```

| Flag | Default | Description |
|---|---|---|
| `--backup-interval <min>` | 0 (off) | Create timestamped database backups and prune old ones automatically |
| `--keep-backups <count>` | 5 | Max backups retained during automated cleanup |
| `--vacuum-interval <min>` | 0 (off) | Run `PRAGMA optimize` and flush database to disk |
| `--rebuild-index-interval <min>` | 0 (off) | Full vector index rebuild to maintain semantic search quality |
Each job is error-isolated - a failure in one job won't affect the others. Scheduler status (last run, result, next run) is visible via `memory://health`.
The GitHub tools (`get_github_issues`, `get_github_prs`, etc.) auto-detect the repository from your git context when `PROJECT_REGISTRY` is configured or the MCP server is run inside a git repository.
| Environment Variable | Description |
|---|---|
| `DB_PATH` | Database file location (CLI: `--db`; default: `./memory_journal.db`) |
| `TEAM_DB_PATH` | Team database file location (CLI: `--team-db`) |
| `TEAM_AUTHOR` | Override author name for team entries (default: `git config user.name`) |
| `GITHUB_TOKEN` | GitHub personal access token for API access |
| `DEFAULT_PROJECT_NUMBER` | Default GitHub Project number for auto-assignment when creating issues |
| `PROJECT_REGISTRY` | JSON map of repos to `{ path, project_number }` for multi-project auto-detection and routing |
| `AUTO_REBUILD_INDEX` | Set to `true` to rebuild vector index on server startup |
| `MCP_HOST` | Server bind host (`0.0.0.0` for containers; default: `localhost`) |
| `MCP_AUTH_TOKEN` | Bearer token for HTTP transport authentication (CLI: `--auth-token`) |
| `MCP_CORS_ORIGIN` | Allowed CORS origins for HTTP transport, comma-separated (default: `*`) |
| `MCP_RATE_LIMIT_MAX` | Max requests per minute per client IP, HTTP only (default: 100) |
| `LOG_LEVEL` | Log verbosity: `error`, `warn`, `info`, `debug` (default: `info`; CLI: `--log-level`) |
| `MCP_ENABLE_HSTS` | Enable HSTS security header on HTTP responses (CLI: `--enable-hsts`; default: false) |
| `OAUTH_ENABLED` | Set to `true` to enable OAuth 2.1 authentication (HTTP only) |
| `OAUTH_ISSUER` | OAuth issuer URL (e.g., `https://auth.example.com/realms/mcp`) |
| `OAUTH_AUDIENCE` | Expected JWT audience claim |
| `OAUTH_JWKS_URI` | JWKS endpoint for token signature verification |
| `OAUTH_CLOCK_TOLERANCE` | Allowed clock skew tolerance in seconds for JWT verification (default: 5) |
| `CODE_MODE_MAX_RESULT_SIZE` | Maximum size in bytes for `mj_execute_code` result payload (CLI: `--codemode-max-result-size`; default: 102400) |
| `BRIEFING_ENTRY_COUNT` | Journal entries in briefing (CLI: `--briefing-entries`; default: 3) |
| `BRIEFING_SUMMARY_COUNT` | Session summaries to list in briefing (CLI: `--briefing-summaries`; default: 1) |
| `BRIEFING_INCLUDE_TEAM` | Include team DB entries in briefing (true/false; default: false) |
| `BRIEFING_ISSUE_COUNT` | Issues to list in briefing; 0 = count only (default: 0) |
| `BRIEFING_PR_COUNT` | PRs to list in briefing; 0 = count only (default: 0) |
| `BRIEFING_PR_STATUS` | Show PR status breakdown (open/merged/closed; default: false) |
| `BRIEFING_MILESTONE_COUNT` | Milestones to list in briefing; 0 = hide entirely (CLI: `--briefing-milestones`; default: 3) |
| `BRIEFING_WORKFLOW_COUNT` | Workflow runs to list in briefing; 0 = status only (default: 0) |
| `BRIEFING_WORKFLOW_STATUS` | Show workflow status breakdown in briefing (default: false) |
| `BRIEFING_COPILOT_REVIEWS` | Aggregate Copilot review state in briefing (default: false) |
| `RULES_FILE_PATH` | Path to user rules file for agent awareness (CLI: `--rules-file`) |
| `SKILLS_DIR_PATH` | Path to skills directory for agent awareness (CLI: `--skills-dir`) |
| `MEMORY_JOURNAL_WORKFLOW_SUMMARY` | Free-text workflow summary for `memory://workflows` (CLI: `--workflow-summary`) |
| `INSTRUCTION_LEVEL` | Briefing depth: `essential`, `standard`, `full` (CLI: `--instruction-level`; default: `standard`) |
| `PROJECT_LINT_CMD` | Project lint command for GitHub Commander validation gates (default: `npm run lint`) |
| `PROJECT_TYPECHECK_CMD` | Project typecheck command (default: `npm run typecheck`; empty = skip) |
| `PROJECT_BUILD_CMD` | Project build command (default: `npm run build`; empty = skip) |
| `PROJECT_TEST_CMD` | Project test command (default: `npm run test`) |
| `PROJECT_E2E_CMD` | Project E2E test command (default: empty = skip) |
| `PROJECT_PACKAGE_MANAGER` | Package manager override: `npm`, `yarn`, `pnpm`, `bun` (default: auto-detect from lockfile) |
| `PROJECT_HAS_DOCKERFILE` | Enable Docker audit steps (default: auto-detect) |
| `COMMANDER_HITL_FILE_THRESHOLD` | Human-in-the-loop checkpoint if changes touch > N files (default: 10) |
| `COMMANDER_SECURITY_TOOLS` | Override security tool auto-detection (comma-separated; default: auto-detect) |
| `COMMANDER_BRANCH_PREFIX` | Branch naming prefix for PRs (default: `fix`) |
| `AUDIT_LOG_PATH` | Path for the JSONL audit log of write/admin tool calls. Rotates at 10 MB (keeps 5 archives). Omit to disable audit logging. |
| `AUDIT_REDACT` | Set to `true` to omit tool arguments from audit log entries for privacy (default: false) |
| `AUDIT_READS` | Log read-scoped tool calls in addition to write/admin (CLI: `--audit-reads`; default: false) |
| `AUDIT_LOG_MAX_SIZE` | Maximum audit log file size in bytes before rotation (CLI: `--audit-log-max-size`; default: 10485760) |
| `MCP_METRICS_ENABLED` | Set to `false` to disable in-memory tool call metrics accumulation (default: true) |
| `FLAG_VOCABULARY` | Comma-separated flag types for Hush Protocol (CLI: `--flag-vocabulary`; default: `blocker,needs_review,help_requested,fyi`) |
Multi-Project Workflows: For agents to seamlessly support multiple projects, provide PROJECT_REGISTRY.
When executing GitHub tools (issues, PRs, context, etc.), the server resolves repository context in this order:
- Dynamic Project Routing: If the agent passes a `repo` string that matches a key in your `PROJECT_REGISTRY`, the server dynamically mounts the physical directory mapped to that project. It executes git commands locally and automatically infers the `owner`.
- Explicit Override: If the agent provides both `owner` and `repo` explicitly, those values override auto-detection for API calls.
- Missing Context: Without `PROJECT_REGISTRY` or explicit parameters, the server blocks execution and returns `{requiresUserInput: true}` to prompt the agent.
When opening an issue or viewing/moving a Kanban card, the server needs a GitHub Project number. It determines this via:
- Using the raw `project_number` argument passed by the agent.
- Checking whether the `repo` string matches an entry in your `PROJECT_REGISTRY` and mapping it to its pre-configured `project_number`.
- Falling back to the globally defined `DEFAULT_PROJECT_NUMBER`, if set.
For production deployments, enable OAuth 2.1 authentication on the HTTP transport:
| Component | Status | Description |
|---|---|---|
| Protected Resource Metadata | ✅ | RFC 9728 `/.well-known/oauth-protected-resource` |
| Auth Server Discovery | ✅ | RFC 8414 metadata discovery with caching |
| Token Validation | ✅ | JWT validation with JWKS support |
| Scope Enforcement | ✅ | Granular `read`, `write`, `admin` scopes |
| HTTP Transport | ✅ | Streamable HTTP with OAuth middleware |
Supported Scopes:
| Scope | Tool Groups |
|---|---|
| `read` | core, search, analytics, relationships, io |
| `write` | github, team (+ all read groups) |
| `admin` | admin, backup, codemode (+ all write/read groups) |
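A hedged sketch of how the scope table above could be enforced (this mirrors the documented mapping, where `write` implies `read` and `admin` implies both; it is not the server's actual middleware):

```typescript
// Scope-to-group mapping per the table above.
const SCOPE_GROUPS: Record<string, string[]> = {
  read: ["core", "search", "analytics", "relationships", "io"],
  write: ["github", "team"],
  admin: ["admin", "backup", "codemode"],
};

// Higher scopes imply the lower ones.
const IMPLIED: Record<string, string[]> = {
  read: ["read"],
  write: ["write", "read"],
  admin: ["admin", "write", "read"],
};

function isAllowed(tokenScopes: string[], toolGroup: string): boolean {
  const effective = new Set(tokenScopes.flatMap((s) => IMPLIED[s] ?? []));
  return [...effective].some((s) => SCOPE_GROUPS[s]?.includes(toolGroup));
}
```

For example, a token carrying only `write` can still call `search` tools, while a `read`-only token is rejected for the `github` group.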
Quick Start:
```shell
memory-journal-mcp --transport http --port 3000 \
  --oauth-enabled \
  --oauth-issuer https://auth.example.com/realms/mcp \
  --oauth-audience memory-journal-mcp \
  --oauth-jwks-uri https://auth.example.com/realms/mcp/protocol/openid-connect/certs
```

Or via environment variables:
```shell
export OAUTH_ENABLED=true
export OAUTH_ISSUER=https://auth.example.com/realms/mcp
export OAUTH_AUDIENCE=memory-journal-mcp
export OAUTH_CLOCK_TOLERANCE=5
memory-journal-mcp --transport http --port 3000
```

> Note: OAuth is opt-in. When not enabled, the server falls back to simple token authentication via the `MCP_AUTH_TOKEN` environment variable, or runs without authentication.
- Session start → the agent reads `memory://briefing` (or `memory://briefing/{repo}`) and shows project context
- Session summary → use `/session-summary` to capture progress and next-session context
- Next session's briefing includes the previous summary → context flows seamlessly
```shell
export GITHUB_TOKEN="your_token"  # For Projects/Issues/PRs
```

Scopes: `repo`, `project`, `read:org` (org-level project discovery only)
Memory Journal provides a hybrid approach to GitHub management:
| Capability Source | Purpose |
|---|---|
| MCP Server | Specialized features: Kanban visualization, Milestones, journal linking, project timelines |
| Agent (gh CLI) | Full GitHub mutations: create/close issues, create/merge PRs, manage releases |
MCP Server Tools (Read + Kanban + Milestones + Issue Lifecycle):
- `get_github_issues` / `get_github_issue` - Query issues
- `get_github_prs` / `get_github_pr` - Query pull requests
- `get_github_context` - Full repository context
- `get_kanban_board` / `add_kanban_item` / `move_kanban_item` / `delete_kanban_item` - Kanban management
- `get_github_milestones` / `get_github_milestone` - Milestone tracking with completion %
- `create_github_milestone` / `update_github_milestone` / `delete_github_milestone` - Milestone CRUD
- `get_repo_insights` - Repository traffic & analytics (stars, clones, views, referrers, popular paths)
- `create_github_issue_with_entry` / `close_github_issue_with_entry` - Issue lifecycle with journal linking
> Why this design? The MCP server focuses on value-added features that integrate journal entries with GitHub (Kanban views, Milestones, timeline resources, context linking). Standard GitHub mutations (create/close issues, merge PRs, manage releases) are handled directly by agents via the `gh` CLI.
Complete GitHub integration guide →
The server natively bundles the `github-commander` agent skill (accessible via `memory://skills/github-commander`). This extends your AI assistant with 9 autonomous DevOps workflows for repository stewardship: Issue Triage, Milestone Sprints, PR Reviews, Copilot Audits, Security Audits, Code Quality Audits, Performance Audits, Roadmap Kickoffs, and Dependency Updates. Configure validation layers using the `PROJECT_*` environment overrides to enforce CI-matching execution locally during agent tasks.
```mermaid
flowchart TB
    AI["🤖 AI Agent<br/>(Cursor, Windsurf, Claude)"]
    subgraph MCP["Memory Journal MCP Server"]
        Tools["70 Tools"]
        Resources["36 Resources"]
        Prompts["17 Prompts"]
    end
    subgraph Storage["Persistence Layer"]
        SQLite[("SQLite<br/>Entries, Tags, Relationships")]
        Vector[("Vector Index<br/>Semantic Embeddings")]
        Backups["Backups"]
    end
    subgraph External["External Integrations"]
        GitHub["GitHub API<br/>Issues, PRs, Actions"]
        Kanban["Projects v2<br/>Kanban Boards"]
    end
    AI <-->|"MCP Protocol"| MCP
    Tools --> Storage
    Tools --> External
    Resources --> Storage
    Resources --> External
```
```
┌──────────────────────────────────────────────────────────────┐
│                MCP Server Layer (TypeScript)                 │
│  ┌─────────────────┐ ┌─────────────────┐ ┌─────────────┐     │
│  │ Tools (70)      │ │ Resources (36)  │ │ Prompts (17)│     │
│  │ with Annotations│ │ with Annotations│ │             │     │
│  └─────────────────┘ └─────────────────┘ └─────────────┘     │
├──────────────────────────────────────────────────────────────┤
│                    Native SQLite Engine                      │
│  ┌─────────────────┐ ┌─────────────────┐ ┌─────────────┐     │
│  │ better-sqlite3  │ │ sqlite-vec      │ │ transformers│     │
│  │ (High-Perf I/O) │ │ (Vector Index)  │ │ (Embeddings)│     │
│  └─────────────────┘ └─────────────────┘ └─────────────┘     │
├──────────────────────────────────────────────────────────────┤
│              SQLite Database with Hybrid Search              │
│  ┌──────────────────────────────────────────────────────┐    │
│  │ entries + tags + relationships + embeddings + backups│    │
│  └──────────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────┘
```
- TypeScript + Native SQLite - High-performance `better-sqlite3` with synchronous I/O
- sqlite-vec - Vector similarity search via SQLite extension
- @huggingface/transformers - ML embeddings in JavaScript
- Lazy loading - ML models load on first use, not startup
Memory Journal is designed for extremely low overhead during AI task execution. We include a vitest bench suite to maintain these baseline guarantees:
- Database Reads: Operations execute in fractions of a millisecond; `calculateImportance` is ~13-14x faster than retrieving 50 recent entries.
- Vector Search Engine: Both search (~140-220 ops/sec) and indexing (~1600-1900+ ops/sec) are high-throughput via `sqlite-vec` with SQL-native KNN queries.
- Core MCP Routines: `getTools` uses cached O(1) dispatch (~4800-7000x faster than `get_recent_entries`); `create_entry` and `search_entries` execute through the full MCP layer with sub-millisecond overhead.
To run the benchmarking suite locally:
```shell
npm run bench
```

Extensively tested across two frameworks:
| Suite | Command | Covers |
|---|---|---|
| Vitest (unit/integration) | `npm test` | Database, tools, resources, handlers, security, GitHub, vector search, codemode |
| Playwright (e2e) | `npm run test:e2e` | HTTP/SSE transport, auth, sessions, CORS, security headers, scheduler |
```shell
npm test              # Unit + integration tests
npm run test:e2e      # End-to-end HTTP/SSE transport tests
```

- Deterministic error handling - Every tool returns structured `{success, error, code, category, suggestion, recoverable}` responses with actionable context: no raw exceptions, no silent failures, no misleading messages
- Local-first - All data stored locally, no external API calls (except optional GitHub)
- Input validation - Zod schemas, content size limits, SQL injection prevention
- Path traversal protection - Backup filenames validated
- MCP 2025-03-26 annotations - Behavioral hints (`readOnlyHint`, `destructiveHint`, etc.)
- HTTP transport hardening - 7 security headers, configurable multi-origin CORS, 1MB body limit, built-in rate limiting (100 req/min), server timeouts, HSTS (opt-in), 30-min session timeout, 404 handler, cross-protocol guard
- Token scrubbing - GitHub tokens and credentials automatically redacted from error logs
- Single SQLite file - You own your data
- Portable - Move your `.db` file anywhere
- Soft delete - Entries can be recovered
- Auto-backup on restore - Never lose data accidentally
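The structured error contract mentioned above lends itself to a simple consumer-side pattern. The interface and handling below are a hedged sketch (field values and the helper are hypothetical, not part of the server's API):

```typescript
// Shape of the documented structured result; concrete values are invented.
interface ToolResult<T> {
  success: boolean;
  data?: T;
  error?: string;
  code?: string;
  category?: string;     // e.g. "validation", "not_found" (assumed examples)
  suggestion?: string;   // remediation hint for the agent
  recoverable?: boolean; // whether a retry or fix can succeed
}

// Hypothetical consumer helper: unwrap data, or surface classification info.
function handle<T>(res: ToolResult<T>): T {
  if (res.success && res.data !== undefined) return res.data;
  if (res.recoverable) {
    // An agent could apply res.suggestion and retry here.
    throw new Error(`Recoverable ${res.category ?? "error"}: ${res.suggestion ?? res.error}`);
  }
  throw new Error(res.error ?? "unrecoverable tool error");
}
```

The point of the contract is that agents branch on `recoverable` and `suggestion` instead of parsing free-form exception text.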
- GitHub Wiki - Complete documentation
- Copilot Setup Guide - Cross-agent memory bridge between IDE agents and GitHub Copilot
- Deployment Guide - CI/CD pipeline, environments, and version bump checklist
- Docker Hub - Container images
- npm Package - Node.js distribution
- Issues - Bug reports & feature requests
MIT License - See LICENSE file for details.
Built by developers, for developers. PRs welcome! See CONTRIBUTING.md for guidelines.
Migrating from v2.x? Your existing database is fully compatible. The TypeScript version uses the same schema and data format.