Note: Currently supports macOS (ARM64 and Intel) only.
Intelligent command validation and context persistence for Claude Code.
- **Command Validation** - Two-layer validation system:
  - Layer 0: Fast deterministic whitelist/blacklist rules (~1ms)
  - Layer 1: LLM-based risk assessment via Ollama (~100-300ms)
- **Context Persistence** - SQLite-based storage with semantic search:
  - Automatic snapshots before context compression
  - Vector embeddings for semantic recall
  - Session history and analytics
- **Auto-Recall** - Automatic context injection on every prompt:
  - Hybrid search: semantic (sqlite-vec) + FTS5 full-text
  - Relevant past sessions injected as `additionalContext`
  - Configurable thresholds, timeouts, and result limits
  - Graceful degradation: Ollama down → FTS5 fallback → orchestrator-only
- **User Learning** - Adapts to your workflow:
  - Tracks approved and denied commands
  - Auto-generates rules based on usage patterns
- **MCP Tools** - Claude can access:
  - `clx_recall` - Search historical context
  - `clx_remember` - Explicitly save information
  - `clx_checkpoint` - Create manual snapshots
  - `clx_rules` - Manage validation rules
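The graceful-degradation chain above can be sketched in Rust. This is an illustrative model only, assuming hypothetical names (`SearchMode`, `pick_mode`); it is not CLX's actual API.

```rust
// Hypothetical sketch of Auto-Recall's degradation chain; names are
// illustrative, not CLX internals.
#[derive(Debug, PartialEq)]
enum SearchMode {
    Semantic, // embeddings via Ollama + sqlite-vec
    Fts,      // FTS5 keyword fallback
    None,     // orchestrator-only, no recall context injected
}

/// Pick the best available search mode given backend health.
fn pick_mode(ollama_up: bool, fts_available: bool) -> SearchMode {
    if ollama_up {
        SearchMode::Semantic
    } else if fts_available {
        SearchMode::Fts
    } else {
        SearchMode::None
    }
}
```

The point of the ordering is that every downgrade still leaves the prompt usable: recall quality degrades, but prompts never fail outright.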
```sh
brew tap blackaxgit/clx
brew install clx
```

This installs `clx`, `clx-hook`, and `clx-mcp`. To update:

```sh
brew update && brew upgrade clx
```

Let Claude handle the entire setup. You just need macOS, Ollama, and Rust installed.
1. Make sure Ollama is running:

   ```sh
   ollama serve
   ```

2. Paste this into Claude Code:

   ```
   Install CLX from https://github.com/blackaxgit/clx:
   1. Clone the repo and build: git clone https://github.com/blackaxgit/clx.git /tmp/clx && cd /tmp/clx && cargo build --release
   2. Run the installer: ./target/release/clx install
   3. Pull Ollama models: ollama pull qwen3:1.7b && ollama pull qwen3-embedding:0.6b
   4. Add to PATH: echo 'export PATH="$HOME/.clx/bin:$PATH"' >> ~/.zshrc
   5. Tell me to restart Claude Code when done
   ```

3. Restart Claude Code.
Done. Hooks are validating commands, context is being persisted, and MCP tools are available.
```sh
curl -fsSL https://raw.githubusercontent.com/blackaxgit/clx/main/install.sh | bash
```

Full control over every step. Requires macOS (ARM64), Rust 1.85+, and Ollama.
1. Install prerequisites:

   ```sh
   # Rust (if not installed)
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

   # Ollama (if not installed), or download from https://ollama.com
   brew install ollama
   ollama serve  # start the server
   ```

2. Build and install CLX:

   ```sh
   git clone https://github.com/blackaxgit/clx.git
   cd clx
   cargo build --release
   ./target/release/clx install
   ```

3. Pull the required Ollama models:

   ```sh
   ollama pull qwen3:1.7b
   ollama pull qwen3-embedding:0.6b
   ```

4. Add CLX to your PATH:

   ```sh
   echo 'export PATH="$HOME/.clx/bin:$PATH"' >> ~/.zshrc
   source ~/.zshrc
   ```

5. Restart Claude Code, then verify:

   ```sh
   clx dashboard
   ```

You should see the interactive dashboard with session history and system status.
See INSTALL.md for troubleshooting.
```sh
# Check status
clx dashboard

# Search context
clx recall "authentication bug"

# View/edit configuration
clx config
clx config edit

# Manage rules
clx rules list
clx rules allow "npm install *"
clx rules deny "rm -rf /"

# Generate shell completions (v0.2+)
clx completions bash > ~/.clx-completion.bash
clx completions zsh > ~/.clx-completion.zsh

# Manage embeddings (v0.2+)
clx embeddings status   # Check model and dimensions
clx embeddings rebuild  # Rebuild for model migration

# Uninstall
clx uninstall
clx uninstall --purge   # Also removes ~/.clx
```

Edit `~/.clx/config.yaml`:
```yaml
validator:
  enabled: true
  layer1_enabled: true        # LLM validation
  layer1_timeout_ms: 30000
  default_decision: "ask"     # allow, deny, ask

context:
  enabled: true
  auto_snapshot: true

ollama:
  host: "http://127.0.0.1:11434"
  model: "qwen3:1.7b"
  embedding_model: "qwen3-embedding:0.6b"
  timeout_ms: 60000

user_learning:
  enabled: true
  auto_whitelist_threshold: 3  # Auto-add after N allows
  auto_blacklist_threshold: 2  # Auto-block after N denies

logging:
  level: "info"
  file: "~/.clx/logs/clx.log"

auto_recall:
  enabled: true
  max_results: 3               # Top-K results to inject
  similarity_threshold: 0.35   # Min relevance score (0.0-1.0)
  max_context_chars: 1000      # Max chars for recall context
  timeout_ms: 500              # Recall timeout per prompt
  fallback_to_fts: true        # Use FTS5 if semantic fails
  include_key_facts: true      # Include key facts in context
  min_prompt_len: 10           # Skip recall for short prompts
```

Edit `~/.clx/rules/default.yaml`:
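To see how the `auto_recall` knobs interact, here is a minimal sketch in Rust of how a recall pipeline could apply them. The struct, function, and field names are assumptions for illustration, not CLX's internal types.

```rust
// Illustrative sketch of how auto_recall settings could gate context
// injection; names are assumptions, not CLX internals.
struct AutoRecallConfig {
    max_results: usize,        // Top-K results to inject
    similarity_threshold: f32, // Min relevance score
    max_context_chars: usize,  // Character budget for injected context
    min_prompt_len: usize,     // Skip recall for short prompts
}

fn build_recall_context(
    cfg: &AutoRecallConfig,
    prompt: &str,
    // (similarity score, snippet) pairs, highest score first
    hits: &[(f32, &str)],
) -> Option<String> {
    if prompt.chars().count() < cfg.min_prompt_len {
        return None; // too short to be worth a recall round-trip
    }
    let mut ctx = String::new();
    for (score, text) in hits.iter().take(cfg.max_results) {
        if *score < cfg.similarity_threshold {
            continue; // below minimum relevance
        }
        if ctx.len() + text.len() > cfg.max_context_chars {
            break; // respect the character budget
        }
        ctx.push_str(text);
        ctx.push('\n');
    }
    if ctx.is_empty() { None } else { Some(ctx) }
}
```

Returning `None` rather than an empty string mirrors the config's intent: when nothing clears the threshold, no `additionalContext` is injected at all.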
```yaml
whitelist:
  - pattern: "Bash(npm:test*)"
    description: "Allow npm test commands"
  - pattern: "Bash(cargo:build*)"
    description: "Allow cargo build"

blacklist:
  - pattern: "Bash(rm:-rf /*)"
    description: "Block recursive delete from root"
  - pattern: "Bash(curl:*|bash)"
    description: "Block pipe to shell"
```

Edit `~/.clx/prompts/validator.txt` to customize risk assessment.
```
Claude requests command
        ↓
PreToolUse hook fires
        ↓
Layer 0: Check whitelist/blacklist
  ├─ Match whitelist → Allow
  ├─ Match blacklist → Deny
  └─ Unknown → Continue
        ↓
Layer 1: Ollama risk assessment
  ├─ Score 1-3  → Allow
  ├─ Score 4-7  → Ask user
  └─ Score 8-10 → Deny
        ↓
User confirms (if Ask)
        ↓
Command executes
        ↓
PostToolUse logs result
```
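The Layer 1 score bands in the diagram map directly onto a three-way decision. The enum and function below are a sketch of that mapping, not CLX's actual types:

```rust
// Sketch of the Layer 1 score-to-decision mapping from the diagram;
// the types are illustrative.
#[derive(Debug, PartialEq)]
enum Decision { Allow, Ask, Deny }

fn decide(risk_score: u8) -> Decision {
    match risk_score {
        1..=3 => Decision::Allow,
        4..=7 => Decision::Ask,
        _ => Decision::Deny, // 8-10, and anything out of range, is denied
    }
}
```

Denying out-of-range scores (including 0) is the conservative default when an LLM returns a malformed score.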
```
PreCompact hook fires (before compression)
        ↓
Read transcript from JSONL file
        ↓
Generate summary via Ollama
        ↓
Store snapshot in SQLite
        ↓
Generate embedding for search
        ↓
Context available via clx_recall
```
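The "generate embedding for search" step is what makes snapshots findable later: semantic recall typically compares a prompt embedding against stored snapshot embeddings by cosine similarity. A standard implementation is sketched below; in practice CLX delegates vector search to sqlite-vec, so this is illustrative only.

```rust
// Cosine similarity between two embedding vectors: 1.0 for identical
// direction, 0.0 for orthogonal. The similarity_threshold in config
// would be applied to this score by the caller.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0 // guard against zero vectors
    } else {
        dot / (norm_a * norm_b)
    }
}
```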
```
clx/
├── crates/
│   ├── clx-core/          # Core library
│   │   └── src/
│   │       ├── config.rs      # Configuration management
│   │       ├── storage/       # SQLite storage (sessions, snapshots, rules)
│   │       ├── policy/        # Command validation (L0 rules + L1 LLM)
│   │       ├── recall.rs      # Hybrid search engine (semantic + FTS5)
│   │       ├── ollama.rs      # Ollama client
│   │       └── embeddings.rs  # Vector search
│   ├── clx-hook/          # Hook handler binary
│   ├── clx-mcp/           # MCP server binary
│   └── clx/               # CLI binary + dashboard
├── scripts/               # Docker compose, service management, packaging
├── install.sh             # Build-from-source installer
├── INSTALL.md             # Installation guide
└── CONTRIBUTING.md        # Contribution guide
```
```sh
# Build
cargo build

# Test
cargo test

# Run with verbose logging
RUST_LOG=debug ./target/debug/clx dashboard
```

See CONTRIBUTING.md for development setup and guidelines.
MPL-2.0