CLX - Claude Code Extension

CI License: MPL-2.0 Claude Code Ready

Note: Currently supports macOS (ARM64 and Intel) only.

Intelligent command validation and context persistence for Claude Code.

Features

  • Command Validation - Two-layer validation system:

    • Layer 0: Fast deterministic whitelist/blacklist rules (~1ms)
    • Layer 1: LLM-based risk assessment via Ollama (~100-300ms)
  • Context Persistence - SQLite-based storage with semantic search:

    • Automatic snapshots before context compression
    • Vector embeddings for semantic recall
    • Session history and analytics
  • Auto-Recall - Automatic context injection on every prompt:

    • Hybrid search: semantic (sqlite-vec) + FTS5 full-text
    • Relevant past sessions injected as additionalContext
    • Configurable thresholds, timeouts, and result limits
    • Graceful degradation: Ollama down → FTS5 fallback → orchestrator-only
  • User Learning - Adapts to your workflow:

    • Tracks approved/denied commands
    • Auto-generates rules based on usage patterns
  • MCP Tools - Claude can access:

    • clx_recall - Search historical context
    • clx_remember - Explicitly save information
    • clx_checkpoint - Create manual snapshots
    • clx_rules - Manage validation rules
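The graceful-degradation chain for auto-recall can be sketched as follows. This is an illustrative sketch only, not CLX's actual code; the function names and the `ollama_up` flag (standing in for a real health check) are hypothetical:

```rust
// Hypothetical sketch of the auto-recall fallback chain:
// semantic search first, FTS5 on failure, empty context as a last resort.

#[derive(Debug, PartialEq)]
enum RecallSource {
    Semantic,
    Fts5,
    None,
}

// Stand-in for embedding-based search via Ollama (fails when Ollama is down).
fn semantic_search(query: &str, ollama_up: bool) -> Result<Vec<String>, String> {
    if !ollama_up {
        return Err("ollama unreachable".into());
    }
    Ok(vec![format!("semantic hit for '{query}'")])
}

// Stand-in for SQLite FTS5 full-text search (no Ollama dependency).
fn fts5_search(query: &str) -> Result<Vec<String>, String> {
    Ok(vec![format!("fts5 hit for '{query}'")])
}

fn recall(query: &str, ollama_up: bool, fallback_to_fts: bool) -> (RecallSource, Vec<String>) {
    match semantic_search(query, ollama_up) {
        Ok(hits) => (RecallSource::Semantic, hits),
        Err(_) if fallback_to_fts => match fts5_search(query) {
            Ok(hits) => (RecallSource::Fts5, hits),
            Err(_) => (RecallSource::None, vec![]),
        },
        Err(_) => (RecallSource::None, vec![]),
    }
}
```

With `fallback_to_fts: false` (the config knob shown under Configuration below), a failed semantic search degrades straight to orchestrator-only instead of trying FTS5.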

Quick Install

macOS (Homebrew)

brew tap blackaxgit/clx
brew install clx

This installs clx, clx-hook, and clx-mcp. To update:

brew update && brew upgrade clx

Install with Claude Code

Let Claude handle the entire setup. You just need macOS, Ollama, and Rust installed.

1. Make sure Ollama is running:

ollama serve

2. Paste this into Claude Code:

Install CLX from https://github.com/blackaxgit/clx:
1. Clone the repo and build: git clone https://github.com/blackaxgit/clx.git /tmp/clx && cd /tmp/clx && cargo build --release
2. Run the installer: ./target/release/clx install
3. Pull Ollama models: ollama pull qwen3:1.7b && ollama pull qwen3-embedding:0.6b
4. Add to PATH: echo 'export PATH="$HOME/.clx/bin:$PATH"' >> ~/.zshrc
5. Tell me to restart Claude Code when done

3. Restart Claude Code.

Done. Hooks are validating commands, context is being persisted, and MCP tools are available.

One-line install (alternative)

curl -fsSL https://raw.githubusercontent.com/blackaxgit/clx/main/install.sh | bash

Manual Install

Full control over every step. Requires macOS (ARM64 or Intel), Rust 1.85+, and Ollama.

1. Install prerequisites:

# Rust (if not installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Ollama (if not installed), or download from https://ollama.com
brew install ollama
ollama serve   # start the server

2. Build and install CLX:

git clone https://github.com/blackaxgit/clx.git
cd clx
cargo build --release
./target/release/clx install

3. Pull the required Ollama models:

ollama pull qwen3:1.7b
ollama pull qwen3-embedding:0.6b

4. Add CLX to your PATH:

echo 'export PATH="$HOME/.clx/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

5. Restart Claude Code, then verify:

clx dashboard

You should see the interactive dashboard with session history and system status.

See INSTALL.md for troubleshooting.

Usage

CLI Commands

# Check status
clx dashboard

# Search context
clx recall "authentication bug"

# View/edit configuration
clx config
clx config edit

# Manage rules
clx rules list
clx rules allow "npm install *"
clx rules deny "rm -rf /"

# Generate shell completions (v0.2+)
clx completions bash > ~/.clx-completion.bash
clx completions zsh > ~/.clx-completion.zsh

# Manage embeddings (v0.2+)
clx embeddings status        # Check model and dimensions
clx embeddings rebuild       # Rebuild for model migration

# Uninstall
clx uninstall
clx uninstall --purge  # Also removes ~/.clx

Configuration

Edit ~/.clx/config.yaml:

validator:
  enabled: true
  layer1_enabled: true        # LLM validation
  layer1_timeout_ms: 30000
  default_decision: "ask"     # allow, deny, ask

context:
  enabled: true
  auto_snapshot: true

ollama:
  host: "http://127.0.0.1:11434"
  model: "qwen3:1.7b"
  embedding_model: "qwen3-embedding:0.6b"
  timeout_ms: 60000

user_learning:
  enabled: true
  auto_whitelist_threshold: 3   # Auto-add after N allows
  auto_blacklist_threshold: 2   # Auto-block after N denies

logging:
  level: "info"
  file: "~/.clx/logs/clx.log"

auto_recall:
  enabled: true
  max_results: 3              # Top-K results to inject
  similarity_threshold: 0.35  # Min relevance score (0.0-1.0)
  max_context_chars: 1000     # Max chars for recall context
  timeout_ms: 500             # Recall timeout per prompt
  fallback_to_fts: true       # Use FTS5 if semantic fails
  include_key_facts: true     # Include key facts in context
  min_prompt_len: 10          # Skip recall for short prompts

Custom Rules

Edit ~/.clx/rules/default.yaml:

whitelist:
  - pattern: "Bash(npm:test*)"
    description: "Allow npm test commands"
  - pattern: "Bash(cargo:build*)"
    description: "Allow cargo build"

blacklist:
  - pattern: "Bash(rm:-rf /*)"
    description: "Block recursive delete from root"
  - pattern: "Bash(curl:*|bash)"
    description: "Block pipe to shell"

Custom LLM Prompt

Edit ~/.clx/prompts/validator.txt to customize risk assessment.

How It Works

Command Validation Flow

Claude requests command
        ↓
PreToolUse hook fires
        ↓
Layer 0: Check whitelist/blacklist
    ├─ Match whitelist → Allow
    ├─ Match blacklist → Deny
    └─ Unknown → Continue
        ↓
Layer 1: Ollama risk assessment
    ├─ Score 1-3 → Allow
    ├─ Score 4-7 → Ask user
    └─ Score 8-10 → Deny
        ↓
User confirms (if Ask)
        ↓
Command executes
        ↓
PostToolUse logs result
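The two-layer decision above can be sketched in a few lines. Everything here is illustrative: the type and function names are hypothetical, simple prefix matching stands in for CLX's real rule patterns, and checking the blacklist before the whitelist is an assumption of this sketch:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny,
    Ask,
}

// Layer 0: deterministic list lookup (~1ms in CLX). Prefix matching is a
// stand-in for the real pattern rules; blacklist-first ordering is assumed.
fn layer0(cmd: &str, whitelist: &[&str], blacklist: &[&str]) -> Option<Decision> {
    if blacklist.iter().any(|p| cmd.starts_with(p)) {
        return Some(Decision::Deny);
    }
    if whitelist.iter().any(|p| cmd.starts_with(p)) {
        return Some(Decision::Allow);
    }
    None // unknown: fall through to Layer 1
}

// Layer 1: map an LLM risk score (1-10) onto a decision,
// using the bands from the flow above.
fn layer1(risk_score: u8) -> Decision {
    match risk_score {
        1..=3 => Decision::Allow,
        4..=7 => Decision::Ask,
        _ => Decision::Deny,
    }
}

fn validate(cmd: &str, whitelist: &[&str], blacklist: &[&str], risk_score: u8) -> Decision {
    layer0(cmd, whitelist, blacklist).unwrap_or_else(|| layer1(risk_score))
}
```

A whitelisted command short-circuits to Allow without ever paying the Layer 1 Ollama round-trip; only unknown commands incur the ~100-300ms risk assessment.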

Context Persistence Flow

PreCompact hook fires (before compression)
        ↓
Read transcript from JSONL file
        ↓
Generate summary via Ollama
        ↓
Store snapshot in SQLite
        ↓
Generate embedding for search
        ↓
Context available via clx_recall
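Semantic recall over the stored embeddings amounts to ranking snapshots by cosine similarity against the prompt's embedding, then applying the similarity_threshold and max_results limits from the auto_recall config. A minimal sketch under those assumptions; in CLX the nearest-neighbor work is delegated to sqlite-vec rather than done in-process like this:

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 {
        0.0
    } else {
        dot / (na * nb)
    }
}

/// Rank snapshot embeddings against a query embedding, keeping only
/// scores at or above `threshold`, and at most `max_results` of them.
fn recall_top_k(
    query: &[f32],
    snapshots: &[(usize, Vec<f32>)], // (snapshot id, embedding)
    threshold: f32,
    max_results: usize,
) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = snapshots
        .iter()
        .map(|(id, emb)| (*id, cosine(query, emb)))
        .filter(|(_, s)| *s >= threshold)
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(max_results).map(|(id, _)| id).collect()
}
```

The default config (similarity_threshold: 0.35, max_results: 3) would drop weakly related snapshots entirely and inject at most the top three as additionalContext.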

Project Structure

clx/
├── crates/
│   ├── clx-core/       # Core library
│   │   └── src/
│   │       ├── config.rs      # Configuration management
│   │       ├── storage/       # SQLite storage (sessions, snapshots, rules)
│   │       ├── policy/        # Command validation (L0 rules + L1 LLM)
│   │       ├── recall.rs      # Hybrid search engine (semantic + FTS5)
│   │       ├── ollama.rs      # Ollama client
│   │       └── embeddings.rs  # Vector search
│   ├── clx-hook/       # Hook handler binary
│   ├── clx-mcp/        # MCP server binary
│   └── clx/            # CLI binary + dashboard
├── scripts/            # Docker compose, service management, packaging
├── install.sh          # Build-from-source installer
├── INSTALL.md          # Installation guide
└── CONTRIBUTING.md     # Contribution guide

Development

# Build
cargo build

# Test
cargo test

# Run with verbose logging
RUST_LOG=debug ./target/debug/clx dashboard

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MPL-2.0

About

🛡️ Rust CLI extension for Claude Code: LLM-powered command validation, SQLite context persistence & MCP tools. Two-layer safety (rules + Ollama), vector semantic search, and user-adaptive learning.
