A local-first Agentic AI runtime designed to provide secure agent execution through Just-in-Time (JIT) Micro-VMs.
LuminaGuard replaces the insecure "vibe coding" paradigm with rigorous "Agentic Engineering" - combining the usability of OpenClaw with the security of Nanoclaw.
- Just-in-Time Micro-VMs: Agents run in ephemeral Micro-VMs that are spawned on-demand and destroyed after task completion
- Native MCP Support: Connects to any standard MCP Server (GitHub, Slack, Postgres, etc.)
- The Approval Cliff: High-stakes actions require explicit human approval before execution
- Defense in Depth: Multiple security layers including Rust memory safety, KVM virtualization, seccomp filters, and firewall isolation
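The spawn-on-demand, destroy-on-completion lifecycle from the first bullet can be sketched as a context manager. This is an illustrative sketch only: `ephemeral_vm` and its behavior are hypothetical, not LuminaGuard's actual API.

```python
import contextlib
import time
import uuid

# Hedged sketch of the JIT micro-VM lifecycle described above: a VM is
# spawned on demand for one task and destroyed when the task finishes,
# even if the task raises. Names here are illustrative.

@contextlib.contextmanager
def ephemeral_vm(task: str):
    vm_id = uuid.uuid4().hex[:8]
    start = time.perf_counter()
    print(f"spawned VM {vm_id} for {task!r}")
    try:
        yield vm_id
    finally:
        # Destroy unconditionally so no VM outlives its task.
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"destroyed VM {vm_id} after {elapsed_ms:.1f}ms")

with ephemeral_vm("lint-check") as vm:
    pass  # the agent task would run inside the VM here
```

The `finally` block is the key property: teardown happens on success and on failure alike, which is what makes the VMs genuinely ephemeral.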
- Rust 1.70+
- Python 3.10+
- Firecracker (for VM features)
- Linux with KVM support
# Clone the repository
git clone https://github.com/anchapin/LuminaGuard.git
cd LuminaGuard
# Install all dependencies
make install
# Run tests to verify setup
make test

For developers who want hot-reload development, an extensible architecture, and full source-code control:
git clone https://github.com/anchapin/luminaguard.git
cd luminaguard
./scripts/install-dev-mode.sh

See DEV_MODE_GUIDE.md for the full developer setup with hot-reload and debugging tools.
The quickest way to get a LuminaGuard bot running (no Firecracker or KVM required):
# One-shot: send a message and see the reply
python agent/create_bot.py --message "Hello"
# Interactive REPL
python agent/create_bot.py
# Check setup status
python agent/create_bot.py --status

Or from Python:
from bot_factory import create_bot
bot = create_bot()
print(bot.chat("Hello"))
# → "Please setup environment variables for your LLM"
# (set OPENAI_API_KEY or another LLM env var to enable AI responses)

Set an LLM provider to enable real AI responses:
# Copy the example env file and add your key(s)
cp .env.example .env
# Then edit .env and set at least one of:
# OPENAI_API_KEY, ANTHROPIC_API_KEY, or OLLAMA_HOST
# Or export directly in your shell:
export OPENAI_API_KEY=sk-…                # OpenAI / GPT
export ANTHROPIC_API_KEY=sk-ant-…         # Anthropic / Claude
export OLLAMA_HOST=http://localhost:11434 # Local Ollama (free)

See .env.example for the full list of configurable variables.
See agent/bot_factory.py for the full API and agent/create_bot.py for CLI options.
from mcp_client import McpClient
# Connect to a filesystem MCP server
with McpClient("filesystem", ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"]) as client:
    tools = client.list_tools()
    result = client.call_tool("read_file", {"path": "test.txt"})
    print(f"Content: {result}")

use luminaguard_orchestrator::mcp::McpClient;
let mut client = McpClient::connect_stdio(
"filesystem",
&["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
).await?;
client.initialize().await?;
let tools = client.list_tools().await?;

use luminaguard_orchestrator::vm;
let handle = vm::spawn_vm("my-task").await?;
println!("VM {} spawned in {:.2}ms", handle.id, handle.spawn_time_ms);
vm::destroy_vm(handle).await?;

LuminaGuard uses a "Rust Wrapper, Python Brain" design:
- Orchestrator (Rust): Handles micro-VM spawning, MCP connections, and security
- Agent (Python): The reasoning loop for agent decision-making
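One way to picture the split above is as message passing: the Python brain proposes a tool call as structured data, and the Rust wrapper validates and executes it. The sketch below simulates both sides in Python with hypothetical names; it is not LuminaGuard's wire format.

```python
import json

# Illustrative sketch of the "Rust Wrapper, Python Brain" design: the
# agent (brain) emits a tool-call request as JSON, and a stand-in for
# the Rust orchestrator (wrapper) parses and "executes" it.

def agent_decide(goal: str) -> dict:
    """Python 'brain': choose the next tool call for a goal."""
    return {"tool": "read_file", "args": {"path": "README.md"}, "goal": goal}

def orchestrator_execute(request_json: str) -> str:
    """Stand-in for the Rust 'wrapper': validate and run the call."""
    req = json.loads(request_json)
    return json.dumps({"status": "ok", "tool": req["tool"]})

request = json.dumps(agent_decide("summarize the repo"))
print(orchestrator_execute(request))  # {"status": "ok", "tool": "read_file"}
```

Keeping the execution side in Rust means memory safety and sandboxing live in one process, while the reasoning loop stays easy to iterate on in Python.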
See CLAUDE.md for detailed developer documentation.
LuminaGuard implements a strict approval system:
| Action Type | Description | Approval Required |
|---|---|---|
| Green | Reading files, searching, checking logs | No |
| Red | Editing code, deleting files, sending emails | Yes |
Before any Red action executes, users see a "Diff Card" showing exactly what will change.
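As a minimal sketch of the Green/Red policy in the table above (action names and the `classify` helper are hypothetical, not LuminaGuard's API):

```python
from dataclasses import dataclass

# Hedged sketch of the approval policy described above. Action names
# are illustrative; the real system's taxonomy may differ.

GREEN_ACTIONS = {"read_file", "search", "check_logs"}      # auto-approved
RED_ACTIONS = {"edit_code", "delete_file", "send_email"}   # need approval

@dataclass
class ApprovalDecision:
    action: str
    requires_approval: bool

def classify(action: str) -> ApprovalDecision:
    """Return whether a proposed agent action needs human approval."""
    if action in GREEN_ACTIONS:
        return ApprovalDecision(action, False)
    # Anything not explicitly Green is treated as Red: fail closed.
    return ApprovalDecision(action, True)

print(classify("read_file").requires_approval)    # False
print(classify("delete_file").requires_approval)  # True
```

Note the fail-closed default: an action the policy has never seen is routed to human approval rather than silently allowed.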
make test # Run all tests
make test-rust # Run Rust tests only
make test-python # Run Python tests only
make fmt # Format code
make lint        # Run linters

| Component | Coverage | Target |
|---|---|---|
| Rust (Orchestrator) | 74.2% | 75.0% |
| Python (Agent) | 78.0% | 75.0% |
See LICENSE file.