HivePilot is an AI command center for multi-repo workflows. It dispatches Claude Code, LangChain, LangGraph, CrewAI, shell runners, Codex/Gemini/OpenCode/Ollama CLIs, and Git/GitHub automation from a single YAML-driven orchestrator. This release adds an interactive TUI, concurrency, structured logging + run history, state persistence, and optional API fallbacks for every runner.
- Interactive mode – `hivepilot interactive` (Questionary) lets you choose projects/tasks/pipelines on the fly.
- Parallel execution – ThreadPool-backed scheduling spreads a task/pipeline across many repositories (`--concurrency` or `.env`).
- YAML-first runners – Define Claude/shell/LangChain/internal/Codex/Gemini/OpenCode/Ollama/OpenRouter runners once and reference them everywhere.
- CLI ↔ API switch – Every CLI runner can flip to API mode (OpenAI, Anthropic, Google Gemini, Mistral, Perplexity, OpenRouter) per YAML; CLI is the default fallback.
- Pipelines + multi-step API/CLI workflows – Chain tasks with mixed engines (API pre-check → CLI codemod → shell validation).
- Structured logging & state store – `runs/<timestamp>/summary.json`, JSON logs, and a SQLite `state.db` capture every run for later inspection, TUI dashboards, and scheduling.
- Git/GitHub automation – Built-in services handle branch/commit/push, `gh repo/issue/release`, and YAML tasks (`gh-*`) for declarative automation.
- Rich extras – LangGraph, LangChain, CrewAI, Textual dashboard, scheduler, and profile-driven Claude model selection (Sonnet/Opus/Haiku).
- Discovery + remote API – `hivepilot discover` scans local/GitHub repos, and `hivepilot api serve` exposes FastAPI endpoints for remote triggers/ChatOps.
- Policies & notifications – `policies.yaml` defines per-project rules (auto-git/approvals), and Slack/Discord/Telegram webhooks notify on start/completion/failure.
- Secrets & knowledge-aware prompts – Steps reference `secrets:` blocks (env/SOPS/etc.) and `knowledge_files:` to inject repo context via LangChain/FAISS embeddings.
- RBAC + tokens – CLI/API commands enforce `read`, `run`, `approve`, and `admin` roles. Manage tokens via `hivepilot tokens …` or add them to `api_tokens.yaml`; supply tokens via `--token` or `HIVEPILOT_API_TOKEN`.
- ChatOps – Slack slash commands (and an optional Telegram bot) can list/approve runs. `POST /chatops/slack` accepts Slack payloads; use `HIVEPILOT_CHATOPS_TOKEN` to authorize ChatOps flows.
```
hivepilot/
├── hivepilot/
│   ├── cli.py            # Typer CLI + interactive mode + gh subcommands
│   ├── config.py         # Pydantic Settings (.env)
│   ├── orchestrator.py   # Scheduler, concurrency, pipelines
│   ├── registry.py       # Maps runner names to implementations
│   ├── models.py         # Pydantic schemas for projects/tasks/pipelines
│   ├── pipelines.py      # Pipeline helpers
│   ├── runners/          # Claude, shell, LangChain, Codex/Gemini/OpenCode/Ollama/OpenRouter
│   ├── services/         # git_service, github_service, project_service, pipeline_service
│   └── utils/            # io (runs/summary), logging (structlog), shell helpers
├── prompts/
├── projects.yaml
├── tasks.yaml
├── pipelines.yaml
├── model_profiles.yaml   # Claude profile map (coding/architecture/automation)
├── .env.example
├── requirements.txt
└── README.md
```
Everything is configured via YAML (`projects`, `tasks`, `pipelines`, `model_profiles`); `.env` only tweaks global paths/commands.
```yaml
projects:
  example-api:
    path: ~/dev/example-api
    description: Example backend service
    claude_md: CLAUDE.md
    default_branch: main
    owner_repo: your-user/example-api
    env:
      PYTHONUNBUFFERED: "1"
```

```yaml
runners:
  claude-docs:
    kind: claude
    command: claude
    options:
      profile: automation   # maps to model_profiles.yaml
  validation-suite:
    kind: shell
    command: |
      if [ -f package.json ]; then npm test || true; fi
      if [ -f pyproject.toml ]; then pytest || true; fi
  codex-default:
    kind: codex
    command: codex
    options:
      mode: cli             # default; switch to api when needed
      api_provider: openai
      api_model: gpt-4o
  container-validation:
    kind: container
    command: |
      pip install -r requirements.txt && pytest
    options:
      image: python:3.11
      volumes:
        - ${PWD}:/workspace
        - /tmp/cache:/workspace/.cache
```
```yaml
tasks:
  docs:
    description: Rewrite documentation
    steps:
      - name: rewrite docs
        runner: claude-docs
        prompt_file: prompts/docs_rewrite.md
        metadata:
          claude_profile: automation
        knowledge_files: ["README.md", "docs/architecture.md"]
        secrets:
          OPENAI_API_KEY:
            source: env
            key: OPENAI_API_KEY
    artifacts:
      capture: ["diff"]
      exporters:
        - target: local
        - target: s3
          bucket: hivepilot-artifacts
          prefix: docs-runs
    git:
      commit: true
      push: true
      create_pr: true
  codex-audit:
    description: Architecture scan via Codex (CLI/API)
    steps:
      - name: codex review
        runner: codex-default
        prompt_file: prompts/architecture_review.md
  refactor:
    description: Refactor the codebase with a light validation pass.
    steps:
      - name: refactor
        runner: claude
        runner_ref: claude-refactor
        prompt_file: refactor.md
        timeout_seconds: 5400
      - name: validation
        runner: container
        runner_ref: container-validation
        allow_failure: true
        timeout_seconds: 1800
  gh-repo-init-task:
    description: Provision the GitHub repo through internal runners
    steps:
      - name: repo init
        runner: shell
        command: hivepilot gh repo-init {project_name} --set-remote --push
```

Secrets declared per step (env or file sources) are resolved right before the runner executes and injected into the CLI/API/container environment, so commands receive tokens such as `OPENAI_API_KEY` without committing them to YAML.
Shell/CLI commands accept `{variables}`: `project_name`, `project_path`, `project_default_branch`, `project_owner_repo`, `task_name`, `step_name`, `extra_prompt`. Escape literal braces as `{{` / `}}`.
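This brace substitution behaves like Python's `str.format`; a minimal sketch of how a command template expands (the variable names come from the list above, the values here are illustrative):

```python
# Sketch of HivePilot-style {variable} templating via Python's str.format.
# Variable names follow the README; the values are illustrative only.
context = {
    "project_name": "example-api",
    "project_path": "~/dev/example-api",
    "project_default_branch": "main",
    "project_owner_repo": "your-user/example-api",
    "task_name": "docs",
    "step_name": "rewrite docs",
    "extra_prompt": "Focus on auth",
}

template = "hivepilot gh repo-init {project_name} --set-remote --push"
print(template.format(**context))
# Doubled braces survive as literal braces:
print("echo '{{literal}}' for {task_name}".format(**context))
```

Running this expands the first template to `hivepilot gh repo-init example-api --set-remote --push` and leaves `{literal}` untouched in the second.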
```yaml
claude_profiles:
  coding:
    model: sonnet        # best for coding
  architecture:
    model: opus          # deep reasoning / architecture
  automation:
    model: haiku         # fast automations
```

Reference profiles via `metadata.claude_profile` or runner `options.profile`. Add your own (e.g., `review`, `summary`).
Set `options.mode: api` (or `metadata.mode`) to call APIs instead of CLIs. Supported `api_provider` values: `openai`, `anthropic`, `google`, `mistral`, `perplexity`, `openrouter`.
Required env vars:
| Provider | Env var |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Google Gemini | GOOGLE_API_KEY |
| Mistral AI | MISTRAL_API_KEY |
| Perplexity | PERPLEXITY_API_KEY |
| OpenRouter | OPENROUTER_API_KEY |
CLI remains the default fallback; switching back is as simple as removing `mode: api`.
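For instance, flipping the `codex-default` runner from the earlier example into API mode is a one-line change (a sketch; the field names follow the runner example above):

```yaml
runners:
  codex-default:
    kind: codex
    command: codex
    options:
      mode: api            # was: cli; delete this line to fall back to the CLI
      api_provider: openai # requires OPENAI_API_KEY in the environment
      api_model: gpt-4o
```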
```yaml
pipelines:
  pentest-fix-review:
    description: Pentest → refactor → docs
    stages:
      - name: pentest
        task: pentest
      - name: refactor follow-up
        task: refactor
      - name: docs summary
        task: docs
  gh-repo-init:
    description: Ensure GitHub repo exists & push default branch
    stages:
      - name: initialize repo
        task: gh-repo-init-task
```

```yaml
policies:
  default:
    allow_auto_git: true
    require_approval: false
    allow_containers: true
  projects:
    example-api:
      allow_auto_git: false
      require_approval: true
      allow_containers: false
```

Policies are evaluated before every run. If `allow_auto_git` is false, `--auto-git` is blocked for that project. Extend via plugins to enforce approvals or multi-factor flows.
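The policy lookup amounts to per-project values overriding the `default:` block; a minimal sketch of that merge (the function name and dict layout are hypothetical, not HivePilot's actual API):

```python
# Hypothetical sketch of policy resolution: per-project entries override
# the default block, mirroring the policies.yaml example above.
DEFAULT = {"allow_auto_git": True, "require_approval": False, "allow_containers": True}
PROJECTS = {
    "example-api": {"allow_auto_git": False, "require_approval": True,
                    "allow_containers": False},
}

def effective_policy(project: str) -> dict:
    policy = dict(DEFAULT)                 # start from defaults
    policy.update(PROJECTS.get(project, {}))  # apply per-project overrides
    return policy

print(effective_policy("example-api")["allow_auto_git"])  # False: --auto-git is blocked
print(effective_policy("other-repo")["allow_auto_git"])   # True: defaults apply
```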
- Runs on projects with `require_approval: true` are queued until approved.
- Review pending runs via `hivepilot approvals list` or the `GET /approvals` API.
- Approve/deny via CLI (`hivepilot approvals approve <run_id>`), API (`POST /approvals/{run_id}`), or via Slack/Discord/Telegram if you wire those webhooks to the API endpoint.
- Notifications include the run ID so approvers know what to act on. Once approved, the orchestrator resumes the run with the same run ID and updates `state.db`.
```yaml
schedules:
  docs-weekly:
    task: docs
    projects: ["example-api"]
    interval_minutes: 10080
    enabled: true
```

Use `hivepilot schedule list` to inspect schedules and `hivepilot schedule run` to execute those whose interval has elapsed. Schedule timestamps are tracked in `state.db`.
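`hivepilot schedule run` only fires schedules whose interval has elapsed, which reduces to a timestamp comparison like the following sketch (the helper name and how timestamps are stored in `state.db` are assumptions):

```python
from datetime import datetime, timedelta
from typing import Optional

def is_due(last_run: Optional[datetime], interval_minutes: int, now: datetime) -> bool:
    """A schedule is due if it never ran, or its interval has elapsed."""
    if last_run is None:
        return True
    return now - last_run >= timedelta(minutes=interval_minutes)

now = datetime(2024, 6, 15, 12, 0)
print(is_due(None, 10080, now))                     # True: never ran
print(is_due(now - timedelta(days=8), 10080, now))  # True: 8 days > 7-day interval
print(is_due(now - timedelta(days=3), 10080, now))  # False: interval not yet elapsed
```

Here `10080` minutes is the weekly interval from the `docs-weekly` example.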
Set any combination of:

- `SLACK_WEBHOOK_URL`
- `DISCORD_WEBHOOK_URL`
- `TELEGRAM_BOT_TOKEN` + `TELEGRAM_CHAT_ID`
When present, HivePilot sends start/completion/failure notifications automatically. For richer flows (approvals, custom alerts), add plugins under `plugins/` or point `HIVEPILOT_PLUGINS_ENTRY` to your module.
```yaml
tokens:
  - token: 0123abcd...
    role: admin
    note: "local admin"
  - token: deadbeef...
    role: run
    note: "CI pipeline"
```

Manage tokens with `hivepilot tokens add/list/remove` (admin role required). Roles map to permissions:
| Role | Permissions |
|---|---|
| `read` | list projects/tasks/schedules/approvals |
| `run` | trigger tasks/pipelines, schedule runs |
| `approve` | approve/deny queued runs |
| `admin` | manage tokens, policies, API server |
CLI commands require `--token <value>` (or set `HIVEPILOT_API_TOKEN`). API requests must include `Authorization: Bearer <token>`. Tokens are stored in `api_tokens.yaml` and synced to `state.db` for quick lookup.
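A role check against that table is just set membership; a hypothetical sketch (the permission names map to the table above, and whether lower roles inherit read access is an assumption here, not documented behavior):

```python
# Hypothetical RBAC check: each role grants a set of permissions taken
# from the roles table above; admin grants everything.
ROLE_PERMS = {
    "read":    {"list"},
    "run":     {"list", "trigger"},
    "approve": {"list", "approve"},
    "admin":   {"list", "trigger", "approve", "manage"},
}

def allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the given permission."""
    return permission in ROLE_PERMS.get(role, set())

print(allowed("run", "trigger"))   # True
print(allowed("read", "approve"))  # False
```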
- Set `HIVEPILOT_CHATOPS_TOKEN` to a token with `run`/`approve` permissions.
- Slack: configure slash commands to hit `POST /chatops/slack`: `/hivepilot-run <project> <task>`, `/hivepilot-approvals`, `/hivepilot-approve <run_id>`, `/hivepilot-deny <run_id>`.
- Discord: send messages such as `!hp run <project> <task>` or `!hp approvals` to the endpoint bound to `POST /chatops/discord`.
- Telegram: point your bot webhook to `POST /chatops/telegram` and use `/hp_run`, `/hp_approvals`, `/hp_approve`, `/hp_deny`.
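A ChatOps endpoint mostly just tokenizes the incoming command text; a minimal parsing sketch (the slash-command names come from the Slack list above, the action dict and dispatch are assumptions):

```python
def parse_slack_command(command: str, text: str) -> dict:
    """Map a Slack slash command plus its argument text to an action dict.
    Command names follow the README; error handling is deliberately minimal."""
    args = text.split()
    if command == "/hivepilot-run" and len(args) >= 2:
        return {"action": "run", "project": args[0], "task": args[1]}
    if command == "/hivepilot-approvals":
        return {"action": "list_approvals"}
    if command in ("/hivepilot-approve", "/hivepilot-deny") and args:
        verb = "approve" if command.endswith("approve") else "deny"
        return {"action": verb, "run_id": args[0]}
    return {"action": "help"}  # unknown or malformed command

print(parse_slack_command("/hivepilot-run", "example-api docs"))
# {'action': 'run', 'project': 'example-api', 'task': 'docs'}
```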
```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cp .env.example .env
```

Optional extras:

```bash
pip install -e .[langgraph]
pip install -e .[crewai]
pip install -e .[full]   # langgraph + crewai + textual + langchain extras
```

Docker:

```bash
docker compose build
docker compose run --rm hivepilot hivepilot doctor
```

```bash
hivepilot lint
hivepilot doctor
hivepilot list-projects
hivepilot list-tasks
hivepilot list-pipelines
hivepilot run example-api docs
hivepilot run example-api docs --project example-site --concurrency 2
hivepilot run example-api pentest --all --auto-git
hivepilot run example-api gh-issue-from-extra --extra-prompt "Docs refresh"
hivepilot run example-api docs --extra-prompt "Focus on auth"   # uses knowledge-aware prompts
hivepilot run example-api codex-audit
hivepilot run example-api gemini-brief
hivepilot run example-api opencode-fix
hivepilot run example-api ollama-scan
hivepilot run-pipeline example-api pentest-fix-review
hivepilot interactive
hivepilot dashboard                  # Textual run history
hivepilot approvals list             # show pending approvals
hivepilot approvals approve 42 --approver alice
hivepilot tokens add --role run --note "CI worker"
hivepilot tokens list
hivepilot tokens remove 0123abcd...

# GitHub helpers
hivepilot gh repo-init example-api --push
hivepilot gh issue example-api "Docs refresh" --body "Regenerate the README"
hivepilot gh release example-api v0.2.0 --title "Docs refresh"

# API / scheduler / plugins
hivepilot api serve --host 0.0.0.0 --port 8045
curl -X POST http://localhost:8045/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-token>" \
  -d '{"task":"docs","projects":["example-api"],"extra_prompt":"Focus on auth"}'
# (Upcoming scheduler commands hook into the same API/state store.)

# Discovery helpers
hivepilot discover --root ~/dev --max-depth 2
hivepilot discover --github-org your-org

# Scheduler helpers
hivepilot schedule list
hivepilot schedule run

# All commands accept --token / HIVEPILOT_API_TOKEN to enforce RBAC

# Metrics endpoint
curl http://localhost:8045/metrics
```

- Each run writes `runs/<timestamp>/summary.json`.
- Structlog JSON logs land in `runs/logs/hivepilot.log`.
- The `.env` setting `HIVEPILOT_OUTPUT_FORMAT` can switch the summary format (json/plain).
- SQLite `state.db` records run metadata for dashboards, exports, and schedulers.
- `hivepilot dashboard` (requires `HIVEPILOT_ENABLE_TEXTUAL_UI=true`) opens a Textual UI to browse history, details, and active runs.
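Because every run leaves a `summary.json` behind, run history is easy to post-process; a sketch that tallies statuses across run folders (the `status` field and summary schema are assumptions, not HivePilot's documented format):

```python
import json
from collections import Counter
from pathlib import Path

def tally_runs(runs_dir: Path) -> Counter:
    """Count run outcomes across runs/<timestamp>/summary.json files.
    Assumes each summary carries a top-level "status" field (hypothetical)."""
    counts = Counter()
    for summary in runs_dir.glob("*/summary.json"):
        data = json.loads(summary.read_text())
        counts[data.get("status", "unknown")] += 1
    return counts
```

For example, `tally_runs(Path("runs"))` would report how many runs succeeded versus failed, which is the same data the Textual dashboard reads from `state.db`.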
- Native – Claude/shell workflows via the runner registry.
- LangGraph – reference `graph: module:function` to compile/invoke graphs.
- CrewAI – tasks/pipelines can point to a `build_crew` builder in `workflows/`.
- LangChain – the runner loads an `LLMChain`.
- CLI/API hybrids – Codex, Gemini, OpenCode, Ollama, OpenRouter runners flip between CLI (fast, offline) and API (hosted) per YAML.
- Multi-step workflows – mix API and CLI steps in pipelines (e.g., API analysis, CLI codemod, shell validation).
- Plugin hooks – drop Python files in `plugins/` (or set `HIVEPILOT_PLUGINS_ENTRY=module:function`) to register hooks like `before_step`/`after_step`, enabling Slack notifications, approvals, vulnerability scanners, etc. The example `plugins/sample.py` logs every step; use it as a starting point for Slack/email approvals or external scanners.
- Artifacts – after each run, `runs/<timestamp>/artifacts` contains `results.json`, Git patches (if enabled), etc. Configure exporters (local/S3) via `task.artifacts` to ship results automatically.
- Container runner – run steps inside Docker images by setting `kind: container` with image/command options; policies can block/allow container use per project.
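A plugin module in that spirit might look like the following sketch (the `register` entry point and hook signatures are assumptions; only the `before_step`/`after_step` hook names come from the text above):

```python
# Hypothetical plugins/ module sketch. HivePilot's real registration API is
# not shown in this README, so `register` and the signatures are assumptions.
import logging

log = logging.getLogger("hivepilot.plugins.sample")

def before_step(project: str, task: str, step: str) -> None:
    log.info("starting %s/%s step=%s", project, task, step)

def after_step(project: str, task: str, step: str, status: str) -> None:
    log.info("finished %s/%s step=%s status=%s", project, task, step, status)

def register(hooks) -> None:
    """Entry point a plugin module might expose (name assumed)."""
    hooks.add("before_step", before_step)
    hooks.add("after_step", after_step)
```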
Add new runners by dropping a Python class in `hivepilot/runners/` and registering it in the `RUNNER_MAP`.
- `git_service.py` handles checkout, add/commit, push, and auto-git enforcement.
- `github_service.py` wraps `gh repo/issue/release` (with retries, templated URLs).
- Use YAML `gh-*` tasks or the CLI (`hivepilot gh repo-init|issue|release`) to manage repos, issues, and releases.
- `hivepilot gh repo-init` now accepts `--set-remote`/`--no-set-remote`, `--remote-protocol`, and `--visibility`, so you can pick SSH vs HTTPS remotes and control how repos are created.
- `hivepilot gh release` exposes `--notes-file` plus `--generate-notes`/`--no-generate-notes`, making it easy to publish either handcrafted or auto-generated release notes from automation.
- `hivepilot tokens add --role admin --note "local admin"`, then copy the token and `export HIVEPILOT_API_TOKEN=<token>`
- `hivepilot lint`
- `hivepilot doctor`
- `hivepilot run example-api docs --dry-run`
- `hivepilot run example-api gh-repo-init-task`
- `hivepilot run example-api gh-issue-from-extra --extra-prompt "Docs refresh"`
- `hivepilot run-pipeline example-api pentest-fix-review --concurrency 2`
- `hivepilot dashboard` (after `export HIVEPILOT_ENABLE_TEXTUAL_UI=true`)
- `hivepilot run example-api codex-audit --extra-prompt "Security scan"`
- `hivepilot run example-api docs-langgraph --auto-git`
- `hivepilot api serve` + `curl http://localhost:8045/run ...` (with `Authorization: Bearer <token>`)
- `hivepilot schedule list` / `hivepilot schedule run`
- `export SLACK_WEBHOOK_URL=...` (or Discord/Telegram) and re-run to confirm notifications
- `curl http://localhost:8045/metrics`
- `hivepilot run example-api docs --extra-prompt "Focus on auth"` to see knowledge-aware prompts/secrets
Each run should create a folder under runs/<timestamp> containing summaries, logs, and (optionally) artifacts or state references.