388 changes: 388 additions & 0 deletions .cursor/rules/design.mdc

Large diffs are not rendered by default.

30 changes: 29 additions & 1 deletion .env.example
@@ -25,9 +25,10 @@ VERCEL_OIDC_TOKEN=auto_generated_by_vercel_env_pull
# E2B_API_KEY=your_e2b_api_key # Get from https://e2b.dev

# =================================================================================
# AI PROVIDERS - Need at least one
# AI PROVIDERS - Need at least one (Cloud OR Local)
# =================================================================================

# Cloud AI Providers
# Vercel AI Gateway (recommended - provides access to multiple models)
AI_GATEWAY_API_KEY=your_ai_gateway_api_key # Get from https://vercel.com/dashboard/ai-gateway/api-keys

@@ -36,3 +37,30 @@ ANTHROPIC_API_KEY=your_anthropic_api_key # Get from https://console.anthropic.com
OPENAI_API_KEY=your_openai_api_key # Get from https://platform.openai.com (GPT-5)
GEMINI_API_KEY=your_gemini_api_key # Get from https://aistudio.google.com/app/apikey
GROQ_API_KEY=your_groq_api_key # Get from https://console.groq.com (Fast inference - Kimi K2 recommended)

# =================================================================================
# LOCAL AI PROVIDERS (Alternative to Cloud Providers)
# =================================================================================

# Ollama (recommended for local development)
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434/v1 # Default Ollama OpenAI API endpoint
# Recommended models: llama3.1:8b, deepseek-coder:6.7b, codellama:7b
# Install: ollama pull llama3.1:8b
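# Sanity check (assumes the default port above): curl http://localhost:11434/v1/models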

# vLLM (for production local inference)
VLLM_ENABLED=false
VLLM_BASE_URL=http://localhost:8000/v1 # Default vLLM OpenAI API endpoint
# Start vLLM: python -m vllm.entrypoints.openai.api_server --model meta-llama/CodeLlama-7b-Instruct-hf
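# Sanity check once the server is running: curl http://localhost:8000/v1/models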

# LM Studio (for desktop GUI)
LMSTUDIO_ENABLED=false
LMSTUDIO_BASE_URL=http://localhost:1234/v1 # Default LM Studio local server
# Start server in LM Studio app, then load a model like gpt-oss-20b or deepseek-coder-v2-lite
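# Sanity check with the server started: curl http://localhost:1234/v1/models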

# =================================================================================
# LOCAL MODEL REQUIREMENTS
# =================================================================================
# Minimum: 3B parameters, 4GB+ VRAM/RAM
# Recommended: 7B+ parameters, 8GB+ VRAM/RAM for better code generation
# Best performance: 13B+ parameters, 16GB+ VRAM/RAM
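
# =================================================================================
# EXAMPLE REQUEST (all three local providers expose the OpenAI-compatible API)
# =================================================================================
# A minimal sketch, assuming Ollama on its default port with llama3.1:8b pulled:
# curl http://localhost:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "Hello"}]}'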