Skip to content

Latest commit

 

History

History
945 lines (761 loc) · 27.7 KB

File metadata and controls

945 lines (761 loc) · 27.7 KB

Centralized AI Rules Architecture

Overview

This repository implements a progressive disclosure system for AI development rules that dynamically loads only relevant guidelines based on project context.

Core Concepts

1. Progressive Disclosure: Two-Tier Architecture

Problem: Loading all rules overwhelms AI assistants and creates instruction saturation.

Solution: Two-tier progressive disclosure system that balances minimal overhead with contextual depth.

Tier 1: Bash Hook (Metadata Reminder)

Provides immediate, lightweight behavioral prompts on every request.

Purpose: Quick reminder of which coding standards might apply Mechanism: UserPromptSubmit hook (.claude/hooks/activate-rules.sh) Timing: Runs immediately when user sends prompt Content: Metadata only - shows category names, no actual rule content Output: Visual feedback to user + context injection to Claude Token Cost: ~500 tokens (fixed overhead)

Example:

User prompt: "Write pytest tests"
       ↓
Hook detects: Python project (pyproject.toml) + "test" keyword
       ↓
Hook outputs:
   📋 Matched Rule Categories:
     ☐ base/testing-philosophy
     ☐ languages/python/testing
   [Generic reminders about testing, code quality, etc.]
       ↓
Claude receives: 500 token behavioral prompt

Outcome: Consistent behavioral nudging with minimal context cost

Tier 2: TypeScript Skill (Progressive Content Loading)

Fetches and injects actual rule content based on relevance scoring.

Purpose: Provide specific implementation guidance Mechanism: beforeResponse hook (skill/src/hooks/before-response.ts) Timing: Runs after user prompt, before Claude responds Content: Full markdown rule files from GitHub Selection: Relevance-scored with token budgets Token Cost: 0-5,000 tokens (configurable via maxTokens)

How it works:

  1. Context Detection (20-50ms): Scans filesystem for language/framework markers
  2. Intent Analysis (5-10ms): Extracts keywords and topics from user prompt
  3. Relevance Scoring (5-10ms): Scores each rule file:
    • Language match: +100 points
    • Framework match: +100 points
    • Topic match: +80 points
    • Base rules: +20 points
  4. Selection (1ms): Picks top 5 rules within 5000 token budget
  5. Fetching (100-500ms): Retrieves from GitHub (cached for 1 hour)
  6. Injection (1ms): Adds to Claude's system prompt

Example:

User prompt: "Write FastAPI endpoint with auth"
       ↓
Context: Python + FastAPI project
Intent: implementation + authentication
       ↓
Scored rules:
  - frameworks/fastapi/best-practices.md: 200 pts
  - languages/python/coding-standards.md: 120 pts
  - base/security-principles.md: 100 pts
  - base/testing-philosophy.md: 100 pts
       ↓
Selected (top 4, ~3800 tokens):
  ✓ frameworks/fastapi/best-practices.md (~1500 tokens)
  ✓ languages/python/coding-standards.md (~1100 tokens)
  ✓ base/security-principles.md (~800 tokens)
  ✓ base/testing-philosophy.md (~400 tokens)
       ↓
Claude receives: Hook metadata (500) + Rule content (3800) = 4300 tokens

Outcome: Context-aware guidance without saturation

Combined Token Efficiency

Per-Request Costs:

  • Hook only: ~500 tokens (always runs)
  • Hook + Skill: ~500-5,500 tokens (depends on relevance matches)
  • Maximum: 5,500 tokens = 2.75% of 200K context window
  • Compared to loading all rules: ~200,000 tokens (catastrophic)

Efficiency Gains:

  • 96-97% reduction vs. loading entire repository
  • Smart selection: only most relevant 3-5 rules load
  • Caching: No repeated GitHub fetches within 1 hour
  • Fail-safe: Errors never block Claude's response

2. MECE Framework: Four-Dimensional Organization

Rules are organized using the MECE principle (Mutually Exclusive, Collectively Exhaustive) across four dimensions:

MECE Principles Applied

Mutually Exclusive:

  • No duplication across dimensions
  • Base rules are language/framework/cloud-agnostic
  • Language rules reference base rules instead of duplicating
  • Framework rules build on language rules
  • Cloud rules are provider-specific

Collectively Exhaustive:

  • Complete coverage of common development scenarios
  • All practices map to one or more rule files
  • Clear escalation path: base → language → framework → cloud

Dimension 1: Base (Universal Rules)

Language-agnostic, framework-agnostic, always applicable:

Core Workflow:

  • git-workflow.md
  • code-quality.md
  • development-workflow.md

Testing & Quality:

  • testing-philosophy.md
  • testing-atdd.md
  • refactoring-patterns.md

Architecture & Design:

  • architecture-principles.md
  • 12-factor-app.md
  • specification-driven-development.md

Security & Operations:

  • security-principles.md
  • cicd-comprehensive.md
  • configuration-management.md
  • metrics-standards.md
  • operations-automation.md

AI Development:

  • ai-assisted-development.md
  • ai-ethics-governance.md
  • ai-model-lifecycle.md
  • knowledge-management.md
  • parallel-development.md

Advanced Practices:

  • chaos-engineering.md
  • lean-development.md
  • tool-design.md
  • project-maturity-levels.md

Dimension 2: Language (Language-Specific Rules)

Loaded when language is detected:

  • Python: coding-standards.md, testing.md
  • TypeScript/JavaScript: coding-standards.md, testing.md
  • Go: coding-standards.md, testing.md
  • Java: coding-standards.md, testing.md
  • C#: coding-standards.md, testing.md
  • Rust: coding-standards.md, testing.md
  • Ruby: (extensible)

Dimension 3: Framework (Framework-Specific Rules)

Loaded when framework is detected:

  • React: best-practices.md
  • Django: best-practices.md
  • FastAPI: best-practices.md
  • Express: best-practices.md
  • Spring Boot: best-practices.md
  • Next.js: (extensible)
  • Vue: (extensible)

Dimension 4: Cloud (Cloud Provider Rules)

Loaded when cloud provider is detected:

Vercel:

  • deployment-best-practices.md
  • environment-configuration.md
  • security-practices.md
  • performance-optimization.md
  • reliability-observability.md
  • cost-optimization.md

AWS, Azure, GCP: (extensible following same pattern)

Supporting Documentation

Practice Cross-Reference (PRACTICE_CROSSREFERENCE.md):

  • Bidirectional mapping: practices ↔ files
  • Quick lookup for AI assistants and developers
  • Usage patterns and examples

Anti-Patterns (ANTI_PATTERNS.md):

  • Common mistakes and code smells
  • Detection strategies and automated tools
  • Prevention techniques with examples
  • Categories: code quality, architecture, security, testing, AI development, DevOps

Implementation Guide (IMPLEMENTATION_GUIDE.md):

  • Phased 8-week rollout plan
  • Progressive adoption by maturity level
  • Phase 1-4 with specific tasks and success criteria
  • Customization guidance for different project types

Success Metrics (SUCCESS_METRICS.md):

  • Measurable KPIs for all practices
  • DORA metrics (deployment frequency, lead time, MTTR, change failure rate)
  • Code quality, security, performance, and team productivity metrics
  • Target thresholds by maturity level

MECE Validation (scripts/validate-mece.sh):

  • Automated compliance checking
  • Dimension separation validation
  • Coverage completeness verification
  • Documentation and structure checks

3. Detection-Based Loading

The sync script auto-detects project configuration and maturity level:

# Language/Framework detection
if exists("pyproject.toml") → Load Python rules
if exists("package.json") → Load JS/TS rules
if contains("django") → Load Django rules
if contains("react") → Load React rules

# Cloud provider detection
if exists("vercel.json") → Load Vercel rules
if exists(".aws-sam") → Load AWS rules

# Maturity level detection
if (CI/CD + monitoring + security scanning) → Production
elif (tests + CI/CD + linting) → Pre-Production
else → MVP/POC

Progressive Rigor:

Detected maturity level determines which practices are:

  • Required (must implement)
  • Recommended (should implement when feasible)
  • Optional (can skip or defer)

Example:

Practice: Type checking (TypeScript strict mode)
- MVP/POC: Optional
- Pre-Production: Recommended
- Production: Required

See base/project-maturity-levels.md and maturity indicators in each base rule file.

Directory Structure

centralized-rules/
│
├── base/                          # Universal rules (23 files)
│   ├── git-workflow.md
│   ├── code-quality.md
│   ├── testing-philosophy.md
│   ├── security-principles.md
│   ├── architecture-principles.md
│   ├── cicd-comprehensive.md
│   ├── project-maturity-levels.md
│   ├── ai-assisted-development.md
│   ├── chaos-engineering.md
│   └── ... (14 more)
│
├── languages/                     # Language-specific rules
│   ├── python/
│   │   ├── coding-standards.md
│   │   └── testing.md
│   ├── typescript/
│   │   ├── coding-standards.md
│   │   └── testing.md
│   ├── go/
│   ├── java/
│   ├── csharp/
│   ├── rust/
│   └── ruby/
│
├── frameworks/                    # Framework-specific rules
│   ├── react/best-practices.md
│   ├── django/best-practices.md
│   ├── fastapi/best-practices.md
│   ├── express/best-practices.md
│   ├── springboot/best-practices.md
│   ├── nextjs/
│   └── vue/
│
├── cloud/                         # Cloud provider rules (NEW)
│   ├── vercel/
│   │   ├── deployment-best-practices.md
│   │   ├── environment-configuration.md
│   │   ├── security-practices.md
│   │   ├── performance-optimization.md
│   │   ├── reliability-observability.md
│   │   └── cost-optimization.md
│   ├── aws/                      # (extensible)
│   ├── azure/                    # (extensible)
│   └── gcp/                      # (extensible)
│
├── scripts/                       # Automation scripts (NEW)
│   └── validate-mece.sh          # MECE compliance checker
│
├── tools/                         # Tool-specific templates
│   ├── claude/
│   ├── cursor/
│   └── copilot/
│
├── examples/                      # Usage examples
│   ├── sync-config.json
│   └── USAGE_EXAMPLES.md
│
├── sync-ai-rules.sh              # Main sync script (updated)
├── README.md                      # Main documentation
├── ARCHITECTURE.md               # This file (updated)
├── PRACTICE_CROSSREFERENCE.md    # Practice-to-file mapping (NEW)
├── ANTI_PATTERNS.md              # Common anti-patterns (NEW)
├── IMPLEMENTATION_GUIDE.md       # 8-week rollout plan (NEW)
└── SUCCESS_METRICS.md            # Measurable KPIs (NEW)

Data Flow

Phase 1: Project-Level Disclosure (Setup)

┌─────────────────┐
│ Project Files   │
│ (pyproject.toml,│
│  package.json,  │
│  go.mod, etc.)  │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Detection Logic │
│ (sync-ai-rules) │
└────────┬────────┘
         │
         v
┌─────────────────┐      ┌──────────────┐
│ Base Rules      │◄─────┤ Always Load  │
└─────────────────┘      └──────────────┘
         │
         v
┌─────────────────┐      ┌──────────────┐
│ Language Rules  │◄─────┤ If Detected  │
└─────────────────┘      └──────────────┘
         │
         v
┌─────────────────┐      ┌──────────────┐
│ Framework Rules │◄─────┤ If Detected  │
└─────────────────┘      └──────────────┘
         │
         v
┌─────────────────┐
│ Tool Generator  │
│ (Hierarchical   │
│  or Monolithic) │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Generated Files │
│ .claude/AGENTS  │
│ .claude/rules/  │
│ .cursorrules    │
└─────────────────┘

Phase 2: Task-Level Disclosure (Runtime)

┌─────────────────┐
│ User Question   │
│ "Write tests"   │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ AI Agent Reads  │
│ .claude/AGENTS  │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Task Analysis   │
│ Language: Python│
│ Task: Testing   │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Selective Load  │
│ Read testing +  │
│ python/testing  │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Visual Feedback │
│ 📚 Rules Loaded │
│ ✓ Testing       │
│ ✓ Python Tests  │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Apply Rules     │
│ Generate Code   │
└─────────────────┘

Components

1. Sync Script (sync-ai-rules.sh)

Responsibilities:

  • Detect project language(s)
  • Detect framework(s)
  • Download relevant rules
  • Cache rules locally
  • Generate tool-specific outputs (hierarchical or monolithic)

Key Functions:

detect_language()                  # Auto-detect from project files
detect_frameworks()                # Auto-detect from dependencies
load_base_rules()                  # Always load universal rules
load_language_rules()              # Load if language detected
load_framework_rules()             # Load if framework detected
generate_claude_rules_hierarchical() # Generate on-demand structure
generate_claude_rules_monolithic()   # Generate legacy format
generate_rule_index()              # Generate index.json
generate_agents_md()               # Generate AGENTS.md entry point

2. Base Rules

Characteristics:

  • Language-agnostic
  • Framework-agnostic
  • Always loaded
  • Universal best practices

Content:

  • Git workflow
  • Code quality standards
  • Testing philosophy
  • Security principles
  • Development workflow

3. Language Rules

Characteristics:

  • Language-specific
  • Loaded if language detected
  • Technology-specific tooling

Content Examples:

  • Type system usage
  • Testing frameworks
  • Linting/formatting tools
  • Package management
  • Language-specific patterns

4. Framework Rules

Characteristics:

  • Framework-specific
  • Loaded if framework detected
  • Built on language rules

Content Examples:

  • Framework patterns
  • Best practices
  • Common pitfalls
  • Performance optimization
  • Testing strategies

5. Hierarchical Rule Structure (Task-Level Disclosure)

Generated Structure:

project/.claude/
├── AGENTS.md              # Entry point with discovery instructions
├── commands/
│   └── rules.md           # Visual feedback slash command
├── rules/
│   ├── base/              # Universal rules
│   │   ├── code-quality.md
│   │   ├── testing-philosophy.md
│   │   ├── git-workflow.md
│   │   └── ...
│   ├── languages/
│   │   ├── python/
│   │   │   ├── coding-standards.md
│   │   │   └── testing.md
│   │   └── typescript/
│   │       └── ...
│   ├── frameworks/
│   │   ├── fastapi/
│   │   │   └── best-practices.md
│   │   └── react/
│   │       └── ...
│   └── index.json         # Machine-readable rule index
└── RULES.md               # Legacy monolithic format (deprecated)

Components:

5.1 AGENTS.md (Entry Point)

Purpose: Instructs AI agents on progressive discovery

Content:

  • Progressive disclosure system explanation
  • Discovery process (3 steps: Analyze → Load → Announce)
  • Rule index table showing available rules
  • Usage examples for common scenarios
  • Token efficiency guidance
  • Troubleshooting FAQ

Example workflow:

## Discovery Process

1. Analyze user request for language, framework, task type
2. Load relevant rules using Read tool
3. Announce which rules were loaded
4. Apply rules and cite sources

5.2 index.json (Machine-Readable Index)

Purpose: Enables programmatic rule discovery

Structure:

{
  "generated_at": "2025-12-13 21:09:54 UTC",
  "detected": {
    "languages": ["python"],
    "frameworks": ["fastapi"]
  },
  "rules": {
    "base": [
      {
        "name": "Code Quality",
        "file": ".claude/rules/base/code-quality.md",
        "when": "Every task",
        "always_load": true
      }
    ],
    "languages": {
      "python": {
        "display_name": "Python",
        "rules": [...]
      }
    }
  }
}

Use cases:

  • Automated rule discovery
  • Validation and testing
  • IDE integrations
  • Custom tooling

5.3 rules-config.json (Configuration)

Purpose: Single source of truth for rule metadata

Structure:

{
  "languages": {
    "python": {
      "display_name": "Python",
      "file_patterns": ["*.py"],
      "test_patterns": ["test_*.py"],
      "rules": [
        {
          "name": "Python Coding Standards",
          "file": "languages/python/coding-standards.md",
          "when": "Python files (.py)"
        }
      ]
    }
  },
  "frameworks": {...},
  "base_rules": [...]
}

Benefits:

  • Data-driven generation
  • Easy to extend (just edit JSON)
  • Validation-friendly
  • Reusable across tools

6. Visual Feedback System

Purpose: Show users which rules are actively being applied

Slash Command (.claude/commands/rules.md):

Provides examples of visual feedback patterns:

📚 **Rules Loaded for This Task:**
✓ Code Quality (.claude/rules/base/code-quality.md)
✓ Python Coding Standards (.claude/rules/languages/python/coding-standards.md)

Analyzing your code...

Issues found:
1. Missing type hints 📖 Python Coding Standards: PEP 484
2. Function too long 📖 Code Quality: Max 25 lines

Visual Elements:

  • 📚 Rules loaded announcements
  • ✓ Checkmarks for active rules
  • 📖 Inline citations to specific rules
  • 📊 Token usage reporting (optional)
  • ⚠️ Rule conflicts/exceptions

Performance & Validation

Real-World Test Results

Test Project: Python + FastAPI application Generated: 8 rule files (5 base + 2 Python + 1 FastAPI) Total rules available: ~25,236 tokens (100,947 characters)

Token Savings by Task Type

Task Type Files Loaded Tokens Used Tokens Saved Savings
Code Review 2 files 3,440 21,796 86.4%
Write Tests 2 files 11,163 14,073 55.8%
FastAPI Endpoint 3 files 8,608 16,628 65.9%
Git Commit 2 files 2,618 22,618 89.6%
Average 2.25 files 6,457 18,779 74.4%

Key Findings:

  1. Consistent Savings: All scenarios achieved 55-90% token reduction

  2. Task-Specific Loading: Different tasks load different rule subsets

    • Code reviews: Quality + coding standards (minimal)
    • Testing: Testing philosophy + language testing (moderate)
    • Framework work: Base + language + framework (balanced)
    • Git commits: Workflow + quality (minimal)
  3. Context Window Impact:

    • Before: 25K tokens for rules → 75K available for code
    • After: 6K tokens for rules → 94K available for code
    • Result: 59% more context for code analysis

Performance Benchmarks

Phase 1 (Project-Level):

  • Initial sync (remote): ~2-5 seconds
  • Cached sync (local): ~0.5-1 second
  • Rule generation: ~1-2 seconds

Phase 2 (Task-Level):

  • Rule discovery: <100ms (read AGENTS.md)
  • Selective loading: 2-3 file reads (~200-300ms)
  • Total overhead: <500ms per task

Total latency impact: Negligible (<1 second)

Validation Checklist

Real-world testing validated:

  • Detection accuracy: Python + FastAPI correctly identified
  • File generation: All 8 relevant rules copied to .claude/rules/
  • Index creation: index.json generated with proper metadata
  • Entry point: AGENTS.md created with discovery instructions
  • Structure integrity: Hierarchical organization maintained
  • Token savings: 55-90% reduction measured across scenarios
  • Config-driven: rules-config.json successfully drives generation
  • Backwards compatible: Monolithic format still available

Scalability Analysis

Current System:

  • Supports 8+ languages
  • Supports 12+ frameworks
  • ~50 rule files in repository
  • Generated output: 8-12 files per project

Projected at Scale:

  • 50 languages: ✅ Scales linearly (still loads 8-12 files)
  • 100 frameworks: ✅ Scales linearly (selective loading)
  • 500+ rule files: ✅ Only 2-3 files loaded per task

Bottlenecks: None identified. System scales horizontally.

Token Efficiency Comparison

Scenario: Full-Stack Application (Python + TypeScript + React + FastAPI)

Approach Rules Loaded Tokens Code Context
No Progressive Disclosure All 50+ files ~100K 100K (50%)
Project-Level Only 15 files ~35K 165K (83%)
Project + Task-Level 2-3 files ~8K 192K (96%)

Improvement: 96% of context available for code vs 50% without progressive disclosure

Extension Points

Adding a New Language

  1. Create languages/{language}/ directory
  2. Add coding-standards.md
  3. Add testing.md
  4. Update detect_language() in sync script
  5. Update documentation

Adding a New Framework

  1. Create frameworks/{framework}/ directory
  2. Add best-practices.md
  3. Update detect_frameworks() in sync script
  4. Update documentation

Adding a New Tool

  1. Create tools/{tool}/ directory
  2. Add template files
  3. Add generate_{tool}_rules() function
  4. Update main sync logic
  5. Update documentation

Design Decisions

Why Bash Script?

  • Portability: Works on any Unix-like system
  • Simplicity: No runtime dependencies
  • Transparency: Easy to read and audit
  • Offline Support: Can work with cached rules

Why Markdown?

  • Readability: Human-readable format
  • Compatibility: Works with all AI tools
  • Version Control: Git-friendly
  • Extensibility: Easy to add metadata

Why Progressive Disclosure?

  • Reduces Noise: AI sees only relevant rules
  • Improves Accuracy: Focused instructions
  • Scales Better: Works across many projects
  • Faster Loading: Less data to process

Why Detection-Based?

  • Zero Configuration: Works out of the box
  • Automatic Updates: Adapts as project evolves
  • Consistent: Same logic across projects
  • Override-able: Can use config when needed

Configuration

Auto-Detection (Default)

./sync-ai-rules.sh
# Detects: pyproject.toml → Python
# Detects: dependencies → Django, FastAPI
# Loads: base/* + languages/python/* + frameworks/{django,fastapi}/*

Manual Configuration

{
  "languages": ["python", "typescript"],
  "frameworks": ["django", "react"],
  "exclude": ["testing-mocking"],
  "custom_rules": ["https://company.com/custom.md"]
}

Environment Variables

export AI_RULES_REPO="https://your-org.com/rules"
./sync-ai-rules.sh

Scaling

Organization-Wide Deployment

  1. Fork repository
  2. Customize base rules
  3. Add organization-specific rules
  4. Distribute sync script to teams
  5. Automate with CI/CD

Multi-Project Support

# Monorepo with multiple languages
monorepo/
├── backend/ (Python + FastAPI)
├── frontend/ (TypeScript + React)
└── sync-ai-rules.sh (detects both)

Caching Strategy

.ai-rules/.cache/
├── base/
├── languages/
└── frameworks/

# Downloaded once, used offline
# Re-downloaded on cache miss

Security

No Code Execution

  • Rules are markdown only
  • No executable code in rules
  • Safe to load from remote sources

HTTPS by Default

  • All downloads use HTTPS
  • Validates SSL certificates
  • Fails closed on network errors

Audit Trail

  • All downloads logged
  • Cache timestamps tracked
  • Version information included

Performance

Optimization Strategies

  1. Caching: Download once, use many times
  2. Lazy Loading: Only load what's needed
  3. Parallel Downloads: Fetch rules concurrently
  4. Compression: Minimize network transfer

Benchmarks

  • Initial sync: ~2-5 seconds
  • Cached sync: ~0.5-1 second
  • Generated output: ~1-2 seconds

Completed Features

✅ Phase 1 & 2 Progressive Disclosure (Implemented)

  • Project-level disclosure: Auto-detect and load only relevant languages/frameworks
  • Task-level disclosure: On-demand loading of 2-3 rule files per task
  • Hierarchical structure: .claude/rules/ directory with organized subdirectories
  • AGENTS.md entry point: Discovery instructions for AI agents
  • Machine-readable index: index.json for programmatic access
  • Config-driven generation: rules-config.json as single source of truth
  • Visual feedback system: /rules slash command with examples
  • Real-world validation: Tested with 55-90% token savings
  • Backwards compatibility: Monolithic format still available

Future Enhancements

Short-Term (Next 3 Months)

  • Cursor/Copilot hierarchical formats: Extend task-level disclosure to other tools
  • Rule versioning: Track rule changes and breaking changes
  • Validation tooling: JSON Schema for rules-config.json
  • GitHub Action: Automate rule sync in CI/CD
  • Usage analytics: Track which rules are most referenced

Medium-Term (3-6 Months)

  • VS Code extension: In-editor rule browsing and discovery
  • Rule conflict detection: Identify and resolve contradictory rules
  • A/B testing framework: Test different rule formulations
  • Cloud provider rules expansion: Azure, GCP beyond AWS
  • Domain-specific rules: Fintech, healthcare, e-commerce templates

Long-Term (6+ Months)

  • Web dashboard: Browse rules, view analytics, manage configuration
  • AI-powered rule suggestions: Recommend rules based on codebase analysis
  • Team collaboration features: Share custom rules across organization
  • Compliance frameworks: HIPAA, SOC 2, PCI-DSS rule sets
  • Multi-language monorepo support: Detect and handle polyglot projects

Extensibility

The architecture currently supports and encourages:

✅ Already Supported:

  • Language-specific rules (8+ languages)
  • Framework-specific rules (12+ frameworks)
  • Cloud provider rules (AWS with Well-Architected)
  • Tool-specific outputs (Claude, Cursor, Copilot)

🔜 Easily Extensible:

  • Domain-specific rules (fintech, healthcare, e-commerce)
  • Compliance frameworks (HIPAA, SOC 2, GDPR)
  • Company-specific standards
  • Team-level customization
  • Custom rule categories (accessibility, i18n, etc.)

References