Nodecules [under construction and very unstable]

A basic node-based graph processing engine for building flexible AI-powered workflows with visual graph editing and conversational interfaces.

🚀 Features

🎨 Visual Graph Editor

  • Professional React Flow-based interface with drag-drop node creation
  • Real-time node property editing with validation
  • Visual connection management with port-specific targeting
  • Clean folder-tab style node layout with organized input/output ports

💬 Chat Interface

  • Interactive chat interface for executing graphs conversationally
  • Dynamic parameter exposure from graph input nodes
  • Automatic conversation context management
  • Support for various graph types (interactive, write-only, read-only)

🤖 AI Integration

  • Multiple LLM Providers: Ollama (local), Claude/Anthropic, AWS Bedrock
  • Unified Context System with context_key and context_data supremacy model
  • Immutable content-addressable contexts for efficient memory management
  • Smart context management with provider-native optimization
  • Context inspection and passthrough capabilities for debugging
  • Multiple chat node types (smart_chat, immutable_chat) with streaming support
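The content-addressable idea can be sketched in a few lines: an immutable context's key is derived from a hash of its canonical content, so identical conversation histories collapse to a single stored copy. The names below (`put_context`, `get_context`, the in-memory `_store`) are illustrative only, not the actual Nodecules API:

```python
import hashlib
import json

# Hypothetical in-memory stand-in for the real context store
_store: dict[str, list[dict]] = {}

def put_context(messages: list[dict]) -> str:
    """Store an immutable conversation and return its content-derived key."""
    canonical = json.dumps(messages, sort_keys=True).encode()
    key = hashlib.sha256(canonical).hexdigest()[:16]
    _store.setdefault(key, messages)  # identical content -> same key, stored once
    return key

def get_context(key: str) -> list[dict]:
    """Look a conversation back up by its content hash."""
    return _store[key]
```

Because keys are derived from content, appending a message produces a new key while every earlier key remains valid, which is what makes the contexts cheap to branch and safe to cache.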

🔧 Advanced Graph Management

  • Create, edit, copy, delete, and execute graphs
  • Flexible input system (labels, ordinals, node IDs, chat conventions)
  • Graph execution by name for API integration
  • Real-time parameter configuration and execution

๐Ÿ Quick Start

Prerequisites

  • Docker and Docker Compose
  • Node.js 18+ (for frontend)
  • Python 3.9+ with Poetry (for backend)

Installation

🚀 Quick Start (Docker - Recommended)

Prerequisites: Docker and Docker Compose

  1. Clone the repository:

    git clone git@github.com:chaboud/nodecules.git
    cd nodecules
  2. Configure environment (optional, for AI providers):

    # Copy the example environment file
    cp .env.example .env
    
    # Edit .env to add your API keys (see AI Provider Setup below)
    # nano .env
  3. Start with Docker Compose:

    docker-compose up -d

    This starts:

    • PostgreSQL database (port 5432)
    • Redis cache (port 6379)
    • Backend API (port 8000)
    • Frontend web app (port 3000)
  4. Verify setup (database initializes automatically):

    # Check logs to ensure database initialization completed
    docker-compose logs backend | grep -E "(Database initialization|✅|❌)"
  5. Access the application:

    • Frontend: http://localhost:3000
    • API docs: http://localhost:8000/docs
  6. Test it works:

    • Go to http://localhost:3000
    • Click "Import Graph" (green button)
    • Import /examples/simple_text_processing.nodecules.json
    • Click "Execute" to test
    • Or test the API directly: curl http://localhost:8000/api/v1/plugins/nodes | jq

๐Ÿ› ๏ธ Development Setup

For development with hot reloading:

  1. Start infrastructure only:

    docker-compose up -d postgres redis
  2. Backend development:

    cd backend
    poetry install
    python scripts/init_db.py  # Initialize database
    poetry run uvicorn nodecules.main:app --reload
  3. Frontend development (new terminal):

    cd frontend  
    npm install
    npm run dev

🔧 AI Provider Setup

To use external AI providers, configure your API keys in the .env file:

🧪 Quick Test (Recommended)

After configuring your API keys, run the automated test:

# Test Claude integration end-to-end
./test_claude_integration.sh

# Or test API manually
curl -X GET "http://localhost:8000/api/v1/plugins/nodes/immutable_chat" | jq

Claude/Anthropic

  1. Get your API key from Anthropic Console
  2. Add to .env:
    ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
  3. Use anthropic provider in chat nodes with models:
    • claude-3-5-haiku-20241022 (latest Haiku - fast, cheaper)
    • claude-3-5-sonnet-20241022 (latest Sonnet - balanced, most popular)
    • claude-3-5-sonnet-20240620 (previous Sonnet version)
    • claude-3-haiku-20240307 (legacy Haiku)
    • claude-3-opus-20240229 (most capable, expensive)

AWS Bedrock

Option 1: Bearer Token (Recommended - Simple)

  1. Get your API key from AWS Console > Bedrock > API Keys
  2. Add to .env:
    AWS_BEARER_TOKEN_BEDROCK=your-bedrock-api-key-here
    AWS_REGION=us-east-1

Option 2: Full AWS Credentials

  1. Configure AWS credentials (CLI or environment)
  2. Add to .env:
    AWS_ACCESS_KEY_ID=your-access-key
    AWS_SECRET_ACCESS_KEY=your-secret-key
    AWS_REGION=us-east-1

Usage:

  • Use bedrock provider with models like:
    • us.anthropic.claude-3-5-haiku-20241022-v1:0 (latest)
    • us.anthropic.claude-3-5-sonnet-20241022-v1:0
    • anthropic.claude-3-haiku-20240307-v1:0 (legacy)

Ollama (Local)

  • No API key needed
  • Install Ollama locally
  • Pull models: ollama pull llama3.2:3b
  • Use ollama provider (default)

Environment Variables Reference

  • ANTHROPIC_API_KEY: Claude API access
  • AWS_BEARER_TOKEN_BEDROCK: Bedrock API key (preferred)
  • AWS_ACCESS_KEY_ID: Bedrock access key (fallback)
  • AWS_SECRET_ACCESS_KEY: Bedrock secret key (fallback)
  • AWS_REGION: Bedrock region (default: us-east-1)
  • DATABASE_URL: PostgreSQL connection
  • REDIS_URL: Redis connection

🔧 Configuration

The system uses a unified context model where you can:

  • Inspect contexts: Connect context_data output to debug full conversation history
  • Pass contexts: Use context_key for efficient storage or context_data for full control
  • Provider flexibility: Switch between Ollama, Claude, and Bedrock seamlessly
  • Supremacy model: When both context_key and context_data are provided, context_key takes precedence (configurable)
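A minimal sketch of that precedence rule, with a `prefer_key` flag standing in for the configurable part (the function and its return shape are hypothetical, not the engine's actual internals):

```python
def resolve_context(context_key=None, context_data=None, prefer_key=True):
    """Decide which context input wins when a node receives both.

    By default context_key (the cheap content-addressed reference) takes
    precedence over context_data (the full inline history); flipping
    prefer_key models the configurable override.
    """
    if context_key is not None and context_data is not None:
        return ("key", context_key) if prefer_key else ("data", context_data)
    if context_key is not None:
        return ("key", context_key)
    if context_data is not None:
        return ("data", context_data)
    return ("empty", None)
```

The point of the rule is determinism: a graph wired with both inputs always behaves the same way, instead of depending on execution order.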

🔧 Troubleshooting

Database Issues:

# Manually initialize/fix database
docker-compose exec backend poetry run python scripts/init_db.py

# Reset database completely
docker-compose down
docker volume rm nodecules_postgres_data
docker-compose up -d

Missing Tables Error:

  • The backend auto-creates missing tables on startup
  • If issues persist, run the manual database initialization command above

Container Issues:

# Rebuild containers with latest code
docker-compose down
docker-compose up -d --build

🎯 Usage

Visual Graph Editor

  1. Navigate to http://localhost:3000
  2. Click "New Graph" to create a workflow
  3. Drag nodes from the palette to build your graph
  4. Configure node parameters in the properties panel
  5. Execute graphs and view results in real-time

Chat Interface

  1. Navigate to the "Chat" tab in the application
  2. Select a graph with proper chat conventions:
    • Input node with ID chat_message (receives user messages)
    • Output node with ID chat_response (displays AI responses)
    • Optional input node chat_context (for conversation memory)
  3. Additional input nodes appear as user controls (temperature sliders, model selectors, etc.)
  4. Type messages and interact with your graph conversationally
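Under these conventions, a chat turn reduces to an ordinary execution request against the executions endpoint. A hypothetical sketch of the payload a chat client would assemble (`build_chat_request` is illustrative, not part of the codebase):

```python
def build_chat_request(graph_id, message, context_key=None):
    """Assemble an execution payload for one chat turn.

    chat_message and chat_context are the special input-node IDs the chat
    interface looks for; the AI reply is read back from the chat_response
    output node.
    """
    inputs = {"chat_message": message}
    if context_key is not None:
        inputs["chat_context"] = context_key  # carries conversation memory forward
    return {"graph_id": graph_id, "inputs": inputs}
```

Each turn would POST this body to /api/v1/executions/ and feed the returned context back into the next call.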

API Integration

Execute a graph by name:

# Basic execution
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{"graph_id": "My Workflow"}' | \
  jq -r '.outputs | to_entries[] | select(.value.label == "Output") | .value.result'

# With custom inputs
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{
    "graph_id": "Text Processing",
    "inputs": {
      "input_1": "Hello World",
      "greeting": "Custom input by label"
    }
  }' | jq '.outputs'

Get graph schema:

curl -s "http://localhost:8000/api/v1/graphs/My%20Workflow/schema" | jq '.'

List available graphs:

curl -s "http://localhost:8000/api/v1/graphs/" | jq '.[].name'

Built-in Node Types

Core Nodes

  • Input Node - Collects user input with configurable labels and data types
  • Output Node - Displays results with custom labels
  • Text Transform - Text operations (uppercase, lowercase, title case)
  • Text Filter - Pattern-based text filtering with regex support
  • Text Concat - Multi-input text concatenation

AI Nodes

  • Smart Chat - Context-aware conversational AI with provider adapters
  • Immutable Chat - Memory-efficient chat with content-addressable contexts
  • Support for parameter inputs (model, temperature, system_prompt)

Input Resolution Methods

The system supports multiple ways to provide inputs:

  1. By Label - Use friendly names: {"greeting": "Hello"}
  2. By Ordinal - Use position: {"input_1": "Hello", "input_2": "World"}
  3. By Node ID - Direct node reference: {"node_abc123": "Hello"}
  4. Chat Convention - Special IDs for chat interface: {"chat_message": "Hello", "chat_context": "..."}
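A rough sketch of how such a resolver could work, assuming each input node carries a `node_id` and a `label` and that ordinals are 1-based (`resolve_inputs` and these shapes are hypothetical, not the engine's actual internals):

```python
def resolve_inputs(input_nodes, provided):
    """Map user-supplied input keys onto concrete input nodes.

    input_nodes: ordered list of dicts with 'node_id' and 'label'.
    provided: user dict keyed by label, 'input_N' ordinal, or node ID
              (chat conventions like chat_message are just node IDs).
    Returns {node_id: value}.
    """
    by_label = {n["label"]: n["node_id"] for n in input_nodes}
    node_ids = {n["node_id"] for n in input_nodes}
    resolved = {}
    for key, value in provided.items():
        if key in node_ids:                 # 3/4. direct node ID, incl. chat IDs
            resolved[key] = value
        elif key in by_label:               # 1. friendly label
            resolved[by_label[key]] = value
        elif key.startswith("input_"):      # 2. ordinal position (1-based)
            index = int(key.split("_", 1)[1]) - 1
            resolved[input_nodes[index]["node_id"]] = value
        else:
            raise KeyError(f"unresolvable input: {key}")
    return resolved
```

Checking node IDs before labels keeps the chat convention unambiguous even when a node's label collides with another node's ID.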

🔧 Development

Project Structure

nodecules/
โ”œโ”€โ”€ backend/
โ”‚   โ”œโ”€โ”€ nodecules/           # Main Python package
โ”‚   โ”‚   โ”œโ”€โ”€ core/            # Execution engine (topological sort)
โ”‚   โ”‚   โ”œโ”€โ”€ plugins/         # Built-in and custom nodes
โ”‚   โ”‚   โ”œโ”€โ”€ api/             # FastAPI routes
โ”‚   โ”‚   โ””โ”€โ”€ models/          # SQLAlchemy schemas
โ”‚   โ””โ”€โ”€ tests/
โ”œโ”€โ”€ frontend/
โ”‚   โ”œโ”€โ”€ src/
โ”‚   โ”‚   โ”œโ”€โ”€ features/graph/  # Graph editor components
โ”‚   โ”‚   โ”œโ”€โ”€ services/        # API client
โ”‚   โ”‚   โ””โ”€โ”€ stores/          # Zustand state management
โ”‚   โ””โ”€โ”€ public/
โ”œโ”€โ”€ plugins/                 # External plugin directory
โ””โ”€โ”€ docker-compose.yml       # Development environment

Adding Custom Nodes

Auto-Discovery (Recommended):

  1. Drop any .py file with BaseNode subclasses into the plugins/ directory
  2. Restart the backend - nodes are automatically discovered and loaded
  3. No configuration files required!

YAML-Based Plugins (For complex plugins):

  1. Create a plugin directory in plugins/
  2. Implement node class extending BaseNode
  3. Define node specification with inputs/outputs/parameters
  4. Register in plugin.yaml

Example node:

from typing import Any, Dict

# BaseNode, NodeSpec, PortSpec, DataType, ExecutionContext, and NodeData
# come from the nodecules plugin API.

class MyCustomNode(BaseNode):
    NODE_TYPE = "my_custom"

    def __init__(self):
        spec = NodeSpec(
            node_type=self.NODE_TYPE,
            display_name="My Custom Node",
            description="Does custom processing",
            inputs=[PortSpec(name="input", data_type=DataType.TEXT)],
            outputs=[PortSpec(name="output", data_type=DataType.TEXT)]
        )
        super().__init__(spec)

    def process(self, text: str) -> str:
        # Replace with your own processing logic
        return text.upper()

    async def execute(self, context: ExecutionContext, node_data: NodeData) -> Dict[str, Any]:
        input_value = context.get_input_value(node_data.node_id, "input")
        result = self.process(input_value)
        return {"output": result}

🎭 Examples

Sample Workflow: "Potato Farmer!"

This example graph demonstrates text processing:

# Execute with custom input
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{
    "graph_id": "Potato farmer!",
    "inputs": {
      "input_1": "Elite potato farmers from Idaho",
      "input_2": "UNITE FOR BETTER SPUDS"
    }
  }' | jq -r '.outputs | to_entries[] | select(.value.label == "Output") | .value.result'

Output: ELITE POTATO FARMERS FROM IDAHO UNITE FOR BETTER SPUDS !!!

๐Ÿ› ๏ธ Architecture

  • Backend: FastAPI + SQLAlchemy + PostgreSQL
  • Frontend: React 18 + TypeScript + React Flow + Tailwind CSS
  • State Management: Zustand
  • Execution Engine: Topological sort with async processing (Kahn's algorithm)
  • Data Storage: PostgreSQL (graphs/executions) + Redis (caching)
  • Development: Docker Compose + Poetry + Vite
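The scheduling step can be illustrated with a plain Kahn's-algorithm sketch (simplified and synchronous; the real engine dispatches ready nodes asynchronously):

```python
from collections import deque

def execution_order(nodes, edges):
    """Kahn's algorithm: repeatedly emit nodes whose dependencies are all met.

    edges are (src, dst) pairs meaning dst consumes src's output.
    Raises ValueError if the graph contains a cycle.
    """
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dst in downstream[node]:
            indegree[dst] -= 1
            if indegree[dst] == 0:   # all of dst's inputs are now available
                ready.append(dst)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order
```

In the async setting, everything in `ready` at a given moment has no unmet dependencies and can execute concurrently.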

🚧 Roadmap

✅ Completed Features

  • Visual Graph Editor - Professional React Flow interface
  • Chat Interface - Conversational graph execution
  • AI Integration - Ollama support with context management
  • Flexible Input System - Multiple input resolution methods
  • Content-Addressable Contexts - Efficient memory management

🎯 Next Phase: Enhanced AI Experience

  • Streaming Responses - Real-time response generation with thinking indicators
  • Setup Scripts - Database initialization with example graphs
  • Additional LLM Providers - Anthropic Claude, OpenAI, Azure OpenAI
  • Multi-modal Support - Images, documents, audio processing

🔮 Future Enhancements

  • Conditional logic and branching nodes
  • Loop and iteration capabilities
  • External API connectors
  • File I/O operations
  • Advanced prompt engineering tools

📄 API Documentation

Complete API documentation is available at http://localhost:8000/docs when running the development server.

Key endpoints:

  • POST /api/v1/executions/ - Execute graphs
  • GET/POST /api/v1/graphs/ - Manage graphs
  • GET /api/v1/graphs/{name}/schema - Get graph schema
  • GET /api/v1/plugins/nodes - List available node types

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

๐Ÿ“ License

MIT License - see LICENSE file for details.


Status: Production ready with AI-powered graph processing and chat interface
Next Milestone: Streaming responses and enhanced setup tooling
