A basic node-based graph processing engine for building flexible AI-powered workflows with visual graph editing and conversational interfaces.
- Professional React Flow-based interface with drag-drop node creation
- Real-time node property editing with validation
- Visual connection management with port-specific targeting
- Clean folder-tab style node layout with organized input/output ports
- Interactive chat interface for executing graphs conversationally
- Dynamic parameter exposure from graph input nodes
- Automatic conversation context management
- Support for various graph types (interactive, write-only, read-only)
- Multiple LLM Providers: Ollama (local), Claude/Anthropic, AWS Bedrock
- Unified Context System with context_key and context_data supremacy model
- Immutable content-addressable contexts for efficient memory management
- Smart context management with provider-native optimization
- Context inspection and passthrough capabilities for debugging
- Multiple chat node types (smart_chat, immutable_chat) with streaming support
- Create, edit, copy, delete, and execute graphs
- Flexible input system (labels, ordinals, node IDs, chat conventions)
- Graph execution by name for API integration
- Real-time parameter configuration and execution
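The immutable, content-addressable contexts listed above can be sketched roughly as follows. This is a minimal illustration of the idea, not the nodecules implementation: each conversation state is keyed by a hash of its serialized content, so "appending" a message produces a new context while old keys remain valid.

```python
import hashlib
import json

class ContextStore:
    """Toy content-addressable store: a context is an immutable
    message list keyed by the hash of its serialized content."""

    def __init__(self):
        self._contexts = {}

    def put(self, messages):
        # Canonical JSON so identical content always yields the same key
        blob = json.dumps(messages, sort_keys=True).encode()
        key = hashlib.sha256(blob).hexdigest()
        self._contexts[key] = tuple(messages)  # stored immutably
        return key

    def get(self, key):
        return list(self._contexts[key])

    def append(self, key, message):
        # "Mutation" creates a new context; the old key stays valid
        return self.put(self.get(key) + [message])

store = ContextStore()
k1 = store.put([{"role": "user", "content": "Hello"}])
k2 = store.append(k1, {"role": "assistant", "content": "Hi!"})
# k1 still resolves to the original one-message history
```

Because keys are derived from content, identical conversations deduplicate automatically, which is what makes this memory-efficient.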
- Docker and Docker Compose
- Node.js 18+ (for frontend)
- Python 3.9+ with Poetry (for backend)
Prerequisites: Docker and Docker Compose
1. Clone the repository:

   ```bash
   git clone git@github.com:chaboud/nodecules.git
   cd nodecules
   ```
2. Configure environment (optional, for AI providers):

   ```bash
   # Copy the example environment file
   cp .env.example .env
   # Edit .env to add your API keys (see AI Provider Setup below)
   # nano .env
   ```
3. Start with Docker Compose:

   ```bash
   docker-compose up -d
   ```
This starts:
- PostgreSQL database (port 5432)
- Redis cache (port 6379)
- Backend API (port 8000)
- Frontend web app (port 3000)
4. Verify setup (database initializes automatically):

   ```bash
   # Check logs to ensure database initialization completed
   docker-compose logs backend | grep "Database initialization"
   ```
5. Access the application:
- Web Interface: http://localhost:3000
- API Docs: http://localhost:8000/docs
- Backend API: http://localhost:8000
6. Test it works:

   - Go to http://localhost:3000
   - Click "Import Graph" (green button), import `/examples/simple_text_processing.nodecules.json`, and click "Execute" to test
   - Or test the API directly:

     ```bash
     curl http://localhost:8000/api/v1/plugins/nodes | jq
     ```
For development with hot reloading:
1. Start infrastructure only:

   ```bash
   docker-compose up -d postgres redis
   ```
2. Backend development:

   ```bash
   cd backend
   poetry install
   python scripts/init_db.py  # Initialize database
   poetry run uvicorn nodecules.main:app --reload
   ```
3. Frontend development (in a new terminal):

   ```bash
   cd frontend
   npm install
   npm run dev
   ```
To use external AI providers, configure your API keys in the .env file:
After configuring your API keys, run the automated test:
```bash
# Test Claude integration end-to-end
./test_claude_integration.sh

# Or test the API manually
curl -X GET "http://localhost:8000/api/v1/plugins/nodes/immutable_chat" | jq
```

Anthropic (Claude):

- Get your API key from the Anthropic Console
- Add to `.env`:

  ```
  ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
  ```

- Use the `anthropic` provider in chat nodes with these models:
  - `claude-3-5-haiku-20241022` (latest Haiku; fast, cheaper)
  - `claude-3-5-sonnet-20241022` (latest Sonnet; balanced, most popular)
  - `claude-3-5-sonnet-20240620` (previous Sonnet version)
  - `claude-3-haiku-20240307` (legacy Haiku)
  - `claude-3-opus-20240229` (most capable, expensive)
Option 1: Bearer Token (Recommended - Simple)
- Get your API key from AWS Console > Bedrock > API Keys
- Add to `.env`:

  ```
  AWS_BEARER_TOKEN_BEDROCK=your-bedrock-api-key-here
  AWS_REGION=us-east-1
  ```
Option 2: Full AWS Credentials
- Configure AWS credentials (CLI or environment)
- Add to `.env`:

  ```
  AWS_ACCESS_KEY_ID=your-access-key
  AWS_SECRET_ACCESS_KEY=your-secret-key
  AWS_REGION=us-east-1
  ```
Usage:
- Use the `bedrock` provider with models like:
  - `us.anthropic.claude-3-5-haiku-20241022-v1:0` (latest)
  - `us.anthropic.claude-3-5-sonnet-20241022-v1:0`
  - `anthropic.claude-3-haiku-20240307-v1:0` (legacy)
Ollama (local):

- No API key needed
- Install Ollama locally
- Pull models: `ollama pull llama3.2:3b`
- Use the `ollama` provider (default)
Environment variables:

- `ANTHROPIC_API_KEY`: Claude API access
- `AWS_BEARER_TOKEN_BEDROCK`: Bedrock API key (preferred)
- `AWS_ACCESS_KEY_ID`: Bedrock access key (fallback)
- `AWS_SECRET_ACCESS_KEY`: Bedrock secret key (fallback)
- `AWS_REGION`: Bedrock region (default: us-east-1)
- `DATABASE_URL`: PostgreSQL connection
- `REDIS_URL`: Redis connection
The system uses a unified context model where you can:
- Inspect contexts: connect the `context_data` output to debug the full conversation history
- Pass contexts: use `context_key` for efficient storage or `context_data` for full control
- Provider flexibility: switch between Ollama, Claude, and Bedrock seamlessly
- Supremacy model: when both `context_key` and `context_data` are provided, `context_key` takes precedence (configurable)
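The supremacy rule can be pictured as a small resolver. This is an illustrative sketch; the function name and return shape are hypothetical, not the actual nodecules API:

```python
def resolve_context(context_key=None, context_data=None, prefer_key=True):
    """Pick the effective context per the supremacy model: when both
    inputs are wired, context_key wins by default, but the preference
    is configurable."""
    if context_key is not None and context_data is not None:
        return ("key", context_key) if prefer_key else ("data", context_data)
    if context_key is not None:
        return ("key", context_key)
    if context_data is not None:
        return ("data", context_data)
    return ("empty", None)  # no prior context: start a fresh conversation
```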
Database Issues:
```bash
# Manually initialize/fix database
docker-compose exec backend poetry run python scripts/init_db.py

# Reset database completely
docker-compose down
docker volume rm nodecules_postgres_data
docker-compose up -d
```

Missing Tables Error:
- The system now auto-creates missing tables on startup
- If issues persist, run the manual database initialization above
Container Issues:
```bash
# Rebuild containers with latest code
docker-compose down
docker-compose up -d --build
```

- Navigate to http://localhost:3000
- Click "New Graph" to create a workflow
- Drag nodes from the palette to build your graph
- Configure node parameters in the properties panel
- Execute graphs and view results in real-time
- Navigate to the "Chat" tab in the application
- Select a graph with proper chat conventions:
  - Input node with ID `chat_message` (receives user messages)
  - Output node with ID `chat_response` (displays AI responses)
  - Optional input node `chat_context` (for conversation memory)
- Additional input nodes appear as user controls (temperature sliders, model selectors, etc.)
- Type messages and interact with your graph conversationally
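Under the hood, a chat turn is just an execution request that follows these ID conventions. A rough sketch of building such a payload (the helper function and its parameters are illustrative):

```python
def build_chat_request(graph_name, user_message, context_key=None, extra_inputs=None):
    """Build an execution payload following the chat conventions:
    the user message goes to the input node with ID `chat_message`,
    prior conversation state (if any) to `chat_context`."""
    inputs = {"chat_message": user_message}
    if context_key is not None:
        inputs["chat_context"] = context_key
    if extra_inputs:  # e.g. {"temperature": 0.7} exposed as user controls
        inputs.update(extra_inputs)
    return {"graph_id": graph_name, "inputs": inputs}

req = build_chat_request("My Chat Graph", "Hello!", extra_inputs={"temperature": 0.7})
```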
```bash
# Basic execution
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{"graph_id": "My Workflow"}' | \
  jq -r '.outputs | to_entries[] | select(.value.label == "Output") | .value.result'

# With custom inputs
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{
    "graph_id": "Text Processing",
    "inputs": {
      "input_1": "Hello World",
      "greeting": "Custom input by label"
    }
  }' | jq '.outputs'

# Get a graph's input schema
curl -s "http://localhost:8000/api/v1/graphs/My%20Workflow/schema" | jq '.'

# List available graphs
curl -s "http://localhost:8000/api/v1/graphs/" | jq '.[].name'
```

- Input Node - Collects user input with configurable labels and data types
- Output Node - Displays results with custom labels
- Text Transform - Text operations (uppercase, lowercase, title case)
- Text Filter - Pattern-based text filtering with regex support
- Text Concat - Multi-input text concatenation
- Smart Chat - Context-aware conversational AI with provider adapters
- Immutable Chat - Memory-efficient chat with content-addressable contexts
- Support for parameter inputs (model, temperature, system_prompt)
The system supports multiple ways to provide inputs:
- By Label - Use friendly names: `{"greeting": "Hello"}`
- By Ordinal - Use position: `{"input_1": "Hello", "input_2": "World"}`
- By Node ID - Direct node reference: `{"node_abc123": "Hello"}`
- Chat Convention - Special IDs for the chat interface: `{"chat_message": "Hello", "chat_context": "..."}`
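A resolver honoring these conventions might look like the following sketch. The lookup order shown (node ID first, then label, then ordinal) is an assumption for illustration, not the engine's actual code:

```python
def resolve_input_target(key, nodes):
    """Map a user-supplied input key to an input node.
    `nodes` is an ordered list of dicts with 'id' and 'label';
    ordinals like 'input_1' address input nodes by position."""
    # 1. Direct node ID (also covers chat conventions like 'chat_message')
    for node in nodes:
        if node["id"] == key:
            return node
    # 2. Friendly label
    for node in nodes:
        if node.get("label") == key:
            return node
    # 3. Ordinal position: input_1 is the first input node
    if key.startswith("input_") and key[6:].isdigit():
        index = int(key[6:]) - 1
        if 0 <= index < len(nodes):
            return nodes[index]
    return None

nodes = [{"id": "node_abc123", "label": "greeting"},
         {"id": "chat_message", "label": "Message"}]
```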
```
nodecules/
├── backend/
│   ├── nodecules/           # Main Python package
│   │   ├── core/            # Execution engine (topological sort)
│   │   ├── plugins/         # Built-in and custom nodes
│   │   ├── api/             # FastAPI routes
│   │   └── models/          # SQLAlchemy schemas
│   └── tests/
├── frontend/
│   ├── src/
│   │   ├── features/graph/  # Graph editor components
│   │   ├── services/        # API client
│   │   └── stores/          # Zustand state management
│   └── public/
├── plugins/                 # External plugin directory
└── docker-compose.yml       # Development environment
```
Auto-Discovery (Recommended):
- Drop any `.py` file with `BaseNode` subclasses into the `plugins/` directory
- Restart the backend - nodes are automatically discovered and loaded
- No configuration files required!
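Auto-discovery of this kind can be sketched with the standard library. This is an illustrative approximation: it keys classes by a `NODE_TYPE` attribute, whereas the real loader checks for `BaseNode` subclasses:

```python
import importlib.util
import inspect
from pathlib import Path

def discover_nodes(plugin_dir):
    """Import every .py file in plugin_dir and collect classes that
    declare a NODE_TYPE attribute, keyed by that type name."""
    found = {}
    for path in sorted(Path(plugin_dir).glob("*.py")):
        # Load each file as its own module
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Collect node classes defined in the module
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if getattr(cls, "NODE_TYPE", None):
                found[cls.NODE_TYPE] = cls
    return found
```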
YAML-Based Plugins (For complex plugins):
- Create a plugin directory in `plugins/`
- Implement a node class extending `BaseNode`
- Define the node specification with inputs/outputs/parameters
- Register in `plugin.yaml`
Example node:
```python
from typing import Any, Dict

# BaseNode, NodeSpec, PortSpec, DataType, ExecutionContext, and NodeData
# are provided by the nodecules plugin API.

class MyCustomNode(BaseNode):
    NODE_TYPE = "my_custom"

    def __init__(self):
        spec = NodeSpec(
            node_type=self.NODE_TYPE,
            display_name="My Custom Node",
            description="Does custom processing",
            inputs=[PortSpec(name="input", data_type=DataType.TEXT)],
            outputs=[PortSpec(name="output", data_type=DataType.TEXT)],
        )
        super().__init__(spec)

    async def execute(self, context: ExecutionContext, node_data: NodeData) -> Dict[str, Any]:
        input_value = context.get_input_value(node_data.node_id, "input")
        result = self.process(input_value)  # your custom processing logic
        return {"output": result}
```

This example graph demonstrates text processing:
```bash
# Execute with custom input
curl -s -X POST http://localhost:8000/api/v1/executions/ \
  -H "Content-Type: application/json" \
  -d '{
    "graph_id": "Potato farmer!",
    "inputs": {
      "input_1": "Elite potato farmers from Idaho",
      "input_2": "UNITE FOR BETTER SPUDS"
    }
  }' | jq -r '.outputs | to_entries[] | select(.value.label == "Output") | .value.result'
```

Output: `ELITE POTATO FARMERS FROM IDAHO UNITE FOR BETTER SPUDS !!!`
- Backend: FastAPI + SQLAlchemy + PostgreSQL
- Frontend: React 18 + TypeScript + React Flow + Tailwind CSS
- State Management: Zustand
- Execution Engine: Topological sort with async processing (Kahn's algorithm)
- Data Storage: PostgreSQL (graphs/executions) + Redis (caching)
- Development: Docker Compose + Poetry + Vite
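The execution engine's ordering step (Kahn's algorithm, mentioned above) fits in a few lines. A minimal sketch: run every node whose inputs are all satisfied, and surface cycles as errors:

```python
from collections import deque

def execution_order(nodes, edges):
    """Topological sort via Kahn's algorithm.
    `edges` is a list of (source, target) pairs; raises on cycles."""
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)
    # Start with nodes that have no unmet inputs
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order
```

In the real engine each ready node's `execute` runs asynchronously, but the dependency ordering is the same.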
- Visual Graph Editor - Professional React Flow interface
- Chat Interface - Conversational graph execution
- AI Integration - Ollama support with context management
- Flexible Input System - Multiple input resolution methods
- Content-Addressable Contexts - Efficient memory management
- Streaming Responses - Real-time response generation with thinking indicators
- Setup Scripts - Database initialization with example graphs
- Additional LLM Providers - OpenAI, Azure OpenAI
- Multi-modal Support - Images, documents, audio processing
- Conditional logic and branching nodes
- Loop and iteration capabilities
- External API connectors
- File I/O operations
- Advanced prompt engineering tools
Complete API documentation is available at http://localhost:8000/docs when running the development server.
Key endpoints:
- `POST /api/v1/executions/` - Execute graphs
- `GET/POST /api/v1/graphs/` - Manage graphs
- `GET /api/v1/graphs/{name}/schema` - Get graph schema
- `GET /api/v1/plugins/nodes` - List available node types
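For scripting against these endpoints from Python, a minimal standard-library client might look like this sketch (error handling omitted; calling `execute_graph` requires the backend from the Quick Start to be running):

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/api/v1"

def build_payload(graph_id, inputs=None):
    """Request body for POST /api/v1/executions/."""
    return {"graph_id": graph_id, "inputs": inputs or {}}

def execute_graph(graph_id, inputs=None):
    """POST an execution request and return the parsed JSON response."""
    data = json.dumps(build_payload(graph_id, inputs)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/executions/",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the backend running):
# result = execute_graph("Text Processing", {"input_1": "Hello World"})
# print(result["outputs"])
```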
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details.
Status: Production ready with AI-powered graph processing and chat interface
Next Milestone: Additional LLM providers and multi-modal support