- Current Version: 0.6
This is a work in progress as I continue to research, experiment with, and utilize AI implementations. Everything runs locally and can be used as individual services, via Open-WebUI, or via N8N.
- Open-WebUI – Web interface for chatting with and managing LLMs served by backends like Ollama.
- Ollama – Local LLM runner/manager for models like LLaMA, Mistral, etc.
- Whisper – OpenAI’s speech-to-text model, used here for transcribing audio and video files.
- N8N – Workflow automation tool, like open-source Zapier.
- Redis – In-memory key-value store, often used for caching.
- Postgres – Powerful open-source relational database.
- SearXNG – Privacy-respecting metasearch engine.
- Stable Diffusion – Text-to-image AI generator.
- Crawl4AI – Web crawler and request/scraping API.
Haven't put much up in this public repo yet, but below is what I have currently available.
- `/GainSec-Local-AI-Stack/n8n-workflows/OpenWebUI-UploadTo-WhisperServer-N8N-Workflow.json` – Allows you to integrate your own Whisper server into Open-WebUI via N8N. Uploading an MP3 to Open-WebUI fires a local file trigger in N8N, which reads the MP3 file, sends it to the Whisper server, extracts the transcript from the response, and writes it to a file.
- `/GainSec-Local-AI-Stack/n8n-workflows/crawl4ai_outputfiles-final.json` – Used along with the Open-WebUI Function. This takes the received input, transforms it into a useful format, sends it to the Crawl4AI API, and writes out several files from Crawl4AI's response: a raw output file, a URL list (a code node extracts ALL URLs), a link list (all links from Crawl4AI's structured output), a Markdown file, an HTML file, and a screenshot of the website.
- [`/GainSec-Local-AI-Stack/n8n-workflows/V1_Local_RAG_AI_Agent.json`](https://github.com/GainSec/GainSec-Local-AI-Stack/blob/main/n8n-workflows/V1_Local_RAG_AI_Agent.json) – Local RAG AI Agent implementation from the Self-hosted AI Package
- [`/GainSec-Local-AI-Stack/n8n-workflows/V3_Local_Agentic_RAG_AI_Agent.json`](https://github.com/GainSec/GainSec-Local-AI-Stack/blob/main/n8n-workflows/V3_Local_Agentic_RAG_AI_Agent.json) – Local agentic RAG AI Agent implementation from the Self-hosted AI Package
- `/GainSec-Local-AI-Stack/openweb-ui-functions/crawl4ai-openwebui-function.py` – Open-WebUI Function that takes user input and sends it to N8N to be used by Crawl4AI
- `/GainSec-Local-AI-Stack/openweb-ui-functions/n8npipe-local-agentic-rag.py` – Open-WebUI Function that pipes into the agentic RAG implementations from the Self-hosted AI Package
- `/GainSec-Local-AI-Stack/setup_dirs.sh` – Script to create all the required directories and give them proper permissions
- `/GainSec-Local-AI-Stack/example.env` – Example `.env` file used alongside the Docker Compose file
- `/GainSec-Local-AI-Stack/docker-compose.yml` – The Docker Compose file used to bring up all the services
- `/GainSec-Local-AI-Stack/start_services.py` – Python script to properly spin up all the services and run some additional checks
- `/GainSec-Local-AI-Stack/shared` – Where most data lives that is used across services or exported/output
- `/GainSec-Local-AI-Stack/shared/crawl/` – Where the N8N Crawl4AI workflow outputs files, along with some subdirectories: html, image, links, markdown, pdf, urls (the raw output is saved into `/shared/crawl/`)
- `/GainSec-Local-AI-Stack/shared/transcripts` – Where the N8N Whisper workflow outputs transcriptions
- `/GainSec-Local-AI-Stack/shared/uploads` – Where Open-WebUI stores uploads, which are then grabbed by the N8N Whisper workflow
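As a concrete sketch of the Whisper workflow's extraction step: the server replies with JSON, and the workflow pulls the transcript text out of it. The `text` field name here is an assumption based on common Whisper ASR web services, so check your server's actual response shape.

```python
# Hedged sketch of the Whisper workflow's "extract the transcript" step:
# parse the server's JSON response and pull out the transcript text.
# The "text" field name is an assumption, not confirmed by this repo.
import json

def extract_transcript(response_body: str) -> str:
    data = json.loads(response_body)
    return data.get("text", "").strip()
```

For a quick manual test of the server itself (the endpoint path and form-field name are also assumptions): `curl -F "audio_file=@talk.mp3" "http://localhost:9002/asr?output=json"`.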
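The "extract ALL URLs" code node in the Crawl4AI workflow can be sketched roughly like this. The regex is a deliberate simplification; real-world URL extraction has more edge cases than this pattern handles.

```python
# Simplified sketch of the workflow's URL-extraction code node: find every
# http(s) URL in Crawl4AI's raw output and deduplicate, preserving the
# first-seen order.
import re

URL_RE = re.compile(r"https?://[^\s\"'<>)\]]+")

def extract_urls(raw_text: str) -> list:
    seen = {}
    for url in URL_RE.findall(raw_text):
        seen.setdefault(url, None)
    return list(seen)
```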
- Follow the Ollama instructions to use your GPU with Docker: Instructions
- `cp example.env .env && nano .env` – Modify values as seen fit
- `chmod +x setup_dirs.sh && ./setup_dirs.sh` – Create the directories required by the services/volumes if they don't exist
- `sudo python3 start_services.py --profile=gpu-nvidia` – For other profile options, see ColeAm00's original script that I modified: Other Profiles
- Navigate to N8N and create a local account.
- Now hit the `+` button on the top left and add, at the least, these three credentials:
  - Ollama account – Base URL: `http://ollama:11434/`
  - OpenAI account – Base URL: `http://ollama:11434/v1` (the API key doesn't matter unless you set it to something)
  - Postgres account – Host: `postgres`, Database: `postgres`, User: `postgres`, Password: whatever you set in your `.env`
- Feel free to import the included workflows as well at this point.
- Navigate to Open-WebUI and create a local account.
- Click your username on the bottom left, then select `Admin Panel`.
- Select `Settings`, then select `Connections`. Deactivate the OpenAI API and activate the Ollama API. Hit the `+` button and enter `http://ollama:11434` in the URL field. Toggle the switch on and hit the refresh button next to the toggle to confirm the connection. Under Connections is Models, which should show your list of available models; if not, enter the Ollama container and download some. Hit Save.
Select
Toolsand hit the+button. Enter the URL ashttp://crawl4ai:11235, add a name of description ofcrawl4ai, turn on the toggle and hit the refresh button. (Make sure you leave the openapi.json default value there). Change visibility to Public. Hit Save. -
- Repeat the same process for Whisper (URL: `http://whisper-server:9000`) and Stable Diffusion (URL: `http://sd-a1111:8080`). Hit Save.
- Now select `Web Search`. Change the `Web Search Engine` value to `searxng` and the `Searxng Query URL` to `http://searxng:8080/search?q=<query>&format=json`. Turn the toggle on and hit Save.
- Select `Audio` and change the STT Model to `large`. Note this is separate from the Whisper server we're also running, as Open-WebUI has a built-in Whisper server. If you end up not doing anything custom with your Whisper service, feel free to just use this one rather than a whole separate service. Hit Save.
- Select `Images`. Turn both toggles on and change the `Image Generation Engine` to `Automatic1111`. Note that the Stable Diffusion service currently takes the longest to spin up because it is rebuilt each time; I will fix that in the future and hopefully find better inference files so newer and better models can be used. Within `AUTOMATIC1111 Base URL`, enter `http://sd-a1111:8080/` and hit the refresh button to the right. At this point, you should see the default model as `v1-5-pruned-*`. Hit Save.
- To the left of `Settings`, select `Functions`. Hit the `+` button and paste the `n8npipe-local-agentic-rag.py` script into it. Name it `N8N-Pipe-Local-Agentic-Rag-Pipe`, give it any description, and hit Save. Now hit the gear next to it, enter the N8N webhook URL from the N8N workflow, toggle it to enabled, and hit Save. Do the same for the `crawl4ai-openwebui-function.py` function.
- Now hit `New Chat`, select your model or Function, select your tool (Web Search, Image, or hit the `+` and select `whisper-server` or `crawl4ai`), and enjoy!
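To sanity-check the Ollama connection outside the UI, you can query Ollama's `/api/tags` endpoint (`curl http://localhost:11434/api/tags`), which returns a JSON object with a `models` list. A small helper to pull the model names out of that response:

```python
# Parse the JSON returned by Ollama's /api/tags endpoint into a plain
# list of model names (each entry in "models" carries a "name" field).
import json

def model_names(tags_body: str) -> list:
    data = json.loads(tags_body)
    return [m.get("name", "") for m in data.get("models", [])]
```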
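The `Searxng Query URL` setting above uses a literal `<query>` placeholder that Open-WebUI substitutes at search time. If you hit SearXNG yourself, the query needs to be percent-encoded; a minimal sketch of the equivalent URL construction:

```python
# Build a SearXNG JSON-search URL with the query URL-encoded, matching
# the format=json endpoint configured in Open-WebUI's Web Search settings.
from urllib.parse import urlencode

def searxng_query_url(query: str, base: str = "http://searxng:8080") -> str:
    return f"{base}/search?" + urlencode({"q": query, "format": "json"})
```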
If you want to hit any of the services directly, use the mappings below:
| Service | Container Name | Host Port | Container Port | URL |
|---|---|---|---|---|
| n8n | n8n | 5678 | 5678 | http://localhost:5678 |
| ollama | ollama | 11434 | 11434 | http://localhost:11434 |
| open-webui | open-webui | 3000 | 8080 | http://localhost:3000 |
| qdrant | qdrant | 6333 | 6333 | http://localhost:6333 |
| neo4j (browser) | neo4j | 7474 | 7474 | http://localhost:7474 |
| neo4j (bolt protocol) | neo4j | 7687 | 7687 | bolt://localhost:7687 |
| postgres | postgres | 5433 | 5432 | psql://localhost:5433 |
| searxng | searxng | 8080 | 8080 | http://localhost:8080 |
| stable-diffusion (UI) | sd-a1111 | 7860 | 7860 | http://localhost:7860 |
| stable-diffusion (alt) | sd-a1111 | 8081 | 8080 | http://localhost:8081 |
| whisper-server | whisper-server | 9002 | 9000 | http://localhost:9002 |
| crawl4ai | crawl4ai | 8860 | 11235 | http://localhost:8860 |
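A quick way to confirm the host-mapped ports above are reachable before digging into container logs. The port numbers are copied from the table; `is_up` and `check_ports` are hypothetical helpers for illustration, not part of the stack.

```python
# Probe each host-mapped port from the table with a short TCP connect.
import socket

HOST_PORTS = {
    "n8n": 5678, "ollama": 11434, "open-webui": 3000, "qdrant": 6333,
    "neo4j-browser": 7474, "neo4j-bolt": 7687, "postgres": 5433,
    "searxng": 8080, "stable-diffusion": 7860, "stable-diffusion-alt": 8081,
    "whisper-server": 9002, "crawl4ai": 8860,
}

def is_up(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_ports() -> dict:
    # Map each service name to whether its host port currently accepts TCP.
    return {name: is_up(port) for name, port in HOST_PORTS.items()}
```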
Get more explanations and details, such as how to use the N8N workflows in detail, troubleshooting, and more, by visiting the URLs below:
- Jon Gaines - GainSec
- Cole Medin & the N8N-IO Team – I used Cole's docker-compose and start_services.py as a starting point. Local-AI-Packaged







