This directory contains Docker Compose configuration for GPU-accelerated AI services used by Mycelia, including Whisper transcription and Ollama LLM inference.
- Image: `onerahmet/openai-whisper-asr-webservice:v1.9.1-gpu`
- Purpose: Audio transcription using OpenAI Whisper
- Image: `ollama/ollama:0.13.5`
- Purpose: Local LLM inference with GPU acceleration
- Purpose: API gateway providing:
- OpenAI-compatible transcription endpoint
- Authentication
- Unified API access to both services
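Because the gateway exposes an OpenAI-compatible transcription endpoint, clients can talk to it using the standard OpenAI request shape. The sketch below builds (without sending) such a request with only the Python standard library; the gateway URL, API key, endpoint path, and model name are illustrative assumptions, not values taken from this repository's configuration.

```python
import urllib.request

# Hypothetical gateway details -- adjust to your deployment.
GATEWAY_URL = "http://localhost:8000"
API_KEY = "your-api-key"


def build_transcription_request(audio_path: str) -> urllib.request.Request:
    """Construct a multipart POST matching the OpenAI transcription API
    shape: a `model` form field plus a `file` part with the audio bytes.
    The request is only built here, not sent."""
    boundary = "mycelia-example-boundary"
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="model"\r\n\r\n'
        "whisper-1\r\n"
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{audio_path}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + audio_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        url=f"{GATEWAY_URL}/v1/audio/transcriptions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )
```

Sending the prepared request with `urllib.request.urlopen(...)` against a running gateway should return a JSON body containing the transcribed text, mirroring the OpenAI API response format.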
- Docker with GPU support (NVIDIA Container Toolkit)
- NVIDIA GPU with CUDA support
Tested with:
- NVIDIA GeForce RTX 4090
- CUDA 13.0
- Driver version 580.95.05
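For containers to see the GPU, the Compose file must declare a device reservation. A minimal sketch using the Compose specification's GPU syntax is shown below; the service name and image match the Ollama service above, but the actual compose file in `mycelia/gpu` may differ.

```yaml
services:
  ollama:
    image: ollama/ollama:0.13.5
    deploy:
      resources:
        reservations:
          devices:
            # Reserve all available NVIDIA GPUs for this service.
            - driver: nvidia
              count: all
              capabilities: [gpu]
```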
```sh
# On your GPU machine
git clone https://github.com/mycelia-tech/mycelia.git
cd mycelia/gpu
docker compose up -d --build
```