Description
When attempting to use browser-use/web-ui with a locally running Ollama instance (using llama3.1:latest), the web-ui application consistently defaults to initializing Provider=openai (and subsequently fails due to a missing API key), even though the Docker container is correctly configured for Ollama.
Expected Behavior:
The web-ui application should initialize the LLM using Provider=ollama and attempt to connect to the local Ollama service, as specified by the environment variables.
Observed Behavior:
Both web-ui and ollama Docker containers start up.
The ollama-llm container is consistently reported as (health: starting) or (unhealthy) by docker compose ps, even though its API is functional and models are loaded.
The web-ui application successfully launches its web interface at http://localhost:7788.
When a task is submitted (e.g., "Tell me a short joke"), the web-ui logs show the following (traceback truncated):

```
INFO Initializing LLM: Provider=openai, Model=gpt-4o, Temp=0.6
ERROR Failed to initialize LLM: 💥 OpenAI API key not found!
...
AttributeError: 'NoneType' object has no attribute '_verified_api_keys'
```
The web-ui either never attempts to initialize Ollama, or the attempt fails silently and it immediately falls back to OpenAI; no explicit Ollama connection error is ever logged from the main LLM initialization path.
Environment:
Operating System: macOS 15.5 (24F74) on Apple M1 Pro
Docker Desktop Version: 4.22.1 (118664)
browser-use/web-ui Version: Cloned from https://github.com/browser-use/web-ui.git on [Current Date, e.g., July 3, 2025] (assumed latest main branch).
Ollama Version: 0.9.4 (as per ollama-llm container logs).
Ollama Model Used: llama3.1:latest
Condensed Steps to Reproduce:
Full Cleanup & Fresh Clone:
```bash
cd [Your web-ui project parent directory]   # e.g., ~/Repos/webui/
docker compose -f web-ui/docker-compose.yml down --volumes --rmi all   # ensure web-ui is clean
docker stop $(docker ps -aq --filter ancestor=ollama/ollama)           # stop any separate Ollama container
docker rm $(docker ps -aq --filter ancestor=ollama/ollama)
docker volume rm ollama_data   # ensure the Ollama volume is clean
rm -rf web-ui                  # remove the old web-ui folder
git clone https://github.com/browser-use/web-ui.git
cd web-ui
```
Setup docker-compose.yml (Unified Services):
Create/open docker-compose.yml in the web-ui directory.
Ensure it defines both the browser-use-webui and ollama services, sharing the ollama_data volume, with the web-ui service declaring depends_on: ollama: condition: service_started. The web-ui service's environment block should reference LLM_PROVIDER=${LLM_PROVIDER:-openai}, OLLAMA_ENDPOINT=${OLLAMA_ENDPOINT:-http://ollama:11434}, and OLLAMA_MODEL=${OLLAMA_MODEL:-llama3.1:latest}; a sketch follows below.
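For reference, here is a minimal sketch of such a file. Only the environment block, the shared volume, and the depends_on condition come from the step above; the build context, published ports, and the healthcheck are illustrative assumptions, not the repository's actual compose file:

```yaml
# Sketch only: build context, ports, and healthcheck are assumptions.
services:
  browser-use-webui:
    build: .
    ports:
      - "7788:7788"
    environment:
      - LLM_PROVIDER=${LLM_PROVIDER:-openai}
      - OLLAMA_ENDPOINT=${OLLAMA_ENDPOINT:-http://ollama:11434}
      - OLLAMA_MODEL=${OLLAMA_MODEL:-llama3.1:latest}
    depends_on:
      ollama:
        condition: service_started

  ollama:
    image: ollama/ollama
    container_name: ollama-llm
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    # A healthcheck like this would explain the (health: starting)/(unhealthy)
    # status seen in `docker compose ps`; the exact check is an assumption.
    healthcheck:
      test: ["CMD", "ollama", "list"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  ollama_data:
```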
Setup .env (Ollama Configuration):
Create/open .env in the web-ui directory.
Set the following (and ensure no OPENAI_API_KEY or DEFAULT_LLM entries are active):

```
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=llama3.1:latest
BROWSER_USE_LOGGING_LEVEL=debug
```
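Before starting anything, you can check that compose actually resolves these values: docker compose config renders the interpolated configuration (the grep pattern is just illustrative):

```bash
# Render the fully resolved compose config; LLM_PROVIDER should show as ollama
docker compose config | grep -iE 'LLM_PROVIDER|OLLAMA_'
```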
Start Services & Download Model:
```bash
docker compose up --build -d
# Wait ~30-60 seconds for Ollama to start
docker exec -it ollama-llm ollama run llama3.1   # downloads the model
# Type '/bye' and press Enter to exit the chat after the download completes
```
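Optionally, confirm the model is present without entering the interactive chat; this queries Ollama's standard /api/tags endpoint on the host-mapped port, assuming 11434 is published as in the sketch above:

```bash
# llama3.1:latest should appear in the returned model list
curl http://localhost:11434/api/tags
```

Alternatively, docker exec ollama-llm ollama pull llama3.1 downloads the model without opening a chat session at all.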
Access Web UI & Submit Task:
Open http://localhost:7788 in your browser.
Submit a simple task (e.g., "Tell me a short joke").
Observe the web-ui logs, as shown below.
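To watch the failure as it happens, tail the web-ui service logs while the task runs (service name as in the compose sketch above):

```bash
# The faulty initialization line appears as soon as the task starts:
#   INFO Initializing LLM: Provider=openai, Model=gpt-4o, Temp=0.6
docker compose logs -f browser-use-webui
```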
Crucial Diagnostic Information to Include (a command sketch for gathering these follows the list):
Full relevant web-ui container logs (showing Initializing LLM: Provider=openai and the crash).
docker compose ps output (showing Ollama (unhealthy) and web-ui (healthy)).
docker inspect output (confirming LLM_PROVIDER=ollama env var).
curl http://localhost:11434/api/tags output (confirming Ollama accessible with model).
Your exact docker-compose.yml file content.
Your exact .env file content.
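A sketch of the commands that produce the items above, assuming the web-ui container ends up named browser-use-webui (substitute whatever name docker compose ps reports):

```bash
# Container status, including the Ollama health state
docker compose ps

# Environment actually injected into the web-ui container (should include LLM_PROVIDER=ollama)
docker inspect browser-use-webui --format '{{json .Config.Env}}'

# Confirm Ollama is reachable from the host and has llama3.1:latest loaded
curl http://localhost:11434/api/tags

# Full web-ui logs showing the Provider=openai initialization and the crash
docker compose logs browser-use-webui
```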