Description
Ollama is detected during onboarding because localhost:11434 responds on the host, but inference provider setup later fails because sandbox containers cannot reach host.openshell.internal:11434 when Ollama is bound only to loopback (127.0.0.1).
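The mismatch can be demonstrated without NemoClaw itself: a loopback-only listener answers on 127.0.0.1 but refuses connections on the host's other addresses, which is the route a container takes via host.openshell.internal / Docker's host-gateway. A minimal sketch (port 18434 and the Python stand-in server are illustrative, not part of NemoClaw or Ollama):

```shell
# Stand-in for Ollama's default bind: a server listening on loopback only.
python3 -m http.server 18434 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# The installer's health-check path: loopback works.
if curl -sf --max-time 2 http://127.0.0.1:18434/ >/dev/null; then
  loopback_ok=yes
else
  loopback_ok=no
fi

# The container's path: a non-loopback host address is refused.
# (192.0.2.1 is a guaranteed-unreachable fallback if no address is found.)
host_ip=$(hostname -I 2>/dev/null | awk '{print $1}')
if curl -sf --max-time 2 "http://${host_ip:-192.0.2.1}:18434/" >/dev/null; then
  external_ok=yes
else
  external_ok=no
fi

echo "loopback=$loopback_ok external=$external_ok"
kill "$srv" 2>/dev/null
```

This is why detection (a host-side curl to localhost) succeeds while container-side provider setup fails.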
Reproduction Steps
- Start from an Ubuntu machine with Docker CE installed and running.
- Install Ollama.
- Start Ollama so that it binds only to loopback, for example via the default systemd service, or any mode in which:
ss -ltnp | grep 11434
showing:
127.0.0.1:11434
- Verify that Ollama appears healthy from the host:
curl http://localhost:11434/api/tags
- Ensure at least one Ollama model exists, for example:
ollama pull nemotron-3-nano:30b
ollama list
- Run the NemoClaw installer:
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
- During onboarding:
  - enter a valid sandbox name
  - choose Local Ollama
  - choose the available Ollama model
- Continue until onboarding reaches inference provider setup.
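For reference, the usual way to make Ollama reachable from containers is to bind it to all interfaces via the OLLAMA_HOST environment variable (documented in Ollama's FAQ). A systemd drop-in sketch, assuming the default ollama.service unit name:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After `systemctl daemon-reload && systemctl restart ollama`, `ss -ltnp | grep 11434` should show 0.0.0.0:11434 (or [::]:11434) rather than 127.0.0.1:11434. This is a workaround only; the reported issue is that onboarding detects Ollama as healthy and then fails later, rather than surfacing the loopback-only bind up front.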
Environment
- Openshell: 0.0.11
- nemoclaw: 0.1.0 (main: dbfd78c)
- OS: Ubuntu
- Container runtime: Docker CE
- GPU: Available
- Ollama: Installed
- Ollama models: at least one pulled (nemotron-3-nano:30b, per the reproduction steps)
- NemoClaw install method: curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
Debug Output
Logs
Checklist
[NVB#6000934]