## Summary
On nemoclaw v0.0.13, I could not complete a clean/supported setup that uses OpenAI/ChatGPT OAuth as NemoClaw's inference backend, even though the overall NemoClaw/OpenClaw stack strongly suggests this should be possible in principle.
What ultimately worked was a manual workaround:

- install the host `openclaw` CLI separately
- complete OpenAI OAuth there
- expose host OpenClaw as an OpenAI-compatible gateway
- point OpenShell/NemoClaw's `compatible-endpoint` provider at that host gateway
So the containerized NemoClaw/OpenShell/OpenClaw runtime can work with OpenAI OAuth indirectly, but there does not appear to be a first-class NemoClaw path for it.
## Environment

- OS: Linux Mint 22.3
- `nemoclaw v0.0.13`
- `openshell 0.0.26`
- Host `openclaw 2026.4.10`
- Docker installed and working
- No GPU in this VM
## What I expected
Because NemoClaw bundles OpenClaw/OpenShell and advertises OpenClaw-based secure sandboxing, I expected one of these to work cleanly:
- a native NemoClaw/OpenShell provider option for OpenClaw/OpenAI OAuth
- or a documented way to import/use existing OpenClaw auth profiles
- or a supported way to route NemoClaw inference through an OpenClaw gateway without custom patching and manual bridge setup
## What actually happened

### 1. NemoClaw onboarding does not offer a clean OpenAI OAuth path
During `nemoclaw onboard`, the relevant provider choices are API-oriented (`OpenAI`, `Other OpenAI-compatible endpoint`, etc.). None of them appears to support native OpenAI/ChatGPT OAuth directly.
### 2. Host OpenClaw OAuth worked only after local patching
I had to install the host `openclaw` CLI separately and use its OAuth flow.
That initially failed because the shipped OpenClaw OAuth flow requested invalid scopes. I filed that upstream here:
After patching the installed OpenClaw CLI locally, OAuth succeeded.
### 3. NemoClaw/OpenShell could only use OAuth indirectly via a manual bridge
To make NemoClaw actually work with that OAuth-backed model path, I had to:

- run host `openclaw gateway`
- configure it with token auth and a non-loopback bind
- disable its control UI for bridge use
- point the OpenShell `compatible-endpoint` provider at `http://host.docker.internal:18889/v1`
- raise the OpenShell inference timeout from `60s` to `180s`
Only after that did the sandboxed NemoClaw path succeed via:

- host gateway: `http://127.0.0.1:18889/v1/chat/completions`
- sandbox route: `https://inference.local/v1/chat/completions`
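To sanity-check the bridge independently of NemoClaw, it helps to issue a plain OpenAI-compatible chat completion request against the host gateway. The sketch below only builds the request; the token and model name are placeholders for my local setup, not official NemoClaw/OpenClaw values, and actually sending it requires the gateway to be running (note the raised timeout):

```python
# Minimal sketch of the request shape the bridged path relies on.
# GATEWAY_TOKEN and the model name are placeholders, not real values.
import json
import urllib.request

GATEWAY_URL = "http://host.docker.internal:18889/v1/chat/completions"
GATEWAY_TOKEN = "replace-with-your-gateway-token"  # hypothetical

def build_chat_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for the host gateway."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {GATEWAY_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("ping")
# With the bridge up, send it with the same raised timeout OpenShell needed:
# urllib.request.urlopen(req, timeout=180)
```

If this request succeeds from the host but the sandboxed route fails, the problem is on the NemoClaw/OpenShell side rather than in the gateway itself.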
## Why I think this is a NemoClaw issue too
The OpenClaw scope bug is upstream to OpenClaw, but NemoClaw still seems to have a product/integration gap here:
- no first-class OpenAI OAuth onboarding path
- no documented host OpenClaw gateway bridge workflow
- no import/reuse of host OpenClaw auth state
- the default OpenShell inference timeout was too short for this routed, OAuth-backed path
So today, a user trying to use ChatGPT/OpenAI OAuth with NemoClaw ends up in an awkward place where:
- OpenClaw can authenticate
- NemoClaw can sandbox
- but the official path between those two is missing or undocumented
## Reproduction (high level)
- Install Docker and `nemoclaw`
- Run `nemoclaw onboard`
- Try to choose a provider path that uses OpenAI/ChatGPT OAuth
- Observe there is no clean first-class option
- Install host `openclaw` separately and complete OAuth there
- Try to make NemoClaw use that auth state directly
- Observe that this does not work without a manual OpenAI-compatible bridge/gateway setup
## Workaround that eventually worked
- install host `openclaw`
- patch host OpenClaw OAuth scopes locally so login succeeds
- run host OpenClaw as an OpenAI-compatible gateway
- configure the NemoClaw/OpenShell `compatible-endpoint` provider to use that gateway
- increase the OpenShell inference timeout to `180s`
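For reference, the provider configuration I converged on looked roughly like the fragment below. This is an illustrative sketch only: the key names are assumptions, not OpenShell's documented schema, and the token is a placeholder (I'm deliberately keeping exact local config out of this report).

```toml
# Hypothetical shape of the OpenShell compatible-endpoint provider config.
# Key names are illustrative assumptions, not the documented schema.
[provider.compatible-endpoint]
base_url = "http://host.docker.internal:18889/v1"
api_key = "replace-with-your-gateway-token"
timeout_seconds = 180  # the default 60s was too short for the routed OAuth-backed path
```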
## Request
Please consider one or more of these improvements:
- Add a first-class NemoClaw/OpenShell path for OpenClaw/OpenAI OAuth.
- Document a supported bridge workflow using host OpenClaw gateway.
- Support importing or referencing host OpenClaw auth profiles from NemoClaw/OpenShell.
- Revisit default routed inference timeouts for slower OAuth-backed/gateway-backed providers.
- Clarify in docs which provider/auth combinations are officially supported today.
## Notes
I can provide more exact local config details if helpful, but I wanted to keep this report focused on the user-visible integration gap rather than my local patches.