
Commit 1e394dd

ENGDOCS-2572 (#22466)
## Description

- Add Windows NVIDIA GPU support for DMR
- Add new `push` and `logs` functionality
- Update the enable Gordon process to reflect GUI updates
- Known issues cleanup

## Reviews

- [ ] Technical review
- [ ] Editorial review
- [ ] Product review

---

Co-authored-by: Sarah Sanders <[email protected]>
1 parent ef5c24d commit 1e394dd

File tree

3 files changed: +49 −13 lines changed


content/manuals/desktop/features/gordon/_index.md

+2
@@ -97,6 +97,8 @@ If you have concerns about data collection or usage, you can

 9. Select **Apply & restart**.

+You can also enable Ask Gordon from the **Ask Gordon** tab if you have selected the **Access experimental features** setting. Select the **Enable Ask Gordon** button, and then accept the Docker AI terms of service agreement.
+
 ## Using Ask Gordon

 The primary interfaces to Docker's AI capabilities are through the **Ask

content/manuals/desktop/features/model-runner.md

+46 −12
@@ -17,10 +17,15 @@ The Docker Model Runner plugin lets you:

 - [Pull models from Docker Hub](https://hub.docker.com/u/ai)
 - Run AI models directly from the command line
 - Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode
+- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
+- Push models to Docker Hub

 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).

+> [!TIP]
+>
+> Using Testcontainers? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner.
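As a minimal sketch of the OpenAI-compatible API mentioned above: the base URL below assumes host-side TCP access on the default port (`localhost:12434`) with the `engines/v1` path prefix; both are assumptions to adjust for your setup, not a definitive reference.

```python
import json
import urllib.request

# Assumed endpoint for Docker Model Runner's OpenAI-compatible API;
# adjust the host, port, and path prefix for your configuration.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(model: str, prompt: str) -> str:
    """Send the request and return the first completion's text."""
    with urllib.request.urlopen(build_chat_request(model, prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running Model Runner with the model pulled):
# print(ask("ai/smollm2", "Say hello in one sentence."))
```

Because the request builder is separate from the send step, the payload can be inspected without a running server.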
 ## Enable Docker Model Runner

 1. Navigate to the **Features in development** tab in settings.
@@ -31,6 +36,8 @@ Models are pulled from Docker Hub the first time they're used and stored locally

 6. Navigate to **Features in development**.
 7. From the **Beta** tab, check the **Enable Docker Model Runner** setting.

+You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
+
 ## Available commands

 ### Model runner status
@@ -84,6 +91,8 @@ Downloaded: 257.71 MB
 Model ai/smollm2 pulled successfully
 ```

+The models also display in the Docker Desktop Dashboard.
+
 ### List available models

 Lists all models currently pulled to your local environment.
@@ -118,7 +127,7 @@ Hello! How can I assist you today?
 #### Interactive chat

 ```console
-docker model run ai/smollm2
+$ docker model run ai/smollm2
 ```

 Output:
@@ -131,6 +140,41 @@ Hi there! It's SmolLM, AI assistant. How can I help you today?
 Chat session ended.
 ```

+> [!TIP]
+>
+> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
+
+### Upload a model to Docker Hub
+
+Use the following command to push your model to Docker Hub:
+
+```console
+$ docker model push <namespace>/<model>
+```
+
+### Tag a model
+
+You can specify a particular version or variant of the model:
+
+```console
+$ docker model tag
+```
+
+If no tag is provided, Docker defaults to `latest`.
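To illustrate how tagging and pushing fit together, here is a small Python sketch that only assembles the CLI invocations and does not run Docker. It assumes `docker model tag` follows the `docker tag` convention of source then target, and `myorg/smollm2:v1` is a hypothetical name.

```python
def tag_and_push_commands(source: str, target: str) -> list[list[str]]:
    """Build the `docker model` invocations for tagging, then pushing.

    Only constructs argument lists (never invokes Docker), so the flow
    can be inspected or tested without a Docker daemon. Assumes the
    tag subcommand takes SOURCE then TARGET, like `docker tag`.
    """
    # If `target` carries no tag, the CLI itself defaults to `latest`.
    return [
        ["docker", "model", "tag", source, target],
        ["docker", "model", "push", target],
    ]

# Hypothetical example: retag a pulled model under your own namespace.
for cmd in tag_and_push_commands("ai/smollm2", "myorg/smollm2:v1"):
    print(" ".join(cmd))
# → docker model tag ai/smollm2 myorg/smollm2:v1
# → docker model push myorg/smollm2:v1
```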
+### View the logs
+
+Fetch logs from Docker Model Runner to monitor activity or debug issues.
+
+```console
+$ docker model logs
+```
+
+The following flags are accepted:
+
+- `-f`/`--follow`: View logs with real-time streaming
+- `--no-engines`: Exclude inference engine logs from the output

 ### Remove a model

 Removes a downloaded model from your system.
@@ -308,20 +352,10 @@ Once linked, re-run the command.

 Currently, Docker Model Runner doesn't include safeguards to prevent you from launching models that exceed your system's available resources. Attempting to run a model that is too large for the host machine may result in severe slowdowns or render the system temporarily unusable. This issue is particularly common when running LLMs without sufficient GPU memory or system RAM.

-### `model run` drops into chat even if pull fails
-
-If a model image fails to pull successfully, for example due to network issues or lack of disk space, the `docker model run` command will still drop you into the chat interface, even though the model isn't actually available. This can lead to confusion, as the chat will not function correctly without a running model.
-
-You can manually retry the `docker model pull` command to ensure the image is available before running it again.
-
 ### No consistent digest support in Model CLI

 The Docker Model CLI currently lacks consistent support for specifying models by image digest. As a temporary workaround, refer to models by name instead of digest.

-### Misleading pull progress after failed initial attempt
-
-In some cases, if an initial `docker model pull` fails partway through, a subsequent successful pull may misleadingly report "0 bytes" downloaded even though data is being fetched in the background. This can give the impression that nothing is happening, when in fact the model is being retrieved. Despite the incorrect progress output, the pull typically completes as expected.
-
 ## Share feedback

 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.

data/summary.yaml

+1 −1
@@ -147,7 +147,7 @@ Docker GitHub Copilot:
 Docker Model Runner:
   availability: Beta
   requires: Docker Desktop 4.40 and later
-  for: Docker Desktop for Mac with Apple Silicon
+  for: Docker Desktop for Mac with Apple Silicon or Windows with NVIDIA GPUs
 Docker Projects:
   availability: Beta
 Docker Init:
