<!--Delete sections as needed -->
## Description
- Add Windows NVIDIA GPU support for DMR
- Add new `push` and `logs` functionality
- Update the Ask Gordon enablement steps to reflect GUI updates
- Known issues cleanup
## Related issues or tickets
<!-- Related issues, pull requests, or Jira tickets -->
## Reviews
<!-- Notes for reviewers here -->
<!-- List applicable reviews (optionally @tag reviewers) -->
- [ ] Technical review
- [ ] Editorial review
- [ ] Product review
---------
Co-authored-by: Sarah Sanders <[email protected]>
`content/manuals/desktop/features/gordon/_index.md` (+2)

@@ -97,6 +97,8 @@ If you have concerns about data collection or usage, you can
 9. Select **Apply & restart**.

+You can also enable Ask Gordon from the **Ask Gordon** tab if you have selected the **Access experimental features** setting. Simply select the **Enable Ask Gordon** button, and then accept the Docker AI terms of service agreement.
+
 ## Using Ask Gordon

 The primary interfaces to Docker's AI capabilities are through the **Ask
`content/manuals/desktop/features/model-runner.md` (+46 −12)
@@ -17,10 +17,15 @@ The Docker Model Runner plugin lets you:
 - [Pull models from Docker Hub](https://hub.docker.com/u/ai)
 - Run AI models directly from the command line
 - Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode
+- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
+- Push models to Docker Hub

 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).

+> [!TIP]
+>
+> Using Testcontainers? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/) now support Docker Model Runner.
+
 ## Enable Docker Model Runner

 1. Navigate to the **Features in development** tab in settings.
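Once Model Runner is enabled and a model is pulled, the OpenAI-compatible API mentioned in the hunk above can be called from any HTTP client. The following is a minimal sketch using only the Python standard library; the base URL assumes host-side TCP access on Model Runner's default port 12434, and both the port and the model name are assumptions to adjust for your setup:

```python
import json
import urllib.request

# Assumed base URL: Docker Model Runner's host-side TCP access on its
# default port (12434). Adjust if your settings expose a different port.
BASE_URL = "http://localhost:12434/engines/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """Send one prompt to the model and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Calling `chat("ai/smollm2", "Hi")` should return a greeting similar to the interactive example shown further down, assuming the model has already been pulled.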
@@ -31,6 +36,8 @@ Models are pulled from Docker Hub the first time they're used and stored locally
 6. Navigate to **Features in development**.
 7. From the **Beta** tab, check the **Enable Docker Model Runner** setting.

+You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
+
 ## Available commands

 ### Model runner status
@@ -84,6 +91,8 @@ Downloaded: 257.71 MB
 Model ai/smollm2 pulled successfully
 ```

+The models also display in the Docker Desktop Dashboard.
+
 ### List available models

 Lists all models currently pulled to your local environment.
@@ -118,7 +127,7 @@ Hello! How can I assist you today?
 #### Interactive chat

 ```console
-docker model run ai/smollm2
+$ docker model run ai/smollm2
 ```

 Output:
@@ -131,6 +140,41 @@ Hi there! It's SmolLM, AI assistant. How can I help you today?
 Chat session ended.
 ```

+> [!TIP]
+>
+> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
+
+### Upload a model to Docker Hub
+
+Use the following command to push your model to Docker Hub:
+
+```console
+$ docker model push <namespace>/<model>
+```
+
+### Tag a model
+
+You can specify a particular version or variant of the model:
+
+```console
+$ docker model tag
+```
+
+If no tag is provided, Docker defaults to `latest`.
+
+### View the logs
+
+Fetch logs from Docker Model Runner to monitor activity or debug issues.
+
+```console
+$ docker model logs
+```
+
+The following flags are accepted:
+
+- `-f`/`--follow`: View logs with real-time streaming
+- `--no-engines`: Exclude inference engine logs from the output
+
 ### Remove a model

 Removes a downloaded model from your system.
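The new `push`, `tag`, and `logs` commands added above compose into a simple publish-and-monitor workflow. The sketch below wraps the CLI from Python via `subprocess`; the namespace and model names are placeholders, and the argument order for `docker model tag` (source, then target) is an assumption based on `docker tag` conventions rather than something this PR documents:

```python
import subprocess


def model_cmd(*args: str) -> list[str]:
    """Build a `docker model` argv list without executing it."""
    return ["docker", "model", *args]


def publish(source: str, target: str) -> None:
    """Tag a local model and push it to Docker Hub.

    The (source, target) argument order is assumed to mirror
    `docker tag`; check `docker model tag --help` on your install.
    """
    subprocess.run(model_cmd("tag", source, target), check=True)
    subprocess.run(model_cmd("push", target), check=True)


def logs_cmd(follow: bool = True, engines: bool = True) -> list[str]:
    """Build a `docker model logs` invocation using the new flags."""
    args = ["logs"]
    if follow:
        args.append("--follow")      # stream logs in real time
    if not engines:
        args.append("--no-engines")  # drop inference engine logs
    return model_cmd(*args)
```

For example, `publish("mymodel", "<namespace>/mymodel:v1")` would tag and push in one step, and `subprocess.run(logs_cmd(engines=False))` would stream Model Runner logs without engine output, assuming Docker Desktop with Model Runner enabled.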
@@ -308,20 +352,10 @@ Once linked, re-run the command.

 Currently, Docker Model Runner doesn't include safeguards to prevent you from launching models that exceed your system's available resources. Attempting to run a model that is too large for the host machine may result in severe slowdowns or render the system temporarily unusable. This issue is particularly common when running LLMs without sufficient GPU memory or system RAM.

-### `model run` drops into chat even if pull fails
-
-If a model image fails to pull successfully, for example due to network issues or lack of disk space, the `docker model run` command will still drop you into the chat interface, even though the model isn't actually available. This can lead to confusion, as the chat will not function correctly without a running model.
-
-You can manually retry the `docker model pull` command to ensure the image is available before running it again.
-
 ### No consistent digest support in Model CLI

 The Docker Model CLI currently lacks consistent support for specifying models by image digest. As a temporary workaround, you should refer to models by name instead of digest.

-### Misleading pull progress after failed initial attempt
-
-In some cases, if an initial `docker model pull` fails partway through, a subsequent successful pull may misleadingly report "0 bytes" downloaded even though data is being fetched in the background. This can give the impression that nothing is happening, when in fact the model is being retrieved. Despite the incorrect progress output, the pull typically completes as expected.
-
 ## Share feedback

 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.