diff --git a/container-toolkit/arch-overview.md b/container-toolkit/arch-overview.md
index 10835da5b..69735f161 100644
--- a/container-toolkit/arch-overview.md
+++ b/container-toolkit/arch-overview.md
@@ -78,7 +78,7 @@ This component is included in the `nvidia-container-toolkit` package.
This component includes an executable that implements the interface required by a `runC` `prestart` hook. This script is invoked by `runC`
after a container has been created, but before it has been started, and is given access to the `config.json` associated with the container
-(e.g. this [config.json](https://github.com/opencontainers/runtime-spec/blob/master/config.md#configuration-schema-example=) ). It then takes
+(such as this [config.json](https://github.com/opencontainers/runtime-spec/blob/master/config.md#configuration-schema-example=)). It then takes
information contained in the `config.json` and uses it to invoke the `nvidia-container-cli` CLI with an appropriate set of flags. One of the
most important flags being which specific GPU devices should be injected into the container.
@@ -111,7 +111,7 @@ To use Kubernetes with Docker, you need to configure the Docker `daemon.json` to
a reference to the NVIDIA Container Runtime and set this runtime as the default. The NVIDIA Container Toolkit contains a utility to update this file
as highlighted in the `docker`-specific installation instructions.
-See the {doc}`install-guide` for more information on installing the NVIDIA Container Toolkit on various Linux distributions.
+Refer to the {doc}`install-guide` for more information on installing the NVIDIA Container Toolkit on various Linux distributions.
### Package Repository
@@ -130,7 +130,7 @@ For the different components:
:::{note}
As of the release of version `1.6.0` of the NVIDIA Container Toolkit the packages for all components are
-published to the `libnvidia-container` `repository ` listed above. For older package versions please see the documentation archives.
+published to the `libnvidia-container` repository listed above. For older package versions, refer to the documentation archives.
:::
Releases of the software are also hosted on `experimental` branch of the repository and are graduated to `stable` after test/validation. To get access to the latest
diff --git a/container-toolkit/cdi-support.md b/container-toolkit/cdi-support.md
index 105c0cdff..de542d36c 100644
--- a/container-toolkit/cdi-support.md
+++ b/container-toolkit/cdi-support.md
@@ -1,6 +1,7 @@
% Date: November 11 2022
-% Author: elezar
+% Author: elezar (elezar@nvidia.com)
+% Author: ArangoGutierrez (eduardoa@nvidia.com)
% headings (h1/h2/h3/h4/h5) are # * = -
@@ -29,54 +30,134 @@ CDI also improves the compatibility of the NVIDIA container stack with certain f
- You installed an NVIDIA GPU Driver.
-### Procedure
+### Automatic CDI Specification Generation
-Two common locations for CDI specifications are `/etc/cdi/` and `/var/run/cdi/`.
-The contents of the `/var/run/cdi/` directory are cleared on boot.
+As of NVIDIA Container Toolkit `v1.18.0`, the CDI specification is automatically generated and updated by a systemd service called `nvidia-cdi-refresh`. This service:
-However, the path to create and use can depend on the container engine that you use.
+- Automatically generates the CDI specification at `/var/run/cdi/nvidia.yaml` when:
+ - The NVIDIA Container Toolkit is installed or upgraded
+ - The NVIDIA GPU drivers are installed or upgraded
+ - The system is rebooted
-1. Generate the CDI specification file:
+This ensures that the CDI specifications are up to date for the current driver
+and device configuration and that the CDI devices defined in these specifications are
+available when using native CDI support in container engines such as Docker or Podman.
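+
+To confirm that a specification has been generated (and to see when it was last refreshed), you can
+check the file directly, for example:
+
+```console
+$ ls -l /var/run/cdi/nvidia.yaml
+```
+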
- ```console
- $ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
- ```
-
- The sample command uses `sudo` to ensure that the file at `/etc/cdi/nvidia.yaml` is created.
- You can omit the `--output` argument to print the generated specification to `STDOUT`.
+Running the following command lists the available CDI devices:
+```console
+$ nvidia-ctk cdi list
+```
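+
+For example, on a machine with a single GPU that does not support MIG, the list includes entries
+similar to the following (the exact device names depend on your hardware):
+
+```output
+nvidia.com/gpu=all
+nvidia.com/gpu=0
+```
+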
- *Example Output*
+#### Known Limitations
+The `nvidia-cdi-refresh` service does not currently handle the following situations:
- ```output
- INFO[0000] Auto-detected mode as "nvml"
- INFO[0000] Selecting /dev/nvidia0 as /dev/nvidia0
- INFO[0000] Selecting /dev/dri/card1 as /dev/dri/card1
- INFO[0000] Selecting /dev/dri/renderD128 as /dev/dri/renderD128
- INFO[0000] Using driver version xxx.xxx.xx
- ...
- ```
+- The removal of NVIDIA GPU drivers
+- The reconfiguration of MIG devices
-1. (Optional) Check the names of the generated devices:
+For these scenarios, the regeneration of CDI specifications must be [manually triggered](#manual-cdi-specification-generation).
- ```console
- $ nvidia-ctk cdi list
- ```
+#### Customizing the Automatic CDI Refresh Service
+The behavior of the `nvidia-cdi-refresh` service can be customized by adding environment variables
+to `/etc/nvidia-container-toolkit/cdi-refresh.env`. These variables affect the behavior of the
+`nvidia-ctk cdi generate` command that the service runs.
- The following example output is for a machine with a single GPU that does not support MIG.
+As an example, to enable debug logging, update the configuration file as follows:
+```bash
+# /etc/nvidia-container-toolkit/cdi-refresh.env
+NVIDIA_CTK_DEBUG=1
+```
- ```output
- INFO[0000] Found 9 CDI devices
- nvidia.com/gpu=all
- nvidia.com/gpu=0
- ```
+For a complete list of available environment variables, run `nvidia-ctk cdi generate --help` to see the command's documentation.
```{important}
-You must generate a new CDI specification after any of the following changes:
+Modifications to the environment file require a reload of the systemd daemon and a restart of the
+service to take effect:
+```
+
+```console
+$ sudo systemctl daemon-reload
+$ sudo systemctl restart nvidia-cdi-refresh.service
+```
+
+#### Managing the CDI Refresh Service
+
+The `nvidia-cdi-refresh` service consists of two systemd units:
+
+- `nvidia-cdi-refresh.path`: Monitors for changes to the system and triggers the service.
+- `nvidia-cdi-refresh.service`: Generates the CDI specifications for all available devices based on
+ the default configuration and any overrides in the environment file.
+
+These units can be managed using standard systemd commands.
+
+When working as expected, the `nvidia-cdi-refresh.path` unit is enabled and active, and the
+`nvidia-cdi-refresh.service` unit is enabled and has run at least once. For example:
+
+```console
+$ sudo systemctl status nvidia-cdi-refresh.path
+● nvidia-cdi-refresh.path - Trigger CDI refresh on NVIDIA driver install / uninstall events
+ Loaded: loaded (/etc/systemd/system/nvidia-cdi-refresh.path; enabled; preset: enabled)
+ Active: active (waiting) since Fri 2025-06-27 06:04:54 EDT; 1h 47min ago
+ Triggers: ● nvidia-cdi-refresh.service
+```
+
+```console
+$ sudo systemctl status nvidia-cdi-refresh.service
+○ nvidia-cdi-refresh.service - Refresh NVIDIA CDI specification file
+ Loaded: loaded (/etc/systemd/system/nvidia-cdi-refresh.service; enabled; preset: enabled)
+ Active: inactive (dead) since Fri 2025-06-27 07:17:26 EDT; 34min ago
+TriggeredBy: ● nvidia-cdi-refresh.path
+ Process: 1317511 ExecStart=/usr/bin/nvidia-ctk cdi generate --output=/var/run/cdi/nvidia.yaml (code=exited, status=0/SUCCESS)
+ Main PID: 1317511 (code=exited, status=0/SUCCESS)
+ CPU: 562ms
+...
+```
+
+If these units are not enabled as expected, enable them by running:
+
+```console
+$ sudo systemctl enable --now nvidia-cdi-refresh.path
+$ sudo systemctl enable --now nvidia-cdi-refresh.service
+```
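+
+After enabling the units, their state can be verified, for example:
+
+```console
+$ systemctl is-enabled nvidia-cdi-refresh.path nvidia-cdi-refresh.service
+```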
+
+#### Troubleshooting CDI Specification Generation and Resolution
+
+If CDI specifications for the available devices are not generated or updated as expected, start by
+checking the logs of the `nvidia-cdi-refresh.service` unit:
+
+```console
+$ sudo journalctl -u nvidia-cdi-refresh.service
+```
+
+In most cases, restarting the service should be sufficient to trigger the (re)generation
+of CDI specifications:
+
+```console
+$ sudo systemctl restart nvidia-cdi-refresh.service
+```
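+
+If the problem is reproducible, it can also be helpful to watch the logs while restarting the
+service; the `journalctl -f` flag follows new log entries as they arrive, for example:
+
+```console
+$ sudo journalctl -u nvidia-cdi-refresh.service -f
+```
+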
-- You change the device or CUDA driver configuration.
-- You use a location such as `/var/run/cdi` that is cleared on boot.
+Running the following command:
-A configuration change can occur when MIG devices are created or removed, or when the driver is upgraded.
+```console
+$ nvidia-ctk --debug cdi list
+```
+will show a list of available CDI devices as well as any errors that may have
+occurred when loading CDI specifications from `/etc/cdi` or `/var/run/cdi`.
+
+### Manual CDI Specification Generation
+
+As of NVIDIA Container Toolkit `v1.18.0`, the recommended mechanism to regenerate CDI specifications is to restart the `nvidia-cdi-refresh.service` unit:
+
+```console
+$ sudo systemctl restart nvidia-cdi-refresh.service
+```
+
+If this does not work, or more flexibility is required, the `nvidia-ctk cdi generate` command
+can be used directly:
+
+```console
+$ sudo nvidia-ctk cdi generate --output=/var/run/cdi/nvidia.yaml
```
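+
+For more control over the output, the `--output` argument can be omitted to print the generated
+specification to `STDOUT`, or pointed at a persistent location such as `/etc/cdi/nvidia.yaml`;
+note that the contents of `/var/run/cdi/` are cleared on boot, while `/etc/cdi/` is not:
+
+```console
+$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
+```
+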
## Running a Workload with CDI
diff --git a/container-toolkit/docker-specialized.md b/container-toolkit/docker-specialized.md
index 77ab31bf6..fab8d6674 100644
--- a/container-toolkit/docker-specialized.md
+++ b/container-toolkit/docker-specialized.md
@@ -206,7 +206,7 @@ The supported constraints are provided below:
- constraint on the compute architectures of the selected GPUs.
* - ``brand``
- - constraint on the brand of the selected GPUs (e.g. GeForce, Tesla, GRID).
+ - constraint on the brand of the selected GPUs (such as GeForce, Tesla, GRID).
```
Multiple constraints can be expressed in a single environment variable: space-separated constraints are ORed,
diff --git a/container-toolkit/index.md b/container-toolkit/index.md
index 6c07e25c4..de4a35d14 100644
--- a/container-toolkit/index.md
+++ b/container-toolkit/index.md
@@ -35,5 +35,5 @@ The NVIDIA Container Toolkit is a collection of libraries and utilities enabling
## License
The NVIDIA Container Toolkit (and all included components) is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) and
-contributions are accepted with a Developer Certificate of Origin (DCO). See the [contributing](https://github.com/NVIDIA/nvidia-container-toolkit/blob/master/CONTRIBUTING.md) document for
+contributions are accepted with a Developer Certificate of Origin (DCO). Refer to the [contributing](https://github.com/NVIDIA/nvidia-container-toolkit/blob/master/CONTRIBUTING.md) document for
more information.
diff --git a/container-toolkit/install-guide.md b/container-toolkit/install-guide.md
index b28fc71d1..3c6b836b4 100644
--- a/container-toolkit/install-guide.md
+++ b/container-toolkit/install-guide.md
@@ -21,7 +21,7 @@ Alternatively, you can install the driver by [downloading](https://www.nvidia.co
```{note}
There is a [known issue](troubleshooting.md#containers-losing-access-to-gpus-with-error-failed-to-initialize-nvml-unknown-error) on systems
where `systemd` cgroup drivers are used that cause containers to lose access to requested GPUs when
-`systemctl daemon reload` is run. Please see the troubleshooting documentation for more information.
+`systemctl daemon reload` is run. Refer to the troubleshooting documentation for more information.
```
(installing-with-apt)=
@@ -31,6 +31,12 @@ where `systemd` cgroup drivers are used that cause containers to lose access to
```{note}
These instructions [should work](./supported-platforms.md) for any Debian-derived distribution.
```
+1. Install the prerequisites for the instructions below:
+ ```console
+   $ sudo apt-get update && sudo apt-get install -y --no-install-recommends \
+ curl \
+ gnupg2
+ ```
1. Configure the production repository:
@@ -78,6 +84,12 @@ where `systemd` cgroup drivers are used that cause containers to lose access to
These instructions [should work](./supported-platforms.md) for many RPM-based distributions.
```
+1. Install the prerequisites for the instructions below:
+ ```console
+ $ sudo dnf install -y \
+ curl
+ ```
+
1. Configure the production repository:
```console
@@ -186,8 +198,10 @@ follow these steps:
$ sudo nvidia-ctk runtime configure --runtime=containerd
```
- The `nvidia-ctk` command modifies the `/etc/containerd/config.toml` file on the host.
- The file is updated so that containerd can use the NVIDIA Container Runtime.
+ By default, the `nvidia-ctk` command creates a `/etc/containerd/conf.d/99-nvidia.toml`
+ drop-in config file and modifies (or creates) the `/etc/containerd/config.toml` file
+ to ensure that the `imports` config option is updated accordingly. The drop-in file
+ ensures that containerd can use the NVIDIA Container Runtime.
1. Restart containerd:
@@ -201,7 +215,7 @@ No additional configuration is needed.
You can just run `nerdctl run --gpus=all`, with root or without root.
You do not need to run the `nvidia-ctk` command mentioned above for Kubernetes.
-See also the [nerdctl documentation](https://github.com/containerd/nerdctl/blob/main/docs/gpu.md).
+Refer to the [nerdctl documentation](https://github.com/containerd/nerdctl/blob/main/docs/gpu.md) for more information.
### Configuring CRI-O
@@ -211,8 +225,8 @@ See also the [nerdctl documentation](https://github.com/containerd/nerdctl/blob/
$ sudo nvidia-ctk runtime configure --runtime=crio
```
- The `nvidia-ctk` command modifies the `/etc/crio/crio.conf` file on the host.
- The file is updated so that CRI-O can use the NVIDIA Container Runtime.
+ By default, the `nvidia-ctk` command creates a `/etc/crio/conf.d/99-nvidia.toml`
+ drop-in config file. The drop-in file ensures that CRI-O can use the NVIDIA Container Runtime.
1. Restart the CRI-O daemon:
@@ -229,7 +243,6 @@ See also the [nerdctl documentation](https://github.com/containerd/nerdctl/blob/
For Podman, NVIDIA recommends using [CDI](./cdi-support.md) for accessing NVIDIA devices in containers.
-
## Next Steps
- [](./sample-workload.md)
\ No newline at end of file
diff --git a/container-toolkit/release-notes.md b/container-toolkit/release-notes.md
index e3ab5bfbd..3b0d72594 100644
--- a/container-toolkit/release-notes.md
+++ b/container-toolkit/release-notes.md
@@ -8,6 +8,65 @@
This document describes the new features, improvements, fixes and known issues for the NVIDIA Container Toolkit.
+## NVIDIA Container Toolkit 1.18.0
+
+This release of the NVIDIA Container Toolkit `v1.18.0` is a feature release with the following high-level changes:
+- The default mode of the NVIDIA Container Runtime has been updated to make use
+ of a just-in-time-generated CDI specification instead of defaulting to the `legacy` mode.
+- Added a systemd unit to generate CDI specifications for available devices automatically. This allows
+ native CDI support in container engines such as Docker and Podman to be used without additional steps.
+
+### Deprecation Notices
+- The OCI `hook`-based config mode for cri-o is now deprecated. Updating the cri-o config through a
+ drop-in config file is now the recommended mechanism to configure this container engine.
+- The `chmod` CDI hook is deprecated. It was implemented as a workaround for a `crun` issue that has
+  since been resolved. The inclusion of this hook can still be
+  triggered when explicitly generating CDI specifications.
+- The `legacy` mode of the NVIDIA Container Runtime is deprecated. It is no longer the _default_ mode
+  when the `nvidia-container-runtime` is used. It is still supported for use cases where it is
+  _required_.
+
+### Packaging Changes
+- The Container Toolkit now requires that the version of the `libnvidia-container*` libraries _exactly_ match the version of the `nvidia-container-toolkit*` packages.
+- This release no longer publishes `ppc64le` packages.
+
+### Fixes and Features
+- Added automatic generation of CDI specifications for available devices.
+- Update the behavior of the `update-ldcache` hook to ALWAYS create an ldcache in the container
+ even if ldconfig is not present in the container being run.
+- Disable the injection of the `chmod` CDI hook by default. The inclusion of this hook can still be
+ triggered when explicitly generating CDI specifications.
+- The generated CDI specification will include `.so` (development) symlinks for ALL driver libraries
+ if these exist on the host.
+- The `nvidia-ctk cdi generate` command loads select settings from the `config.toml` file when generating
+ CDI specifications.
+- Allow CDI hooks to be explicitly disabled or enabled when using the `nvidia-ctk cdi generate` command
+ or the `nvcdi` API.
+
+#### Enhancements to libnvidia-container
+- Add clock_gettime to the set of allowed syscalls under seccomp. This allows ldconfig from distributions
+ such as Arch Linux to be run in the container.
+
+#### Enhancements to container-toolkit Container Images
+- Switched to a single image (based on a distroless base image) for all target platforms.
+- Default to using drop-in config files to add `nvidia` runtime definitions to containerd and cri-o.
+
+### Included Packages
+
+The following packages are included:
+
+- `nvidia-container-toolkit 1.18.0`
+- `nvidia-container-toolkit-base 1.18.0`
+- `libnvidia-container-tools 1.18.0`
+- `libnvidia-container1 1.18.0`
+
+The following `container-toolkit` containers are included:
+
+- `nvcr.io/nvidia/k8s/container-toolkit:v1.18.0`
+- `nvcr.io/nvidia/k8s/container-toolkit:v1.18.0-packaging`
+
+
## NVIDIA Container Toolkit 1.17.8
This release of the NVIDIA Container Toolkit `v1.17.8` is a bugfix release.
@@ -44,7 +103,7 @@ This release of the NVIDIA Container Toolkit `v1.17.7` is a bugfix and minor fea
### Fixes and Features
- Fixed mode detection on Thor-based systems. With this change, the runtime mode correctly resolves to `csv`.
- Fixed the resolution of libraries in the LDCache on ARM. This fixes CDI spec generation on ARM-based systems using NVML.
-- Added a `nvidia-container-runtime-modes.legacy.cuda-compat-mode` option to provide finer control of how CUDA Forward Compatibility is handled. The default value (`ldconfig`) fixes CUDA Compatibility Support in cases where only the NVIDIA Container Runtime Hook is used (e.g. the Docker `--gpus` command line flag).
+- Added a `nvidia-container-runtime-modes.legacy.cuda-compat-mode` option to provide finer control of how CUDA Forward Compatibility is handled. The default value (`ldconfig`) fixes CUDA Compatibility Support in cases where only the NVIDIA Container Runtime Hook is used (such as the Docker `--gpus` command line flag).
- Improved the `update-ldcache` hook to run in isolated namespaces. This improves hook security.
@@ -255,7 +314,7 @@ The following packages are included:
- `libnvidia-container-tools 1.17.2`
- `libnvidia-container1 1.17.2`
-The following `container-toolkit` conatiners are included:
+The following `container-toolkit` containers are included:
- `nvcr.io/nvidia/k8s/container-toolkit:v1.17.2-ubi8`
- `nvcr.io/nvidia/k8s/container-toolkit:v1.17.2-ubuntu20.04` (also as `nvcr.io/nvidia/k8s/container-toolkit:v1.17.2`)
@@ -327,7 +386,7 @@ The following `container-toolkit` conatiners are included:
- Added support for requesting IMEX channels as volume mounts.
- Added a `disable-imex-channel-creation` feature flag to disable the creation of IMEX channel device nodes when creating a container.
- Added IMEX channel device nodes to the CDI specifications in `management` mode.
-- Added the creation of select driver symlinks (e.g. `libcuda.so`) in CDI specification generation to match the behavior in the `legacy` mode.
+- Added the creation of select driver symlinks (such as `libcuda.so`) in CDI specification generation to match the behavior in the `legacy` mode.
### Enhancements to container-toolkit Container Images
@@ -370,7 +429,7 @@ The following `container-toolkit` conatiners are included:
### Fixes and Features
- Excluded `libnvidia-allocator` from graphics mounts. This fixes a bug that leaks mounts when a container is started with bi-directional mount propagation.
-- Used empty string for default `runtime-config-override`. This removes a redundant warning for runtimes (e.g. Docker) where this is not applicable.
+- Used empty string for default `runtime-config-override`. This removes a redundant warning for runtimes (such as Docker) where this is not applicable.
### Enhancements to container-toolkit Container Images
@@ -807,7 +866,7 @@ The following `container-toolkit` containers are included:
### Fixes and Features
-- Fixed a bug which would cause the update of an ldcache in the container to fail for images that do no use ldconfig (e.g. `busybox`).
+- Fixed a bug which would cause the update of an ldcache in the container to fail for images that do not use ldconfig (such as `busybox`).
- Fixed a bug where a failure to determine the CUDA driver version would cause the container to fail to start if `NVIDIA_DRIVER_CAPABILITIES` included `graphics` or `display` on Debian systems.
- Fixed CDI specification generation on Debian systems.
@@ -1001,7 +1060,7 @@ Note that this release does not include an update to `nvidia-docker2` and is com
- Add `cdi` mode to NVIDIA Container Runtime
- Add discovery of GPUDirect Storage (`nvidia-fs*`) devices if the `NVIDIA_GDS` environment variable of the container is set to `enabled`
- Add discovery of MOFED Infiniband devices if the `NVIDIA_MOFED` environment variable of the container is set to `enabled`
-- Add `nvidia-ctk runtime configure` command to configure the Docker config file (e.g. `/etc/docker/daemon.json`) for use with the NVIDIA Container Runtime.
+- Add `nvidia-ctk runtime configure` command to configure the Docker config file (such as `/etc/docker/daemon.json`) for use with the NVIDIA Container Runtime.
#### specific to libnvidia-container
@@ -1086,7 +1145,7 @@ The following packages have also been updated to depend on `nvidia-container-too
- Bump `libtirpc` to `1.3.2`
- Fix bug when running host ldconfig using glibc compiled with a non-standard prefix
- Add `libcudadebugger.so` to list of compute libraries
-- \[WSL2\] Fix segmentation fault on WSL2s system with no adpaters present (e.g. `/dev/dxg` missing)
+- \[WSL2\] Fix segmentation fault on WSL2 systems with no adapters present (such as `/dev/dxg` missing)
- Ignore pending MIG mode when checking if a device is MIG enabled
- \[WSL2\] Fix bug where `/dev/dxg` is not mounted when `NVIDIA_DRIVER_CAPABILITIES` does not include "compute"
diff --git a/container-toolkit/sample-workload.md b/container-toolkit/sample-workload.md
index 3b19550a7..fe6be7444 100644
--- a/container-toolkit/sample-workload.md
+++ b/container-toolkit/sample-workload.md
@@ -21,7 +21,7 @@ you can verify your installation by running a sample workload.
## Running a Sample Workload with Podman
-After you install and configura the toolkit (including [generating a CDI specification](cdi-support.md)) and install an NVIDIA GPU Driver,
+After you install and configure the toolkit (including [generating a CDI specification](cdi-support.md)) and install an NVIDIA GPU Driver,
you can verify your installation by running a sample workload.
- Run a sample CUDA container:
diff --git a/container-toolkit/supported-platforms.md b/container-toolkit/supported-platforms.md
index af7401240..cc7efabee 100644
--- a/container-toolkit/supported-platforms.md
+++ b/container-toolkit/supported-platforms.md
@@ -22,17 +22,16 @@ Recent NVIDIA Container Toolkit releases are tested and expected to work on thes
| Ubuntu 24.04 | X | | X |
-## Please report issues
+## Report issues
Our qualification-testing procedures are constantly evolving and we might miss
-certain problems. Please
-[report](https://github.com/NVIDIA/nvidia-container-toolkit/issues) issues in
+certain problems. [Report](https://github.com/NVIDIA/nvidia-container-toolkit/issues) issues, in
particular as they occur on a platform listed above.
## Other Linux distributions
-Releases may work on more platforms than indicated in the table above (e.g., on distribution versions older and newer than listed).
+Releases may work on more platforms than indicated in the table above (such as on distribution versions older and newer than listed).
Give things a try and we invite you to [report](https://github.com/NVIDIA/nvidia-container-toolkit/issues) any issue observed even if your Linux distribution is not listed.
----
diff --git a/container-toolkit/troubleshooting.md b/container-toolkit/troubleshooting.md
index c8588ddfd..813e0c80d 100644
--- a/container-toolkit/troubleshooting.md
+++ b/container-toolkit/troubleshooting.md
@@ -124,9 +124,9 @@ Review the SELinux policies on your system.
## Containers losing access to GPUs with error: "Failed to initialize NVML: Unknown Error"
-When using the NVIDIA Container Runtime Hook (i.e. the Docker `--gpus` flag or
+When using the NVIDIA Container Runtime Hook (that is, the Docker `--gpus` flag or
the NVIDIA Container Runtime in `legacy` mode) to inject requested GPUs and driver
-libraries into a container, the hook makes modifications, including setting up cgroup access, to the container without the low-level runtime (e.g. `runc`) being aware of these changes.
+libraries into a container, the hook makes modifications, including setting up cgroup access, to the container without the low-level runtime (such as `runc`) being aware of these changes.
The result is that updates to the container may remove access to the requested GPUs.
When the container loses access to the GPU, you will see the following error message from the console output:
diff --git a/container-toolkit/versions.json b/container-toolkit/versions.json
index aef68cb71..e75f6b450 100644
--- a/container-toolkit/versions.json
+++ b/container-toolkit/versions.json
@@ -1,7 +1,10 @@
{
- "latest": "1.17.8",
+ "latest": "1.18.0",
"versions":
[
+ {
+ "version": "1.18.0"
+ },
{
"version": "1.17.8"
},
diff --git a/container-toolkit/versions1.json b/container-toolkit/versions1.json
index bed065609..11387480a 100644
--- a/container-toolkit/versions1.json
+++ b/container-toolkit/versions1.json
@@ -1,6 +1,10 @@
[
{
"preferred": "true",
+ "url": "../1.18.0",
+ "version": "1.18.0"
+ },
+ {
"url": "../1.17.8",
"version": "1.17.8"
},
diff --git a/repo.toml b/repo.toml
index 68bfeb1fa..c5d87f62b 100644
--- a/repo.toml
+++ b/repo.toml
@@ -101,8 +101,8 @@ project_build_order = [
docs_root = "${root}/container-toolkit"
project = "container-toolkit"
name = "NVIDIA Container Toolkit"
-version = "1.17.8"
-source_substitutions = {version = "1.17.8"}
+version = "1.18.0"
+source_substitutions = {version = "1.18.0"}
copyright_start = 2020
redirects = [
{ path="concepts.html", target="index.html" },