
Commit 258acf4

Updating versions for v25.02.00

1 parent c10f0ba, commit 258acf4

File tree

36 files changed: +76 -76 lines

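A bump like this is mechanical: every Morpheus version string moves from `24.10` to `25.02` across submodule branches, CMake project versions, conda pins, container tags, and Helm chart references. One way to confirm nothing was missed (a hypothetical spot check, not part of this commit) is to grep the tree for the old version:

```bash
# Hypothetical spot check: list any remaining "24.10" references.
# RAPIDS pins such as libcudf=24.10 and pylibcudf=24.10 in dependencies.yaml
# are expected to remain; they track the RAPIDS release, not Morpheus.
git grep -n "24\.10"
```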

.gitmodules (+2 -2)

@@ -1,8 +1,8 @@
 [submodule "external/morpheus-visualizations"]
     path = external/morpheus-visualizations
     url = https://github.com/nv-morpheus/morpheus-visualizations.git
-    branch = branch-24.10
+    branch = branch-25.02
 [submodule "external/utilities"]
     path = external/utilities
     url = https://github.com/nv-morpheus/utilities.git
-    branch = branch-24.10
+    branch = branch-25.02
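Changing `branch` in `.gitmodules` does not move existing checkouts by itself; a sketch of the usual follow-up, assuming default remotes:

```bash
# Sync submodule URL config, then move each submodule to the tip of the
# branch now configured in .gitmodules (branch-25.02).
git submodule sync --recursive
git submodule update --init --remote --recursive
```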

CMakeLists.txt (+1 -1)

@@ -99,7 +99,7 @@ morpheus_utils_initialize_cuda_arch(morpheus)
 # Note intentionally excluding CUDA from the LANGUAGES list allowing us to set some clang specific settings later when
 # we call morpheus_utils_enable_cuda()
 project(morpheus
-  VERSION 24.10.00
+  VERSION 25.02.00
   LANGUAGES C CXX
 )

conda/environments/all_cuda-125_arch-x86_64.yaml (+1 -1)

@@ -67,7 +67,7 @@ dependencies:
   - libwebp=1.3.2
   - libzlib >=1.3.1,<2
   - mlflow
-  - mrc=24.10
+  - mrc=25.02
   - myst-parser=0.18.1
   - nbsphinx
   - networkx=2.8.8

conda/environments/dev_cuda-125_arch-x86_64.yaml (+1 -1)

@@ -57,7 +57,7 @@ dependencies:
   - libwebp=1.3.2
   - libzlib >=1.3.1,<2
   - mlflow
-  - mrc=24.10
+  - mrc=25.02
   - myst-parser=0.18.1
   - nbsphinx
   - networkx=2.8.8

conda/environments/examples_cuda-125_arch-x86_64.yaml (+1 -1)

@@ -30,7 +30,7 @@ dependencies:
   - kfp
   - libwebp=1.3.2
   - mlflow
-  - mrc=24.10
+  - mrc=25.02
   - networkx=2.8.8
   - newspaper3k=0.2
   - nodejs=18.*

conda/environments/runtime_cuda-125_arch-x86_64.yaml (+1 -1)

@@ -27,7 +27,7 @@ dependencies:
   - grpcio-status
   - libwebp=1.3.2
   - mlflow
-  - mrc=24.10
+  - mrc=25.02
   - networkx=2.8.8
   - numpydoc=1.5
   - pip

dependencies.yaml (+2 -2)

@@ -284,7 +284,7 @@ dependencies:
   - libcudf=24.10
   - librdkafka>=1.9.2,<1.10.0a0
   - libzlib >=1.3.1,<2
-  - mrc=24.10
+  - mrc=25.02
   - nlohmann_json=3.11
   - pybind11-stubgen=0.10.5
   - pylibcudf=24.10

@@ -364,7 +364,7 @@ dependencies:
   - grpcio-status
   # - libwebp=1.3.2 # Required for CVE mitigation: https://nvd.nist.gov/vuln/detail/CVE-2023-4863 ##
   - mlflow #>=2.10.0,<3
-  - mrc=24.10
+  - mrc=25.02
   - networkx=2.8.8
   - numpydoc=1.5
   - pydantic

docs/source/basics/building_a_pipeline.md (+1 -1)

@@ -207,7 +207,7 @@ This example shows an NLP Pipeline which uses several stages available in Morphe
 #### Launching Triton
 Run the following to launch Triton and load the `sid-minibert` model:
 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
 ```

 #### Launching Kafka

docs/source/cloud_deployment_guide.md (+3 -3)

@@ -104,7 +104,7 @@ The Helm chart (`morpheus-ai-engine`) that offers the auxiliary components requi
 Follow the below steps to install Morpheus AI Engine:

 ```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-24.10.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
 ```
 ```bash
 helm install --set ngc.apiKey="$API_KEY" \

@@ -146,7 +146,7 @@ replicaset.apps/zookeeper-87f9f4dd 1 1 1 54s
 Run the following command to pull the Morpheus SDK Client (referred to as Helm chart `morpheus-sdk-client`) on to your instance:

 ```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-24.10.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-sdk-client-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
 ```

 #### Morpheus SDK Client in Sleep Mode

@@ -184,7 +184,7 @@ kubectl -n $NAMESPACE exec sdk-cli-helper -- cp -RL /workspace/models /common
 The Morpheus MLflow Helm chart offers MLflow server with Triton plugin to deploy, update, and remove models from the Morpheus AI Engine. The MLflow server UI can be accessed using NodePort `30500`. Follow the below steps to install the Morpheus MLflow:

 ```bash
-helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-24.10.tgz --username='$oauthtoken' --password=$API_KEY --untar
+helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-mlflow-25.02.tgz --username='$oauthtoken' --password=$API_KEY --untar
 ```
 ```bash
 helm install --set ngc.apiKey="$API_KEY" \
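All three charts move to `25.02` together; fetching the matching trio for a release could be scripted as a loop (a hypothetical convenience, using the same `helm fetch` form as the guide):

```bash
# Fetch the matching 25.02 Helm charts in one pass.
for chart in morpheus-ai-engine morpheus-sdk-client morpheus-mlflow; do
  helm fetch "https://helm.ngc.nvidia.com/nvidia/morpheus/charts/${chart}-25.02.tgz" \
    --username='$oauthtoken' --password=$API_KEY --untar
done
```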

docs/source/developer_guide/guides/2_real_world_phishing.md (+1 -1)

@@ -235,7 +235,7 @@ We will launch a Triton Docker container with:

 ```shell
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
   tritonserver --model-repository=/models/triton-model-repo \
   --exit-on-error=false \
   --log-info=true \

docs/source/examples.md (+1 -1)

@@ -40,7 +40,7 @@ Morpheus supports multiple environments, each environment is intended to support

 In addition to this many of the examples utilize the Morpheus Triton Models container which can be obtained by running the following command:
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 The following are the supported environments:

docs/source/getting_started.md (+10 -10)

@@ -41,26 +41,26 @@ More advanced users, or those who are interested in using the latest pre-release
 ### Pull the Morpheus Image
 1. Go to [https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags)
 1. Choose a version
-1. Download the selected version, for example for `24.10`:
+1. Download the selected version, for example for `25.02`:
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus:24.10-runtime
+docker pull nvcr.io/nvidia/morpheus/morpheus:25.02-runtime
 ```
 1. Optional, many of the examples require NVIDIA Triton Inference Server to be running with the included models. To download the Morpheus Triton Server Models container (ensure that the version number matches that of the Morpheus container you downloaded in the previous step):
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 > **Note about Morpheus versions:**
 >
-> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime` this tag will always refer to the latest point release for that version. In addition to this there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime` this will be the initial point release for that version (ex. `v24.10.00-runtime`). In the event of a major bug, we may release additional point releases (ex. `v24.10.01-runtime`, `v24.10.02-runtime` etc...), and the `YY.MM-runtime` tag will be updated to reference that point release.
+> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime` this tag will always refer to the latest point release for that version. In addition to this there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime` this will be the initial point release for that version (ex. `v25.02.00-runtime`). In the event of a major bug, we may release additional point releases (ex. `v25.02.01-runtime`, `v25.02.02-runtime` etc...), and the `YY.MM-runtime` tag will be updated to reference that point release.
 >
 > Users who want to ensure they are running with the latest bug fixes should use a release image tag (`YY.MM-runtime`). Users who need to deploy a specific version into production should use a point release image tag (`vYY.MM.00-runtime`).

 ### Starting the Morpheus Container
 1. Ensure that [The NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation) is installed.
 1. Start the container downloaded from the previous section:
 ```bash
-docker run --rm -ti --runtime=nvidia --gpus=all --net=host -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/morpheus/morpheus:24.10-runtime bash
+docker run --rm -ti --runtime=nvidia --gpus=all --net=host -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/morpheus/morpheus:25.02-runtime bash
 ```

 Note about some of the flags above:

@@ -140,17 +140,17 @@ To run the built "release" container, use the following:
 ./docker/run_container_release.sh
 ```

-The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v24.10.00` use the following:
+The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v25.02.00` use the following:

 ```bash
-DOCKER_IMAGE_TAG="v24.10.00-runtime" ./docker/run_container_release.sh
+DOCKER_IMAGE_TAG="v25.02.00-runtime" ./docker/run_container_release.sh
 ```

 ## Acquiring the Morpheus Models Container

 Many of the validation tests and example workflows require a Triton server to function. For simplicity Morpheus provides a pre-built models container which contains both Triton and the Morpheus models. Users using a release version of Morpheus can download the corresponding Triton models container from NGC with the following command:
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 Users working with an unreleased development version of Morpheus can build the Triton models container from the Morpheus repository. To build the Triton models container, from the root of the Morpheus repository run the following command:

@@ -163,7 +163,7 @@ models/docker/build_container.sh
 In a new terminal use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
 ```bash
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
   tritonserver --model-repository=/models/triton-model-repo \
   --exit-on-error=false \
   --log-info=true \

@@ -176,7 +176,7 @@ This will launch Triton using the default network ports (8000 for HTTP, 8001 for
 Note: The above command is useful for testing out Morpheus, however it does load several models into GPU memory, which at time of writing consumes roughly 2GB of GPU memory. Production users should consider only loading the specific models they plan on using with the `--model-control-mode=explicit` and `--load-model` flags. For example to launch Triton only loading the `abp-nvsmi-xgb` model:
 ```bash
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
   tritonserver --model-repository=/models/triton-model-repo \
   --exit-on-error=false \
   --log-info=true \
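The CalVer note in this file distinguishes floating and pinned tags; with this release the two styles look like this (illustrative pulls only):

```bash
# Floating tag: tracks the latest 25.02 point release as fixes land.
docker pull nvcr.io/nvidia/morpheus/morpheus:25.02-runtime

# Pinned tag: fixed at the initial 25.02 point release, for
# deployments that need a reproducible image.
docker pull nvcr.io/nvidia/morpheus/morpheus:v25.02.00-runtime
```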

examples/abp_nvsmi_detection/README.md (+2 -2)

@@ -89,12 +89,12 @@ This example utilizes the Triton Inference Server to perform inference.

 Pull the Docker image for Triton:
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 Run the following to launch Triton and load the `abp-nvsmi-xgb` XGBoost model:
 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-nvsmi-xgb
 ```

 This will launch Triton and only load the `abp-nvsmi-xgb` model. This model has been configured with a max batch size of 32768, and to use dynamic batching for increased performance.

examples/abp_pcap_detection/README.md (+2 -2)

@@ -30,13 +30,13 @@ To run this example, an instance of Triton Inference Server and a sample dataset

 ### Triton Inference Server
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 ##### Deploy Triton Inference Server
 Run the following to launch Triton and load the `abp-pcap-xgb` model:
 ```bash
-docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 --name tritonserver nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-pcap-xgb
+docker run --rm --gpus=all -p 8000:8000 -p 8001:8001 -p 8002:8002 --name tritonserver nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model abp-pcap-xgb
 ```

 ##### Verify Model Deployment

examples/developer_guide/3_simple_cpp_stage/CMakeLists.txt (+1 -1)

@@ -25,7 +25,7 @@ mark_as_advanced(MORPHEUS_CACHE_DIR)
 list(PREPEND CMAKE_PREFIX_PATH "$ENV{CONDA_PREFIX}")

 project(3_simple_cpp_stage
-  VERSION 24.10.00
+  VERSION 25.02.00
   LANGUAGES C CXX
 )

examples/developer_guide/4_rabbitmq_cpp_stage/CMakeLists.txt (+1 -1)

@@ -26,7 +26,7 @@ list(PREPEND CMAKE_PREFIX_PATH "$ENV{CONDA_PREFIX}")
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")

 project(4_rabbitmq_cpp_stage
-  VERSION 24.10.00
+  VERSION 25.02.00
   LANGUAGES C CXX
 )

examples/digital_fingerprinting/production/Dockerfile (+1 -1)

@@ -14,7 +14,7 @@
 # limitations under the License.

 ARG MORPHEUS_CONTAINER=nvcr.io/nvidia/morpheus/morpheus
-ARG MORPHEUS_CONTAINER_VERSION=v24.10.00-runtime
+ARG MORPHEUS_CONTAINER_VERSION=v25.02.00-runtime

 FROM ${MORPHEUS_CONTAINER}:${MORPHEUS_CONTAINER_VERSION} as base
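Because the version is an `ARG` with a default, a build can still target a different Morpheus release without editing the Dockerfile; a hypothetical invocation (the `dfp_morpheus` tag is taken from the accompanying compose file):

```bash
# Override the default v25.02.00-runtime base image at build time.
docker build \
  --build-arg MORPHEUS_CONTAINER_VERSION=25.02-runtime \
  -t dfp_morpheus .
```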

examples/digital_fingerprinting/production/docker-compose.yml (+2 -2)

@@ -74,7 +74,7 @@ services:
       target: jupyter
       args:
         - MORPHEUS_CONTAINER=${MORPHEUS_CONTAINER:-nvcr.io/nvidia/morpheus/morpheus}
-        - MORPHEUS_CONTAINER_VERSION=${MORPHEUS_CONTAINER_VERSION:-v24.10.00-runtime}
+        - MORPHEUS_CONTAINER_VERSION=${MORPHEUS_CONTAINER_VERSION:-v25.02.00-runtime}
     deploy:
       resources:
         reservations:

@@ -106,7 +106,7 @@ services:
       target: runtime
       args:
        - MORPHEUS_CONTAINER=${MORPHEUS_CONTAINER:-nvcr.io/nvidia/morpheus/morpheus}
-       - MORPHEUS_CONTAINER_VERSION=${MORPHEUS_CONTAINER_VERSION:-v24.10.00-runtime}
+       - MORPHEUS_CONTAINER_VERSION=${MORPHEUS_CONTAINER_VERSION:-v25.02.00-runtime}
     image: dfp_morpheus
     container_name: morpheus_pipeline
     deploy:
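The compose file only changes the fallback in `${MORPHEUS_CONTAINER_VERSION:-v25.02.00-runtime}`, so an exported environment variable still takes precedence; a sketch, run from the compose file's directory:

```bash
# Build the services against the floating 25.02 tag instead of the
# pinned v25.02.00-runtime default.
MORPHEUS_CONTAINER_VERSION=25.02-runtime docker compose build
```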

examples/doca/vdb_realtime/README.md (+1 -1)

@@ -49,7 +49,7 @@ To serve the embedding model, we will use Triton:
 cd ${MORPHEUS_ROOT}

 # Launch Triton
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
 ```

 ## Populate the Milvus database

examples/llm/vdb_upload/README.md (+3 -3)

@@ -138,12 +138,12 @@ To retrieve datasets from LFS run the following:

 - Pull the Docker image for Triton:
 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 - Run the following to launch Triton and load the `all-MiniLM-L6-v2` model:
 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model all-MiniLM-L6-v2
 ```

 This will launch Triton and only load the `all-MiniLM-L6-v2` model. Once Triton has loaded the model, the following

@@ -277,7 +277,7 @@ using `sentence-transformers/paraphrase-multilingual-mpnet-base-v2` as an exampl
 - Reload the docker container, specifying that we also need to load paraphrase-multilingual-mpnet-base-v2
 ```bash
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver \
   --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model \
   all-MiniLM-L6-v2 --load-model sentence-transformers/paraphrase-multilingual-mpnet-base-v2
 ```

examples/log_parsing/README.md (+2 -2)

@@ -34,14 +34,14 @@ Pull the Morpheus Triton models Docker image from NGC.
 Example:

 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 ##### Start Triton Inference Server Container
 From the Morpheus repo root directory, run the following to launch Triton and load the `log-parsing-onnx` model:

 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model log-parsing-onnx
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model log-parsing-onnx
 ```

 ##### Verify Model Deployment

examples/nlp_si_detection/README.md (+1 -1)

@@ -85,7 +85,7 @@ This example utilizes the Triton Inference Server to perform inference. The neur
 From the Morpheus repo root directory, run the following to launch Triton and load the `sid-minibert` model:

 ```bash
-docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
+docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 tritonserver --model-repository=/models/triton-model-repo --exit-on-error=false --model-control-mode=explicit --load-model sid-minibert-onnx
 ```

 This will launch Triton and only load the `sid-minibert-onnx` model. This model has been configured with a max batch size of 32, and to use dynamic batching for increased performance.

examples/ransomware_detection/README.md (+2 -2)

@@ -35,15 +35,15 @@ Pull Docker image from NGC (https://ngc.nvidia.com/catalog/containers/nvidia:tri
 Example:

 ```bash
-docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
+docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02
 ```

 ##### Start Triton Inference Server Container
 From the Morpheus repo root directory, run the following to launch Triton and load the `ransomw-model-short-rf` model:
 ```bash
 # Run Triton in explicit mode
 docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
-  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
+  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:25.02 \
   tritonserver --model-repository=/models/triton-model-repo \
   --exit-on-error=false \
   --model-control-mode=explicit \
