
Commit 769bd20

KCFindstr authored and copybara-github committed

Remove model_garden_source_model_name from Movinet VCN / VAR notebooks

PiperOrigin-RevId: 741214360

1 parent 0fda02c

File tree: 2 files changed, +4 −6 lines

notebooks/community/model_garden/model_garden_movinet_action_recognition.ipynb (−1 line)

@@ -766,7 +766,6 @@
     " serving_container_predict_route=\"/predict\",\n",
     " serving_container_health_route=\"/ping\",\n",
     " serving_container_environment_variables=serving_env,\n",
-    " model_garden_source_model_name=\"publishers/google/models/tfvision-movinet-var\",\n",
     ")\n",
     "\n",
     "models[\"model_var\"].wait()\n",
notebooks/community/model_garden/model_garden_movinet_clip_classification.ipynb (+4 −5 lines)

@@ -384,9 +384,9 @@
     "2. Create hyperparameter tuning jobs to train new models\n",
     "3. Find and export best models\n",
     "\n",
-    "If you already trained models, please go to the section `Test Trained models`.\n",
+    "If you already trained models, go to the section `Test Trained models`.\n",
     "\n",
-    "Please select a model:\n",
+    "Select a model:\n",
     "* `model_id`: MoViNet model variant ID, one of `a0`, `a1`, `a2`, `a3`, `a4`, `a5`. The model with a larger number requires more resources to train, and is expected to have a higher accuracy and latency. Here, we use `a0` for demonstration purpose.\n",
     "* `model_mode`: MoViNet model type, either `base` or `stream`. The base model has a slightly higher accuracy, while the streaming model is optimized for streaming and faster CPU inference. See [official MoViNet docs](https://github.com/tensorflow/models/tree/master/official/projects/movinet) for more information.\n",
     "\n",

@@ -552,7 +552,7 @@
     "config_file = f\"https://raw.githubusercontent.com/tensorflow/models/master/official/projects/movinet/configs/yaml/movinet_{config_file}_gpu.yaml\"\n",
     "config_file = upload_config_to_gcs(config_file)\n",
     "\n",
-    "# The parameters here are mainly for demonstration purpose. Please update them\n",
+    "# The parameters here are mainly for demonstration purpose. Update them\n",
     "# for better performance.\n",
     "trainer_args = {\n",
     " \"experiment\": \"movinet_kinetics600\",\n",

@@ -770,7 +770,6 @@
     " serving_container_predict_route=\"/predict\",\n",
     " serving_container_health_route=\"/ping\",\n",
     " serving_container_environment_variables=serving_env,\n",
-    " model_garden_source_model_name=\"publishers/google/models/tfvision-movinet-vcn\"\n",
     ")\n",
     "\n",
     "model.wait()\n",

@@ -808,7 +807,7 @@
     "\n",
     "We will now run batch predictions with the trained MoViNet clip classification model with [Vertex AI Batch Prediction](https://cloud.google.com/vertex-ai/docs/predictions/get-batch-predictions).\n",
     "\n",
-    "Please prepare an input JSONL file where each line follows [this format](https://cloud.google.com/vertex-ai/docs/video-data/classification/get-predictions?hl=en#input_data_requirements) and store it in a Cloud Storage bucket. The service account should have read access to the buckets containing the trained model and the input data. See [Service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) for more information."
+    "Prepare an input JSONL file where each line follows [this format](https://cloud.google.com/vertex-ai/docs/video-data/classification/get-predictions?hl=en#input_data_requirements) and store it in a Cloud Storage bucket. The service account should have read access to the buckets containing the trained model and the input data. See [Service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) for more information."
     ]
     },
     {
