384 | 384 | "2. Create hyperparameter tuning jobs to train new models\n",
385 | 385 | "3. Find and export best models\n",
386 | 386 | "\n",
387 |     | - "If you already trained models, please go to the section `Test Trained models`.\n",
    | 387 | + "If you already trained models, go to the section `Test Trained models`.\n",
388 | 388 | "\n",
389 |     | - "Please select a model:\n",
    | 389 | + "Select a model:\n",
390 | 390 | "* `model_id`: MoViNet model variant ID, one of `a0`, `a1`, `a2`, `a3`, `a4`, `a5`. The model with a larger number requires more resources to train, and is expected to have a higher accuracy and latency. Here, we use `a0` for demonstration purpose.\n",
391 | 391 | "* `model_mode`: MoViNet model type, either `base` or `stream`. The base model has a slightly higher accuracy, while the streaming model is optimized for streaming and faster CPU inference. See [official MoViNet docs](https://github.com/tensorflow/models/tree/master/official/projects/movinet) for more information.\n",
392 | 392 | "\n",

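The model-selection step described above can be sketched as follows. `model_id` and `model_mode` are the notebook's own parameters; the validation checks here are illustrative additions, not from the notebook.

```python
# Illustrative sketch of the model-selection step described above.
# `model_id` and `model_mode` come from the notebook; the validation
# checks are an assumption added for clarity.
VALID_MODEL_IDS = ("a0", "a1", "a2", "a3", "a4", "a5")
VALID_MODEL_MODES = ("base", "stream")

model_id = "a0"      # larger variants (a1..a5): higher accuracy, higher latency
model_mode = "base"  # "stream" is optimized for streaming / faster CPU inference

assert model_id in VALID_MODEL_IDS, f"unknown model_id: {model_id}"
assert model_mode in VALID_MODEL_MODES, f"unknown model_mode: {model_mode}"
```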
552 | 552 | "config_file = f\"https://raw.githubusercontent.com/tensorflow/models/master/official/projects/movinet/configs/yaml/movinet_{config_file}_gpu.yaml\"\n",
553 | 553 | "config_file = upload_config_to_gcs(config_file)\n",
554 | 554 | "\n",
555 |     | - "# The parameters here are mainly for demonstration purpose. Please update them\n",
    | 555 | + "# The parameters here are mainly for demonstration purpose. Update them\n",
556 | 556 | "# for better performance.\n",
557 | 557 | "trainer_args = {\n",
558 | 558 | " \"experiment\": \"movinet_kinetics600\",\n",

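For context on the `trainer_args` dictionary started in the cell above: custom training containers commonly receive such parameters flattened into `--key=value` flags. A hypothetical sketch of that conversion; only `"experiment"` appears in the notebook, and the second key and the `to_flags` helper are placeholders, not the notebook's actual mechanism.

```python
# Hypothetical sketch: flatten a trainer-args dict into "--key=value"
# command-line flags. Only "experiment" is from the notebook; the other
# key and this helper are illustrative placeholders.
trainer_args = {
    "experiment": "movinet_kinetics600",
    "global_batch_size": 8,  # placeholder; tune for your accelerator
}

def to_flags(args):
    """Convert {'key': value} pairs into '--key=value' flag strings."""
    return [f"--{key}={value}" for key, value in args.items()]

flags = to_flags(trainer_args)
```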
770 | 770 | " serving_container_predict_route=\"/predict\",\n",
771 | 771 | " serving_container_health_route=\"/ping\",\n",
772 | 772 | " serving_container_environment_variables=serving_env,\n",
773 |     | - " model_garden_source_model_name=\"publishers/google/models/tfvision-movinet-vcn\"\n",
774 | 773 | ")\n",
775 | 774 | "\n",
776 | 775 | "model.wait()\n",

808 | 807 | "\n",
809 | 808 | "We will now run batch predictions with the trained MoViNet clip classification model with [Vertex AI Batch Prediction](https://cloud.google.com/vertex-ai/docs/predictions/get-batch-predictions).\n",
810 | 809 | "\n",
811 |     | - "Please prepare an input JSONL file where each line follows [this format](https://cloud.google.com/vertex-ai/docs/video-data/classification/get-predictions?hl=en#input_data_requirements) and store it in a Cloud Storage bucket. The service account should have read access to the buckets containing the trained model and the input data. See [Service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) for more information."
    | 810 | + "Prepare an input JSONL file where each line follows [this format](https://cloud.google.com/vertex-ai/docs/video-data/classification/get-predictions?hl=en#input_data_requirements) and store it in a Cloud Storage bucket. The service account should have read access to the buckets containing the trained model and the input data. See [Service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) for more information."
812 | 811 | ]
813 | 812 | },
814 | 813 | {
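As a hedged illustration of the batch-prediction input described above, one line of the JSONL file might look like the following. The bucket path, file name, and time-segment values are placeholders; the linked input-data requirements page is the authoritative schema.

```python
import json

# Placeholder example of one JSONL input line for video classification
# batch prediction. The gs:// path and field values are illustrative;
# consult the linked input-data requirements for the exact schema.
line = {
    "content": "gs://your-bucket/videos/example.mp4",
    "mimeType": "video/mp4",
    "timeSegmentStart": "0.0s",
    "timeSegmentEnd": "5.0s",
}
jsonl_line = json.dumps(line)
```

Each video (or video segment) to classify gets its own line in the file, which is then written to the Cloud Storage bucket the service account can read.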