diff --git a/projects/BEVFusion/README.md b/projects/BEVFusion/README.md
index 9d5ebd4c52..7c7fb68a4c 100644
--- a/projects/BEVFusion/README.md
+++ b/projects/BEVFusion/README.md
@@ -42,7 +42,7 @@ python projects/BEVFusion/demo/multi_modality_demo.py demo/data/nuscenes/n015-20
 1. You should train the lidar-only detector first:
 
 ```bash
-bash tools/dist_train.py projects/BEVFusion/configs/bevfusion_lidar_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py 8
+bash tools/dist_train.sh projects/BEVFusion/configs/bevfusion_lidar_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py 8
 ```
 
 2. Download the [Swin pre-trained model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/bevfusion/swint-nuimages-pretrained.pth). Given the image pre-trained backbone and the lidar-only pre-trained detector, you could train the lidar-camera fusion model:
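
For context, a minimal sketch of the fusion-stage command that step 2 leads into (the actual command sits outside this hunk). The config filename and the checkpoint placeholder variables here are assumptions for illustration, not taken from the diff; `tools/dist_train.sh` and the `--cfg-options load_from=` option are standard mmdetection3d usage:

```bash
# Hedged sketch: train the lidar-camera fusion model with the corrected
# dist_train.sh launcher. The config filename below is assumed; check
# projects/BEVFusion/configs/ for the exact name. ${LIDAR_PRETRAINED_CHECKPOINT}
# is a placeholder for the lidar-only checkpoint produced in step 1.
bash tools/dist_train.sh \
    projects/BEVFusion/configs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-6e_nus-3d.py 8 \
    --cfg-options load_from=${LIDAR_PRETRAINED_CHECKPOINT}
```

The downloaded Swin checkpoint would additionally be pointed to by the fusion config (or via a further `--cfg-options` override), as described in the README text above.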