# FCOS: Fully Convolutional One-Stage Object Detection

This project hosts the code for implementing the FCOS algorithm for object detection, as presented in our paper:

    FCOS: Fully Convolutional One-Stage Object Detection;
    Zhi Tian, Chunhua Shen, Hao Chen, and Tong He;
    arXiv preprint arXiv:1904.01355 (2019).

The full paper is available at: [https://arxiv.org/abs/1904.01355](https://arxiv.org/abs/1904.01355).

## Highlights
- **Totally anchor-free:** FCOS completely avoids the complicated computation related to anchor boxes, along with all of the anchor-box hyper-parameters.
- **Memory-efficient:** FCOS has about a 2x smaller training memory footprint than its anchor-based counterpart RetinaNet.
- **Better performance:** This very simple detector achieves better performance (37.1 vs. 36.8 AP) than Faster R-CNN.
- **Faster training and inference:** On the same hardware, FCOS also requires fewer training hours (6.5h vs. 8.8h) and has faster inference (71ms vs. 126ms per image) than Faster R-CNN.
- **State-of-the-art performance:** Without bells and whistles, FCOS achieves state-of-the-art performance.
It achieves **41.5%** (ResNet-101-FPN) and **43.2%** (ResNeXt-64x4d-101) AP on COCO test-dev.
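
The anchor-free idea above can be sketched in a few lines (an illustration of the paper's formulation, not the repository's code): for each feature-map location that falls inside a ground-truth box, FCOS regresses the distances to the four box sides and predicts a "center-ness" score that down-weights locations far from the box centre.

```python
import math

def fcos_targets(x, y, box):
    """Regression targets at location (x, y): distances (l, t, r, b)
    to the left, top, right and bottom sides of the ground-truth box."""
    x0, y0, x1, y1 = box
    return x - x0, y - y0, x1 - x, y1 - y

def centerness(l, t, r, b):
    """Center-ness from the paper:
    sqrt(min(l, r) / max(l, r) * min(t, b) / max(t, b))."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

# The exact centre of a box gets center-ness 1.0; off-centre
# locations get progressively smaller scores.
print(fcos_targets(50, 50, (0, 0, 100, 100)))  # (50, 50, 50, 50)
print(centerness(*fcos_targets(50, 50, (0, 0, 100, 100))))  # 1.0
print(round(centerness(*fcos_targets(25, 50, (0, 0, 100, 100))), 3))  # 0.577
```

Since no anchor boxes are involved, there are no anchor hyper-parameters (scales, aspect ratios, IoU thresholds) to tune, which is what the first highlight refers to.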

## Updates
### 17 May 2019
 - FCOS has been implemented in [mmdetection](https://github.com/open-mmlab/mmdetection). Many thanks to [@yhcao6](https://github.com/yhcao6) and [@hellock](https://github.com/hellock).

## Required hardware
We use 8 Nvidia V100 GPUs. \
However, since FCOS is memory-efficient, 4 1080Ti GPUs are also enough to train a fully-fledged ResNet-50-FPN based FCOS.

## Installation

This FCOS implementation is based on [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark). Therefore the installation is the same as for the original maskrcnn-benchmark.

Please check [INSTALL.md](INSTALL.md) for installation instructions.
You may also want to see the original [README.md](MASKRCNN_README.md) of maskrcnn-benchmark.

## A quick demo
Once the installation is done, you can follow the steps below to run a quick demo.

    # assume that you are under the root directory of this project,
    # and that you have activated your virtual environment if needed.
    wget https://cloudstor.aarnet.edu.au/plus/s/dDeDPBLEAt19Xrl/download -O FCOS_R_50_FPN_1x.pth
    python demo/fcos_demo.py

## Inference
The inference command line for the COCO minival split:

    python tools/test_net.py \
        --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
        MODEL.WEIGHT models/FCOS_R_50_FPN_1x.pth \
        TEST.IMS_PER_BATCH 4

Please note that:
1) If your model's name is different, please replace `models/FCOS_R_50_FPN_1x.pth` with your own.
2) If you encounter an out-of-memory error, please try to reduce `TEST.IMS_PER_BATCH` to 1.
3) If you want to evaluate a different model, please change `--config-file` to its config file (in [configs/fcos](configs/fcos)) and `MODEL.WEIGHT` to its weights file.
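
For example, note 3) applied to the ResNet-101 model from the table below might look like the following sketch (the exact config and weight file names are assumptions based on the naming scheme above; adjust them to your checkout):

```shell
# Hypothetical example: evaluate FCOS_R_101_FPN_2x instead of the default
# model, with the batch size reduced to 1 to avoid out-of-memory errors.
python tools/test_net.py \
    --config-file configs/fcos/fcos_R_101_FPN_2x.yaml \
    MODEL.WEIGHT models/FCOS_R_101_FPN_2x.pth \
    TEST.IMS_PER_BATCH 1
```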

For your convenience, we provide the following trained models (more models are coming soon).

Model | Total training mem (GB) | Multi-scale training | Testing time / im | AP (minival) | AP (test-dev) | Link
--- |:---:|:---:|:---:|:---:|:--:|:---:
FCOS_R_50_FPN_1x | 29.3 | No | 71ms | 37.1 | 37.4 | [download](https://cloudstor.aarnet.edu.au/plus/s/dDeDPBLEAt19Xrl/download)
FCOS_R_101_FPN_2x | 44.1 | Yes | 74ms | 41.4 | 41.5 | [download](https://cloudstor.aarnet.edu.au/plus/s/vjL3L0AW7vnhRTo/download)
FCOS_X_101_32x8d_FPN_2x | 72.9 | Yes | 122ms | 42.5 | 42.7 | [download](https://cloudstor.aarnet.edu.au/plus/s/U5myBfGF7MviZ97/download)
FCOS_X_101_64x4d_FPN_2x | 77.7 | Yes | 140ms | 43.0 | 43.2 | [download](https://cloudstor.aarnet.edu.au/plus/s/wpwoCi4S8iajFi9/download)
| 64 | + |
| 65 | +[1] *1x and 2x mean the model is trained for 90K and 180K iterations, respectively.* \ |
| 66 | +[2] *We report total training memory footprint on all GPUs instead of the memory footprint per GPU as in maskrcnn-benchmark*. \ |
| 67 | +[3] *All results are obtained with a single model and without any test time data augmentation such as multi-scale, flipping and etc..* \ |
| 68 | +[4] *Our results have been improved since our initial release. If you want to check out our original results, please checkout commit [f4fd589](https://github.com/tianzhi0549/FCOS/tree/f4fd58966f45e64608c00b072c801de7f86b4f3a)*. |
| 69 | + |
| 70 | +## Training |
| 71 | + |
| 72 | +The following command line will train FCOS_R_50_FPN_1x on 8 GPUs with Synchronous Stochastic Gradient Descent (SGD): |
| 73 | + |
| 74 | + python -m torch.distributed.launch \ |
| 75 | + --nproc_per_node=8 \ |
| 76 | + --master_port=$((RANDOM + 10000)) \ |
| 77 | + tools/train_net.py \ |
| 78 | + --skip-test \ |
| 79 | + --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \ |
| 80 | + DATALOADER.NUM_WORKERS 2 \ |
| 81 | + OUTPUT_DIR training_dir/fcos_R_50_FPN_1x |
| 82 | + |
Note that:
1) If you want to use fewer GPUs, please change `--nproc_per_node` to the number of GPUs. No other settings need to be changed. The total batch size does not depend on `nproc_per_node`. If you want to change the total batch size, please change `SOLVER.IMS_PER_BATCH` in [configs/fcos/fcos_R_50_FPN_1x.yaml](configs/fcos/fcos_R_50_FPN_1x.yaml).
2) The models will be saved into `OUTPUT_DIR`.
3) If you want to train FCOS with other backbones, please change `--config-file`.
4) The link to the ImageNet-pretrained X-101-64x4d model in the code is invalid. Please download the model [here](https://cloudstor.aarnet.edu.au/plus/s/k3ys35075jmU1RP/download).
5) If you want to train FCOS on your own dataset, please follow the instructions in [#54](https://github.com/tianzhi0549/FCOS/issues/54#issuecomment-497558687).
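
Putting note 1) together with the command above, a 4-GPU run might look like the following sketch. Per note 1), only `--nproc_per_node` changes; the total batch size is controlled by `SOLVER.IMS_PER_BATCH` in the yaml file, not by the number of processes:

```shell
# Hypothetical 4-GPU training run: --nproc_per_node is reduced from 8 to 4
# and everything else is left at its default.
python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --master_port=$((RANDOM + 10000)) \
    tools/train_net.py \
    --skip-test \
    --config-file configs/fcos/fcos_R_50_FPN_1x.yaml \
    DATALOADER.NUM_WORKERS 2 \
    OUTPUT_DIR training_dir/fcos_R_50_FPN_1x
```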

## Contributing to the project

Any pull requests or issues are welcome.

## Citations
Please consider citing our paper in your publications if this project helps your research. The BibTeX entry is as follows:
```
@article{tian2019fcos,
  title = {{FCOS}: Fully Convolutional One-Stage Object Detection},
  author = {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  journal = {arXiv preprint arXiv:1904.01355},
  year = {2019}
}
```

## License

For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact the authors.