Commit 02bd224
[Model] Support YOLOv8 (#1137)
* add GPL lisence
* add GPL-3.0 lisence
* add GPL-3.0 lisence
* add GPL-3.0 lisence
* support yolov8
* add pybind for yolov8
* add yolov8 readme

Co-authored-by: DefTruth <[email protected]>
1 parent a4b94b2 commit 02bd224

28 files changed: +1448 -80 lines

Diff for: examples/vision/detection/scaledyolov4/README.md (mode 100644 → 100755, +2 -4)

Whitespace-only changes:

@@ -11,7 +11,7 @@ (trailing whitespace trimmed on the blank line between this paragraph and the download commands):

Visit the official [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) github repository, follow the guidelines to download the `scaledyolov4.pt` model, and employ `models/export.py` to get the file in `onnx` format. If you have any problems with the exported `onnx` model, refer to [ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401) for a solution.

@@ -38,8 +38,6 @@ (two blank lines removed between the model table and "## Detailed Deployment Documents"):

| [ScaledYOLOv4-P6+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_.onnx) | 487MB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4), GPL-3.0 License |
| [ScaledYOLOv4-P7](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7.onnx) | 1.1GB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4), GPL-3.0 License |

@@ -48,4 +46,4 @@ (trailing whitespace trimmed on the Release Note line):

- Document and code are based on [ScaledYOLOv4 CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415)

Diff for: examples/vision/detection/yolor/README.md (mode 100644 → 100755, -1)

@@ -36,7 +36,6 @@ (one blank line removed between the model table and "## Detailed Deployment Documents"):

| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor), GPL-3.0 License |
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor), GPL-3.0 License |

Diff for: examples/vision/detection/yolov5/README.md (mode 100644 → 100755, +1 -2)

@@ -6,7 +6,6 @@ (one blank line removed after the notes below, before "## Download Pre-trained ONNX Model"):

- (1) The *.onnx provided by the [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v7.0) can be deployed directly;
- (2) A YOLOv5 v7.0 model trained on personal data should employ `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5) to export the ONNX files for deployment.

@@ -27,4 +26,4 @@ (trailing whitespace trimmed on the Release Note line):

- Document and code are based on [YOLOv5 v7.0](https://github.com/ultralytics/yolov5/tree/v7.0)

Diff for: examples/vision/detection/yolov5lite/README.md (mode 100644 → 100755, +1 -2)

@@ -60,7 +60,6 @@ (one blank line removed between the model table and "## Detailed Deployment Documents"):

| [YOLOv5Lite-c](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-c-sim-512.onnx) | 18MB | 50.9% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |
| [YOLOv5Lite-g](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx) | 21MB | 57.6% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |

@@ -69,4 +68,4 @@ (trailing whitespace trimmed on the Release Note line):

- Document and code are based on [YOLOv5-Lite v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)

Diff for: examples/vision/detection/yolov6/README.md (mode 100644 → 100755, +1 -5)

@@ -8,8 +8,6 @@ (two blank lines removed after the notes below):

- (1) The *.onnx provided by the [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) Personal models trained by developers should be exported as ONNX models. Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

@@ -20,8 +18,6 @@ (two blank lines removed between the model table and "## Detailed Deployment Documents"):

| [YOLOv6t](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6t.onnx) | 58MB | 41.3% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |
| [YOLOv6n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6n.onnx) | 17MB | 35.0% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |

@@ -30,4 +26,4 @@ (trailing whitespace trimmed on the Release Note line):

- Document and code are based on [YOLOv6 0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)

Diff for: examples/vision/detection/yolov7/README.md (mode 100644 → 100755, -2)

@@ -20,8 +20,6 @@ (two blank lines removed around the "## Download the pre-trained ONNX model" heading, after the export commands):

python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt

To facilitate testing for developers, we provide below the models exported by YOLOv7, which developers can download and use directly. (The accuracy of the models in the table is sourced from the official repository.)

Diff for: examples/vision/detection/yolov7end2end_ort/README.md (mode 100644 → 100755, +1 -3)

@@ -31,13 +31,11 @@ (blank lines removed after the model table and before "## Release Note"; trailing whitespace trimmed on the Release Note line):

| [yolov7-d6-end2end-ort-nms](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-d6-end2end-ort-nms.onnx) | 511MB | 56.6% | This model file is sourced from [YOLOv7](https://github.com/WongKinYiu/yolov7), GPL-3.0 License |
| [yolov7-e6e-end2end-ort-nms](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6e-end2end-ort-nms.onnx) | 579MB | 56.8% | This model file is sourced from [YOLOv7](https://github.com/WongKinYiu/yolov7), GPL-3.0 License |

- Document and code are based on [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1)

Diff for: examples/vision/detection/yolov7end2end_trt/README.md (mode 100644 → 100755, +1 -4)

@@ -6,8 +6,6 @@ (two blank lines removed after the notes below):

- (1) The *.pt provided by the [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should go through [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. Deployment of *.trt and *.pose models is not supported.
- (2) A YOLOv7 model trained on personal data should go through [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

@@ -37,7 +35,6 @@ (one blank line removed before "## Release Note"; trailing whitespace trimmed on the Release Note line):

- [Python Deployment](python)
- [C++ Deployment](cpp)

- Document and code are based on [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1)

Diff for: examples/vision/detection/yolov8/README.md (new file, +29)

English | [简体中文](README_CN.md)

# YOLOv8 Ready-to-deploy Model

- The deployment of the YOLOv8 model is based on [YOLOv8](https://github.com/ultralytics/ultralytics) and the [Pre-trained Model Based on COCO](https://github.com/ultralytics/ultralytics)
  - (1) The *.onnx provided by the [Official Repository](https://github.com/ultralytics/ultralytics) can be deployed directly;
  - (2) A YOLOv8 model trained on personal data should employ `export.py` in [YOLOv8](https://github.com/ultralytics/ultralytics) to export the ONNX files for deployment.

## Download Pre-trained ONNX Model

For developers' testing, models exported by YOLOv8 are provided below. Developers can download them directly. (The accuracy in the following table is derived from the official source repository.)

| Model | Size | Accuracy | Note |
|:--- |:--- |:--- |:--- |
| [YOLOv8n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8n.onnx) | 12.1MB | 37.3% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx) | 42.6MB | 44.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8m.onnx) | 98.8MB | 50.2% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8l.onnx) | 166.7MB | 52.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8x.onnx) | 260.3MB | 53.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
- [Serving Deployment](serving)

## Release Note

- Document and code are based on [YOLOv8](https://github.com/ultralytics/ultralytics)

Diff for: examples/vision/detection/yolov8/README_CN.md (new file, +29)

[English](README.md) | 简体中文

# YOLOv8 Ready-to-deploy Model

- The YOLOv8 deployment model comes from [YOLOv8](https://github.com/ultralytics/ultralytics) and the [pre-trained model based on COCO](https://github.com/ultralytics/ultralytics)
  - (1) The *.onnx provided by the [official repository](https://github.com/ultralytics/ultralytics) can be deployed directly;
  - (2) A YOLOv8 model trained on your own data can be deployed after exporting an ONNX file with `export.py` in [YOLOv8](https://github.com/ultralytics/ultralytics).

## Download Pre-trained ONNX Models

For developers' convenience in testing, the YOLOv8 model series exported below are provided for direct download. (The accuracy in the table comes from the official source repository.)

| Model | Size | Accuracy | Note |
|:--- |:--- |:--- |:--- |
| [YOLOv8n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8n.onnx) | 12.1MB | 37.3% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx) | 42.6MB | 44.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8m.onnx) | 98.8MB | 50.2% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8l.onnx) | 166.7MB | 52.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8x.onnx) | 260.3MB | 53.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
- [Serving Deployment](serving)

## Release Note

- Document and code in this release are based on [YOLOv8](https://github.com/ultralytics/ultralytics)

Diff for: examples/vision/detection/yolov8/cpp/CMakeLists.txt (new file, +14)

PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

# Specify the fastdeploy library path after downloading and decompression
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy dependent header files
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Add FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})

Diff for: examples/vision/detection/yolov8/cpp/README_CN.md (new file, +90)

[English](README.md) | 简体中文

# YOLOv8 C++ Deployment Example

This directory provides `infer.cc` to quickly finish deploying YOLOv8 on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment; refer to [FastDeploy Precompiled Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.3 or above (x.x.x >= 1.0.3) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy Precompiled Libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# 1. Download the officially converted YOLOv8 ONNX model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov8s.onnx 000000014439.jpg 2
```

The visualized result after running is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

The above commands only apply to Linux or macOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following to initialize the deployment environment first:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv8 C++ Interface

### YOLOv8 Class

```c++
fastdeploy::vision::detection::YOLOv8(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

Loads and initializes the YOLOv8 model, where `model_file` is the exported ONNX model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path; pass an empty string when the model format is ONNX
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; ONNX by default

#### Predict Function

> ```c++
> YOLOv8::Predict(cv::Mat* im, DetectionResult* result)
> ```
>
> Model prediction interface; takes an input image and directly outputs the detection result.
>
> **Parameters**
>
> > * **im**: Input image; note that it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box; refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult

### Class Member Variables
#### Preprocessing Parameters

Users can modify the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **size**(vector&lt;int&gt;): Target size of the resize during preprocessing, containing two integers representing [width, height]; default [640, 640]
> > * **padding_value**(vector&lt;float&gt;): Padding value used when resizing the image, containing three floats representing the values of the three channels; default [114, 114, 114]
> > * **is_no_pad**(bool): Whether to resize without padding; `is_no_pad=true` means no padding is used; default `is_no_pad=false`
> > * **is_mini_pad**(bool): Sets the width and height after resizing to the values closest to the `size` member variable while keeping the number of padded pixels divisible by the `stride` member variable; default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable; default `stride=32`

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
