**docs/en/build_and_install/huawei_ascend.md** (9 additions, 3 deletions)
## Deployment demo reference

| Model | C++ Example | Python Example |
| :---------- | :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Example](../../../examples/vision/classification/paddleclas/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/classification/paddleclas/python/README.md) |
| PaddleDetection | [Ascend NPU C++ Example](../../../examples/vision/detection/paddledetection/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/paddledetection/python/README.md) |
| PaddleSeg | [Ascend NPU C++ Example](../../../examples/vision/segmentation/paddleseg/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/segmentation/paddleseg/python/README.md) |
| PaddleOCR | [Ascend NPU C++ Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/ocr/PP-OCRv3/python/README.md) |
| YOLOv5 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov5/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov5/python/README.md) |
| YOLOv6 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov6/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov6/python/README.md) |
| YOLOv7 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov7/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov7/python/README.md) |
**examples/vision/detection/paddledetection/cpp/README.md** (10 additions, 6 deletions)
English | [简体中文](README_CN.md)

# PaddleDetection C++ Deployment Example

This directory provides examples in `infer_xxx.cc` that quickly finish the deployment of PaddleDetection models (PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet) on CPU/GPU, and on GPU accelerated by TensorRT.

Before deployment, two steps require confirmation.

ppyoloe is taken as an example for inference deployment:

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; choose the version appropriate
# for your platform from the `FastDeploy Precompiled Library` mentioned above
```

The above commands work on Linux or macOS. For SDK usage on Windows, refer to
[How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md).

## PaddleDetection C++ Interface

### Model Class

Loading and initializing the PaddleDetection PPYOLOE model, where the format of the model…

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Configuration file path, which is the deployment yaml file exported by PaddleDetection
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which applies the default configuration

> **Parameter**
>
> > * **im**: Input image in HWC or BGR format
> > * **result**: Detection result, including the detection box and confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
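To make the shape of such a result concrete, here is a minimal pure-Python sketch of how a caller might filter a detection result by confidence. The `DetectionResult` class below is a hypothetical stand-in (its `boxes`, `scores`, and `label_ids` field names are illustrative assumptions, not taken from this README):

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    # Hypothetical stand-in for a detection result: one [x1, y1, x2, y2] box,
    # one confidence score, and one class id per detected object.
    boxes: list = field(default_factory=list)
    scores: list = field(default_factory=list)
    label_ids: list = field(default_factory=list)

def summarize(result: DetectionResult, conf_threshold: float = 0.5):
    """Keep only detections whose confidence meets the threshold."""
    kept = []
    for box, score, label in zip(result.boxes, result.scores, result.label_ids):
        if score >= conf_threshold:
            kept.append((label, score, box))
    return kept

res = DetectionResult(
    boxes=[[10, 10, 50, 50], [20, 20, 60, 60]],
    scores=[0.9, 0.3],
    label_ids=[0, 1],
)
print(summarize(res))  # only the 0.9-confidence detection survives the 0.5 cutoff
```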
**examples/vision/detection/paddledetection/python/README.md** (8 additions, 4 deletions)
Before deployment, two steps require confirmation.

This directory provides examples in `infer_xxx.py` that quickly finish the deployment of PPYOLOE/PicoDet models on CPU/GPU, and on GPU accelerated by TensorRT. The script is as follows:

```bash
# TensorRT inference on GPU (note: model serialization is somewhat time-consuming
# the first time TensorRT inference runs; please be patient)
```

The visualized result after running is as follows.

The above commands work on Linux or macOS. For SDK usage on Windows, refer to
[How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md).

## YOLOv7 C++ Interface

### YOLOv7 Class

YOLOv7 model loading and initialization, among which model_file is the exported…

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Merely pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which applies the default configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default

> **Parameter**
>
> > * **im**: Input image in HWC or BGR format
> > * **result**: Detection results, including the detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
> > * **conf_threshold**: Filtering threshold of detection box confidence
> > * **nms_iou_threshold**: IoU threshold during NMS processing
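To make the last two parameters concrete, here is a small pure-Python sketch (an illustration of standard greedy NMS, not the library's actual implementation) showing how `conf_threshold` and `nms_iou_threshold` interact:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, conf_threshold=0.25, nms_iou_threshold=0.5):
    """Drop low-confidence boxes, then greedily suppress overlapping ones."""
    candidates = [(s, b) for s, b in zip(scores, boxes) if s >= conf_threshold]
    candidates.sort(key=lambda t: t[0], reverse=True)
    kept = []
    for score, box in candidates:
        # Keep a box only if it overlaps every already-kept box below the threshold.
        if all(iou(box, k) < nms_iou_threshold for _, k in kept):
            kept.append((score, box))
    return kept

# Two heavily overlapping boxes plus one low-confidence box: the 0.2-score box
# is removed by conf_threshold, and the 0.8-score duplicate is suppressed by NMS.
print(nms([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], [0.9, 0.8, 0.2]))
```

Raising `nms_iou_threshold` keeps more overlapping boxes; raising `conf_threshold` discards more low-confidence detections before NMS runs.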