Commit d2b302a (1 parent: b521338)

Update FER quantized model and fix document (#129)

* update document and fix bugs
* re-quantized model to per_tensor mode
* update KV3-NPU benchmark result

File tree: 5 files changed (+19 −7 lines)

README.md (+2 −2)

@@ -19,7 +19,7 @@
 | ------------------------------------------------------- | ----------------------------- | ---------- | -------------- | ------------ | --------------- | ------------ | ----------- |
 | [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 6.22 | 12.18 | 4.04 | 86.69 |
 | [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 99.20 | 24.88 | 46.25 | --- |
-| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 4.43 | 49.86 | 31.07 | 108.53\* | --- |
+| [FER](./models/facial_expression_recognition/) | Facial Expression Recognition | 112x112 | 4.43 | 49.86 | 31.07 | 29.80 | --- |
 | [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 168.03 | 56.12 | 29.53 | --- |
 | [YOLOX](./models/object_detection_yolox/) | Object Detection | 640x640 | 176.68 | 1496.70 | 388.95 | 420.98 | --- |
 | [NanoDet](./models/object_detection_nanodet/) | Object Detection | 416x416 | 157.91 | 220.36 | 64.94 | 116.64 | --- |

@@ -63,7 +63,7 @@ Some examples are listed below. You can find more in the directory of each model

 ![largest selfie](./models/face_detection_yunet/examples/largest_selfie.jpg)

-### Facial Expression Recognition with Progressive Teacher(./models/facial_expression_recognition/)
+### Facial Expression Recognition with [Progressive Teacher](./models/facial_expression_recognition/)

 ![fer demo](./models/facial_expression_recognition/examples/selfie.jpg)

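The benchmark change in this table is the FER entry on KV3-NPU: re-quantizing to per-tensor mode drops latency from 108.53 ms to 29.80 ms. A quick sanity check of the implied speedup:

```python
# FER latency on KV3-NPU before and after per-tensor re-quantization,
# in milliseconds, taken from the README table change.
before_ms = 108.53
after_ms = 29.80

speedup = before_ms / after_ms
print(f"{speedup:.2f}x faster")  # -> 3.64x faster
```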
models/facial_expression_recognition/README.md (+1 −1)

@@ -6,7 +6,7 @@ Progressive Teacher: [Boosting Facial Expression Recognition by A Semi-Supervise
 Note:
 - Progressive Teacher is contributed by [Jing Jiang](https://scholar.google.com/citations?user=OCwcfAwAAAAJ&hl=zh-CN).
 - [MobileFaceNet](https://link.springer.com/chapter/10.1007/978-3-319-97909-0_46) is used as the backbone and the model is able to classify seven basic facial expressions (angry, disgust, fearful, happy, neutral, sad, surprised).
-- [facial_expression_recognition_mobilefacenet_2022july.onnx](https://github.com/opencv/opencv_zoo/raw/master/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx) is implemented thanks to [Chengrui Wang](https://github.com/opencv).
+- [facial_expression_recognition_mobilefacenet_2022july.onnx](https://github.com/opencv/opencv_zoo/raw/master/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx) is implemented thanks to [Chengrui Wang](https://github.com/crywang).

 Results of accuracy evaluation on [RAF-DB](http://whdeng.cn/RAF/model1.html).

models/facial_expression_recognition/demo.py (+1 −1)

@@ -35,7 +35,7 @@ def str2bool(v):
 parser = argparse.ArgumentParser(description='Facial Expression Recognition')
 parser.add_argument('--input', '-i', type=str, help='Path to the input image. Omit for using default camera.')
-parser.add_argument('--model', '-fm', type=str, default='./facial_expression_recognition_mobilefacenet_2022july.onnx', help='Path to the facial expression recognition model.')
+parser.add_argument('--model', '-m', type=str, default='./facial_expression_recognition_mobilefacenet_2022july.onnx', help='Path to the facial expression recognition model.')
 parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
 parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
 parser.add_argument('--save', '-s', type=str, default=False, help='Set true to save results. This flag is invalid when using camera.')
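The rename replaces the unconventional multi-letter short option `-fm` with the usual single-letter `-m`, matching the other demos. A minimal sketch of the corrected argument (the default path is the demo's own):

```python
import argparse

parser = argparse.ArgumentParser(description='Facial Expression Recognition')
parser.add_argument('--model', '-m', type=str,
                    default='./facial_expression_recognition_mobilefacenet_2022july.onnx',
                    help='Path to the facial expression recognition model.')

# Long and short spellings parse into the same destination.
args = parser.parse_args(['-m', 'my_model.onnx'])
print(args.model)  # -> my_model.onnx

# Omitting the flag falls back to the bundled default.
args = parser.parse_args([])
print(args.model)  # -> ./facial_expression_recognition_mobilefacenet_2022july.onnx
```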
(Git LFS pointer for the re-quantized model; file name not captured) (+2 −2)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:541597ca330e0e3babe883d0fa6ab121b0e3da65c9cc099c05ff274b3106a658
-size 1340132
+oid sha256:f0d7093aff10e2638c734c5f18a6a7eabd2b9239b20bdb9b8090865a6f69a1ed
+size 1364007
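The model binary itself lives in Git LFS, so only its pointer changes in the commit: the `oid` line is the SHA-256 digest of the stored blob and `size` is its byte length. A downloaded model can therefore be checked against the pointer; a minimal sketch (the sample bytes are illustrative, not the real model):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    # A Git LFS pointer records "oid sha256:<hex>", which is simply
    # the SHA-256 of the blob the pointer stands in for.
    return hashlib.sha256(data).hexdigest()

blob = b"hello"  # stand-in for the downloaded .onnx bytes
print(lfs_oid(blob))  # -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
print(len(blob))      # the pointer's "size" field is just len(blob)
```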

tools/quantize/inc_configs/fer.yaml (+13 −1)

@@ -17,9 +17,21 @@ quantization: # optional. tuning constrai
   dtype: float32
   label: True

+  model_wise: # optional. tuning constraints on model-wise for advance user to reduce tuning space.
+    weight:
+      granularity: per_tensor
+      scheme: asym
+      dtype: int8
+      algorithm: minmax
+    activation:
+      granularity: per_tensor
+      scheme: asym
+      dtype: int8
+      algorithm: minmax
+
 tuning:
   accuracy_criterion:
-    relative: 0.01 # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 1%.
+    relative: 0.02 # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 2%.
   exit_policy:
     timeout: 0 # optional. tuning timeout (seconds). default value is 0 which means early stop. combine with max_trials field to decide when to exit.
     max_trials: 50 # optional. max tune times. default value is 100. combine with timeout field to decide when to exit.
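The added `model_wise` block pins every weight and activation to a single `per_tensor` scale, with an `asym` scheme (free zero point) whose range is calibrated by `minmax`. A NumPy sketch of what that combination means (an illustration of the scheme, not the Intel Neural Compressor implementation):

```python
import numpy as np

def quantize_per_tensor(x, qmin=-128, qmax=127):
    # per_tensor: one scale/zero-point pair for the whole tensor.
    # minmax: the range comes from the observed min and max.
    # asym: the zero point floats instead of being fixed at 0.
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 2.0], dtype=np.float32)
q, s, zp = quantize_per_tensor(x)
print(dequantize(q, s, zp))  # recovers x to within one quantization step
```

Per-tensor granularity is coarser than per-channel, which is why the tuning block's `relative` accuracy budget is loosened from 0.01 to 0.02, but it is what makes the model run fast on the KV3-NPU.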
