
Commit 8bdc41b

Adding eDifFIQA(T), a light-weight model for face image quality assessment. (#263)
* Adding demo for eDifFIQA
* ReadMe update
* Removed invalid copyright from ediffiqa.py
* Replaced skimage dependency with relevant OpenCV function in demo.py
* Increased text size in image visualization
* Added clarification on how the obtained quality score differs between high and low quality images
* Removed unused import in ediffiqa.py
1 parent dcd849d commit 8bdc41b

File tree

7 files changed (+658 / -0 lines)

models/face_image_quality_assessment_ediffiqa/LICENSE
+395 lines (large diff not rendered by default)
models/face_image_quality_assessment_ediffiqa/README.md
@@ -0,0 +1,54 @@
# eDifFIQA(T)

eDifFIQA(T) is a light-weight version of the models presented in the paper [eDifFIQA: Towards Efficient Face Image Quality Assessment based on Denoising Diffusion Probabilistic Models](https://ieeexplore.ieee.org/document/10468647). It achieves state-of-the-art results in face image quality assessment.
Notes:

- The original implementation can be found [here](https://github.com/LSIbabnikz/eDifFIQA).
- The included model combines a pretrained MobileFaceNet backbone with a quality regression head trained using the procedure presented in the original paper.
- The model predicts quality scores of aligned face samples, where a higher predicted score corresponds to a higher-quality input sample.

- In the figure below we show the quality distribution on two distinct datasets: LFW [[1]](#1) and XQLFW [[2]](#2). The LFW dataset contains images of relatively high quality, whereas the XQLFW dataset contains images of variable quality. There is a clear difference between the two distributions: high-quality images from LFW receive quality scores above 0.5, while the mixed-quality images from XQLFW receive much lower scores on average (a score-filtering sketch is shown below the figure).

![qualityDist](./quality_distribution.png)
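
A minimal, illustrative sketch of using the score to filter samples (assuming face crops that are already detected and aligned to 112x112, e.g. by the demo below; the `eDifFIQA` wrapper comes from `ediffiqa.py` in this directory, and the 0.5 cutoff and file names are only examples):

```python
import cv2 as cv
import numpy as np

from ediffiqa import eDifFIQA  # wrapper class added in this directory

model = eDifFIQA(modelPath='ediffiqa_tiny_jun2024.onnx', inputSize=[112, 112])

# Hypothetical, already-aligned 112x112 face crops
aligned_faces = ['face1_aligned.jpg', 'face2_aligned.jpg']
for path in aligned_faces:
    face = cv.imread(path)                        # BGR image; the wrapper converts to RGB internally
    score = np.squeeze(model.infer(face)).item()  # higher score = higher quality
    label = 'keep' if score >= 0.5 else 'discard'
    print(f'{path}: quality={score:.3f} -> {label}')
```
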
<a id="1">[1]</a>
G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller,
“Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments,”
University of Massachusetts, Amherst, Tech. Rep. 07-49, October 2007.

<a id="2">[2]</a>
M. Knoche, S. Hormann, and G. Rigoll,
“Cross-Quality LFW: A Database for Analyzing Cross-Resolution Image Face Recognition in Unconstrained Environments,” in Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2021, pp. 1–5.

## Demo
***NOTE***: The provided demo uses [../face_detection_yunet](../face_detection_yunet) for face detection in order to properly align the face samples, while the original implementation uses a RetinaFace (ResNet-50) model; this may cause some differences between the results of the two implementations.

To try the demo, run the following commands:

```shell
# Assess the quality of 'image1'
python demo.py -i /path/to/image1

# Output all the arguments of the demo
python demo.py --help
```
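
The demo also exposes backend/target selection and model-path options via the arguments defined in `demo.py` below; for example, assuming a CUDA-enabled OpenCV build:

```shell
# Run on CUDA (backend-target pair 1) with an explicit quality-model path
python demo.py -i /path/to/image1 -bt 1 -mq ./ediffiqa_tiny_jun2024.onnx
```
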
### Example outputs
![ediffiqaDemo](./example_outputs/demo.jpg)

The demo prints the quality score of the sample to the terminal and also writes it onto the output image saved as __results.jpg__.

## License
All files in this directory are licensed under [CC-BY-4.0](./LICENSE).
models/face_image_quality_assessment_ediffiqa/demo.py
@@ -0,0 +1,155 @@
# This file is part of OpenCV Zoo project.
# It is subject to the license terms in the LICENSE file found in the same directory.

import sys
import argparse

import numpy as np
import cv2 as cv

# Check OpenCV version
opencv_python_version = lambda str_version: tuple(map(int, (str_version.split("."))))
assert opencv_python_version(cv.__version__) >= opencv_python_version("4.10.0"), \
       "Please install the latest opencv-python for this demo: python3 -m pip install --upgrade opencv-python"

sys.path.append('../face_detection_yunet')
from yunet import YuNet

from ediffiqa import eDifFIQA

# Valid combinations of backends and targets
backend_target_pairs = [
    [cv.dnn.DNN_BACKEND_OPENCV, cv.dnn.DNN_TARGET_CPU],
    [cv.dnn.DNN_BACKEND_CUDA, cv.dnn.DNN_TARGET_CUDA],
    [cv.dnn.DNN_BACKEND_CUDA, cv.dnn.DNN_TARGET_CUDA_FP16],
    [cv.dnn.DNN_BACKEND_TIMVX, cv.dnn.DNN_TARGET_NPU],
    [cv.dnn.DNN_BACKEND_CANN, cv.dnn.DNN_TARGET_NPU]
]

REFERENCE_FACIAL_POINTS = [
    [38.2946 , 51.6963 ],
    [73.5318 , 51.5014 ],
    [56.0252 , 71.7366 ],
    [41.5493 , 92.3655 ],
    [70.729904, 92.2041 ]
]

parser = argparse.ArgumentParser(description='eDifFIQA: Towards Efficient Face Image Quality Assessment based on Denoising Diffusion Probabilistic Models (https://github.com/LSIbabnikz/eDifFIQA).')
parser.add_argument('--input', '-i', type=str, default='./sample_image.jpg',
                    help='Usage: Set input to a certain image, defaults to "./sample_image.jpg".')
parser.add_argument('--backend_target', '-bt', type=int, default=0,
                    help='''Choose one of the backend-target pairs to run this demo:
                        {:d}: (default) OpenCV implementation + CPU,
                        {:d}: CUDA + GPU (CUDA),
                        {:d}: CUDA + GPU (CUDA FP16),
                        {:d}: TIM-VX + NPU,
                        {:d}: CANN + NPU
                    '''.format(*[x for x in range(len(backend_target_pairs))]))

ediffiqa_parser = parser.add_argument_group("eDifFIQA", " Parameters of eDifFIQA - For face image quality assessment ")
ediffiqa_parser.add_argument('--model_q', '-mq', type=str, default='ediffiqa_tiny_jun2024.onnx',
                             help="Usage: Set model type, defaults to 'ediffiqa_tiny_jun2024.onnx'.")

yunet_parser = parser.add_argument_group("YuNet", " Parameters of YuNet - For face detection ")
yunet_parser.add_argument('--model_d', '-md', type=str, default='../face_detection_yunet/face_detection_yunet_2023mar.onnx',
                          help="Usage: Set model type, defaults to '../face_detection_yunet/face_detection_yunet_2023mar.onnx'.")
yunet_parser.add_argument('--conf_threshold', type=float, default=0.9,
                          help='Usage: Set the minimum needed confidence for the model to identify a face, defaults to 0.9. Smaller values may result in faster detection, but will limit accuracy. Filter out faces of confidence < conf_threshold.')
yunet_parser.add_argument('--nms_threshold', type=float, default=0.3,
                          help='Usage: Suppress bounding boxes of iou >= nms_threshold. Default = 0.3.')
yunet_parser.add_argument('--top_k', type=int, default=5000,
                          help='Usage: Keep top_k bounding boxes before NMS.')
args = parser.parse_args()

def visualize(image, results):
    output = image.copy()
    cv.putText(output, f"{results:.3f}", (0, 20), cv.FONT_HERSHEY_DUPLEX, .8, (0, 0, 255))

    return output


def align_image(image, detection_data):
    """ Performs face alignment on given image using the provided face landmarks (keypoints)

    Args:
        image (np.array): Unaligned face image
        detection_data (np.array): Detection data provided by YuNet

    Returns:
        np.array: Aligned image
    """

    reference_pts = REFERENCE_FACIAL_POINTS

    ref_pts = np.float32(reference_pts)
    ref_pts_shp = ref_pts.shape

    if ref_pts_shp[0] == 2:
        ref_pts = ref_pts.T

    # Get source keypoints from YuNet detection data
    src_pts = np.float32(detection_data[0][4:-1]).reshape(5, 2)
    src_pts_shp = src_pts.shape

    if src_pts_shp[0] == 2:
        src_pts = src_pts.T

    tfm, _ = cv.estimateAffinePartial2D(src_pts, ref_pts, method=cv.LMEDS)

    face_img = cv.warpAffine(image, tfm, (112, 112))

    return face_img

if __name__ == '__main__':

    backend_id = backend_target_pairs[args.backend_target][0]
    target_id = backend_target_pairs[args.backend_target][1]

    # Instantiate eDifFIQA(T) (quality assessment)
    model_quality = eDifFIQA(
        modelPath=args.model_q,
        inputSize=[112, 112],
    )
    model_quality.setBackendAndTarget(
        backendId=backend_id,
        targetId=target_id
    )

    # Instantiate YuNet (face detection)
    model_detect = YuNet(
        modelPath=args.model_d,
        inputSize=[320, 320],
        confThreshold=args.conf_threshold,
        nmsThreshold=args.nms_threshold,
        topK=args.top_k,
        backendId=backend_id,
        targetId=target_id
    )

    # Read the input image
    image = cv.imread(args.input)
    h, w, _ = image.shape

    # Face Detection
    model_detect.setInputSize([w, h])
    results_detect = model_detect.infer(image)

    assert results_detect.size != 0, f" Face could not be detected in: {args.input}. "

    # Face Alignment
    aligned_image = align_image(image, results_detect)

    # Quality Assessment
    quality = model_quality.infer(aligned_image)
    quality = np.squeeze(quality).item()

    viz_image = visualize(aligned_image, quality)

    print(f" Quality score of {args.input}: {quality:.3f} ")

    print(" Saving visualization to results.jpg. ")
    cv.imwrite('results.jpg', viz_image)
models/face_image_quality_assessment_ediffiqa/ediffiqa.py
@@ -0,0 +1,45 @@
# This file is part of OpenCV Zoo project.
# It is subject to the license terms in the LICENSE file found in the same directory.

import numpy as np
import cv2 as cv


class eDifFIQA:

    def __init__(self, modelPath, inputSize=[112, 112]):
        self.modelPath = modelPath
        self.inputSize = tuple(inputSize) # [w, h]

        self.model = cv.dnn.readNetFromONNX(self.modelPath)

    @property
    def name(self):
        return self.__class__.__name__

    def setBackendAndTarget(self, backendId, targetId):
        self._backendId = backendId
        self._targetId = targetId
        self.model.setPreferableBackend(self._backendId)
        self.model.setPreferableTarget(self._targetId)

    def infer(self, image):
        # Preprocess image
        image = self._preprocess(image)
        # Forward
        self.model.setInput(image)
        quality_score = self.model.forward()

        return quality_score

    def _preprocess(self, image: cv.Mat):
        # Change image from BGR to RGB
        image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
        # Resize to (112, 112)
        image = cv.resize(image, self.inputSize)
        # Scale to [0, 1] and normalize by mean=0.5, std=0.5
        image = ((image / 255) - 0.5) / 0.5
        # Add batch dimension and move channel axis (NHWC -> NCHW)
        image = np.moveaxis(image[None, ...], -1, 1)

        return image
models/face_image_quality_assessment_ediffiqa/ediffiqa_tiny_jun2024.onnx (Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9426c899cc0f01665240cb7d9e7f98e18e24e456c178326c771a43da289bfc6a
size 7272678