Commit b2ed849

Merge pull request #1184 from serengil/feat-task-1204-centerface

Feat task 1204 centerface

2 parents 10b2e18 + d52ab37

12 files changed: +268 -30 lines

README.md

+12 -9
@@ -194,9 +194,9 @@ Age model got ± 4.65 MAE; gender model got 97.44% accuracy, 96.29% precision an

 **Face Detectors** - [`Demo`](https://youtu.be/GZ2p2hj2H5k)

-Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that just alignment increases the face recognition accuracy almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`SSD`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MTCNN`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), [`Faster MTCNN`](https://github.com/timesler/facenet-pytorch), [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), [`YOLOv8 Face`](https://github.com/derronqi/yolov8-face) and [`YuNet`](https://github.com/ShiqiYu/libfacedetection) detectors are wrapped in deepface.
+Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`Ssd`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MtCnn`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), `Faster MTCNN`, [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), `Yolo`, `YuNet` and `CenterFace` detectors are wrapped in deepface.

-<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-portfolio-v5.jpg" width="95%" height="95%"></p>
+<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-portfolio-v6.jpg" width="95%" height="95%"></p>

 All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.

@@ -206,11 +206,12 @@ backends = [
   'ssd',
   'dlib',
   'mtcnn',
+  'fastmtcnn',
   'retinaface',
   'mediapipe',
   'yolov8',
   'yunet',
-  'fastmtcnn',
+  'centerface',
 ]

 #face verification
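For reference, switching among these backends is a single-argument change in any deepface call. A minimal sketch, assuming deepface is installed and that img1.jpg and img2.jpg are placeholder image paths:

```python
from deepface import DeepFace

# minimal sketch: the image paths are placeholders
result = DeepFace.verify(
    img1_path="img1.jpg",
    img2_path="img2.jpg",
    detector_backend="centerface",  # any entry from the backends list above
)
print(result["verified"])
```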
@@ -317,9 +318,7 @@ You can also run these commands if you are running deepface with docker. Please

 ## FAQ and Troubleshooting

-If you believe you have identified a bug or encountered a limitation in DeepFace that is not covered in the [existing issues](https://github.com/serengil/deepface/issues) or [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed), kindly open a new issue. Ensure that your submission includes clear and detailed reproduction steps, such as your Python version, your DeepFace version (provided by `DeepFace.__version__`), versions of dependent packages (provided by `pip freeze`), specifics of any exception messages, details about how you are calling DeepFace, and the input image(s) you are using.
-
-Additionally, it is possible to encounter issues due to recently released dependencies, primarily Python itself or TensorFlow. It is recommended to synchronize your dependencies with the versions [specified in my environment](https://github.com/serengil/deepface/blob/master/requirements_local) and [same python version](https://github.com/serengil/deepface/blob/master/Dockerfile#L2) not to have potential compatibility issues.
+If you believe you have identified a bug or encountered a limitation in DeepFace that is not covered in the [existing issues](https://github.com/serengil/deepface/issues) or [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed), kindly open a new issue. Ensure that your submission includes clear and detailed reproduction steps, such as your Python version, your DeepFace version, versions of dependent packages (provided by `pip freeze`), specifics of any exception messages, details about how you are calling DeepFace, and the input image(s) you are using.

 ## Contribution

@@ -351,7 +350,7 @@ If you use deepface in your research for facial recognition purposes, please cite
   pages = {23-27},
   year = {2020},
   doi = {10.1109/ASYU50717.2020.9259802},
-  url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
+  url = {https://ieeexplore.ieee.org/document/9259802},
   organization = {IEEE}
 }
 ```
@@ -366,7 +365,7 @@ If you use deepface in your research for facial attribute analysis purposes such
   pages = {1-4},
   year = {2021},
   doi = {10.1109/ICEET53442.2021.9659697},
-  url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
+  url = {https://ieeexplore.ieee.org/document/9659697},
   organization = {IEEE}
 }
 ```
@@ -377,6 +376,10 @@ Also, if you use deepface in your GitHub projects, please add `deepface` in the

 DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/deepface/blob/master/LICENSE) for more details.

-DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). Besides, age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Licence types will be inherited if you are going to use those models. Please check the license types of those models for production purposes.
+DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). In addition, the age, gender and race / ethnicity models were trained on the VGG-Face backbone with transfer learning.
+
+Similarly, DeepFace wraps many face detectors: [OpenCv](https://github.com/opencv/opencv/blob/4.x/LICENSE), [Ssd](https://github.com/opencv/opencv/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/LICENSE.txt), [MtCnn](https://github.com/ipazc/mtcnn/blob/master/LICENSE), [Fast MtCnn](https://github.com/timesler/facenet-pytorch/blob/master/LICENSE.md), [RetinaFace](https://github.com/serengil/retinaface/blob/master/LICENSE), [MediaPipe](https://github.com/google/mediapipe/blob/master/LICENSE), [YuNet](https://github.com/ShiqiYu/libfacedetection/blob/master/LICENSE), [Yolo](https://github.com/derronqi/yolov8-face/blob/main/LICENSE) and [CenterFace](https://github.com/Star-Clouds/CenterFace/blob/master/LICENSE).
+
+License types will be inherited if you use those models. Please check the license types of those models for production purposes.

 DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) is created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and it is licensed under [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).

deepface/DeepFace.py

+14 -7
@@ -88,7 +88,8 @@ def verify(
             OpenFace, DeepFace, DeepID, Dlib, ArcFace, SFace and GhostFaceNet (default is VGG-Face).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         distance_metric (string): Metric for measuring similarity. Options: 'cosine',
             'euclidean', 'euclidean_l2' (default is cosine).
@@ -168,7 +169,8 @@ def analyze(
             Set to False to avoid the exception for low-resolution images (default is True).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         distance_metric (string): Metric for measuring similarity. Options: 'cosine',
             'euclidean', 'euclidean_l2' (default is cosine).
@@ -272,7 +274,8 @@ def find(
             Set to False to avoid the exception for low-resolution images (default is True).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         align (boolean): Perform alignment based on the eye positions (default is True).

@@ -348,7 +351,8 @@ def represent(
             (default is True).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         align (boolean): Perform alignment based on the eye positions (default is True).

@@ -406,7 +410,8 @@ def stream(
             OpenFace, DeepFace, DeepID, Dlib, ArcFace, SFace and GhostFaceNet (default is VGG-Face).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         distance_metric (string): Metric for measuring similarity. Options: 'cosine',
             'euclidean', 'euclidean_l2' (default is cosine).
@@ -454,7 +459,8 @@ def extract_faces(
             as a string, numpy array (BGR), or base64 encoded images.

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         enforce_detection (boolean): If no face is detected in an image, raise an exception.
             Set to False to avoid the exception for low-resolution images (default is True).
@@ -520,7 +526,8 @@ def detectFace(
             added to resize the image (default is (224, 224)).

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
-            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8' or 'skip' (default is opencv).
+            'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'centerface' or 'skip'
+            (default is opencv).

         enforce_detection (boolean): If no face is detected in an image, raise an exception.
             Set to False to avoid the exception for low-resolution images (default is True).
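All seven docstrings now document the same 'centerface' option. A minimal usage sketch for one of these entry points, extract_faces, assuming img.jpg is a placeholder input image:

```python
from deepface import DeepFace

# minimal sketch: img.jpg is a placeholder input image
face_objs = DeepFace.extract_faces(
    img_path="img.jpg",
    detector_backend="centerface",
)
for face_obj in face_objs:
    # each result carries the cropped face, its region and the detector confidence
    print(face_obj["facial_area"], face_obj["confidence"])
```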

deepface/detectors/CenterFace.py

+217
@@ -0,0 +1,217 @@
+# built-in dependencies
+import os
+from typing import List
+
+# 3rd party dependencies
+import numpy as np
+import cv2
+import gdown
+
+# project dependencies
+from deepface.commons import folder_utils
+from deepface.models.Detector import Detector, FacialAreaRegion
+from deepface.commons import logger as log
+
+logger = log.get_singletonish_logger()
+
+# pylint: disable=c-extension-no-member
+
+WEIGHTS_URL = "https://github.com/Star-Clouds/CenterFace/raw/master/models/onnx/centerface.onnx"
+
+
+class CenterFaceClient(Detector):
+    def __init__(self):
+        # BUG: model must be flushed for each call
+        # self.model = self.build_model()
+        pass
+
+    def build_model(self):
+        """
+        Download pre-trained weights of the CenterFace model if necessary and load the built model
+        """
+        weights_path = f"{folder_utils.get_deepface_home()}/.deepface/weights/centerface.onnx"
+        if not os.path.isfile(weights_path):
+            logger.info(f"Downloading CenterFace weights from {WEIGHTS_URL} to {weights_path}...")
+            try:
+                gdown.download(WEIGHTS_URL, weights_path, quiet=False)
+            except Exception as err:
+                raise ValueError(
+                    f"Exception while downloading CenterFace weights from {WEIGHTS_URL}. "
+                    f"You may consider downloading it to {weights_path} manually."
+                ) from err
+            logger.info(f"CenterFace weights are downloaded to {os.path.basename(weights_path)}")
+
+        return CenterFace(weight_path=weights_path)
+
+    def detect_faces(self, img: np.ndarray) -> List["FacialAreaRegion"]:
+        """
+        Detect and align face with CenterFace
+
+        Args:
+            img (np.ndarray): pre-loaded image as numpy array
+
+        Returns:
+            results (List[FacialAreaRegion]): A list of FacialAreaRegion objects
+        """
+        resp = []
+
+        threshold = float(os.getenv("CENTERFACE_THRESHOLD", "0.80"))
+
+        # BUG: model causes problematic results from 2nd call if it is not flushed
+        # detections, landmarks = self.model.forward(
+        #     img, img.shape[0], img.shape[1], threshold=threshold
+        # )
+        detections, landmarks = self.build_model().forward(
+            img, img.shape[0], img.shape[1], threshold=threshold
+        )
+
+        for i, detection in enumerate(detections):
+            boxes, confidence = detection[:4], detection[4]
+
+            x = boxes[0]
+            y = boxes[1]
+            w = boxes[2] - x
+            h = boxes[3] - y
+
+            landmark = landmarks[i]
+
+            right_eye = (int(landmark[0]), int(landmark[1]))
+            left_eye = (int(landmark[2]), int(landmark[3]))
+            # nose = (int(landmark[4]), int(landmark[5]))
+            # mouth_right = (int(landmark[6]), int(landmark[7]))
+            # mouth_left = (int(landmark[8]), int(landmark[9]))
+
+            facial_area = FacialAreaRegion(
+                x=x,
+                y=y,
+                w=w,
+                h=h,
+                left_eye=left_eye,
+                right_eye=right_eye,
+                confidence=min(max(0, float(confidence)), 1.0),
+            )
+            resp.append(facial_area)
+
+        return resp
+
+
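CenterFaceClient is normally reached through the detector_backend argument shown earlier, but the class can also be exercised directly for debugging. A minimal sketch, assuming img.jpg is a placeholder image on disk:

```python
import cv2

from deepface.detectors.CenterFace import CenterFaceClient

# minimal sketch: img.jpg is a placeholder, loaded as a BGR numpy array
img = cv2.imread("img.jpg")

detector = CenterFaceClient()
# the detection threshold can be tuned via the CENTERFACE_THRESHOLD env variable (default 0.80)
for region in detector.detect_faces(img):
    print(region.x, region.y, region.w, region.h, region.confidence)
```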
+class CenterFace:
+    """
+    This class is heavily inspired from
+    github.com/Star-Clouds/CenterFace/blob/master/prj-python/centerface.py
+    """
+
+    def __init__(self, weight_path: str):
+        self.net = cv2.dnn.readNetFromONNX(weight_path)
+        self.img_h_new, self.img_w_new, self.scale_h, self.scale_w = 0, 0, 0, 0
+
+    def forward(self, img, height, width, threshold=0.5):
+        self.img_h_new, self.img_w_new, self.scale_h, self.scale_w = self.transform(height, width)
+        return self.inference_opencv(img, threshold)
+
+    def inference_opencv(self, img, threshold):
+        blob = cv2.dnn.blobFromImage(
+            img,
+            scalefactor=1.0,
+            size=(self.img_w_new, self.img_h_new),
+            mean=(0, 0, 0),
+            swapRB=True,
+            crop=False,
+        )
+        self.net.setInput(blob)
+        heatmap, scale, offset, lms = self.net.forward(["537", "538", "539", "540"])
+        return self.postprocess(heatmap, lms, offset, scale, threshold)
+
+    def transform(self, h, w):
+        img_h_new, img_w_new = int(np.ceil(h / 32) * 32), int(np.ceil(w / 32) * 32)
+        scale_h, scale_w = img_h_new / h, img_w_new / w
+        return img_h_new, img_w_new, scale_h, scale_w
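transform rounds the working resolution up to the next multiple of 32 (the network's total downsampling factor), and postprocess later divides by the resulting scale factors to undo the resize. A quick worked example of the arithmetic:

```python
import numpy as np

# worked example for a 500 x 650 (h x w) frame
h, w = 500, 650
img_h_new, img_w_new = int(np.ceil(h / 32) * 32), int(np.ceil(w / 32) * 32)
assert (img_h_new, img_w_new) == (512, 672)  # next multiples of 32
scale_h, scale_w = img_h_new / h, img_w_new / w  # 1.024 and ~1.0338
```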
+
+    def postprocess(self, heatmap, lms, offset, scale, threshold):
+        dets, lms = self.decode(
+            heatmap, scale, offset, lms, (self.img_h_new, self.img_w_new), threshold=threshold
+        )
+        if len(dets) > 0:
+            dets[:, 0:4:2], dets[:, 1:4:2] = (
+                dets[:, 0:4:2] / self.scale_w,
+                dets[:, 1:4:2] / self.scale_h,
+            )
+            lms[:, 0:10:2], lms[:, 1:10:2] = (
+                lms[:, 0:10:2] / self.scale_w,
+                lms[:, 1:10:2] / self.scale_h,
+            )
+        else:
+            dets = np.empty(shape=[0, 5], dtype=np.float32)
+            lms = np.empty(shape=[0, 10], dtype=np.float32)
+        return dets, lms
+
+    def decode(self, heatmap, scale, offset, landmark, size, threshold=0.1):
+        heatmap = np.squeeze(heatmap)
+        scale0, scale1 = scale[0, 0, :, :], scale[0, 1, :, :]
+        offset0, offset1 = offset[0, 0, :, :], offset[0, 1, :, :]
+        c0, c1 = np.where(heatmap > threshold)
+        boxes, lms = [], []
+        if len(c0) > 0:
+            # pylint:disable=consider-using-enumerate
+            for i in range(len(c0)):
+                s0, s1 = np.exp(scale0[c0[i], c1[i]]) * 4, np.exp(scale1[c0[i], c1[i]]) * 4
+                o0, o1 = offset0[c0[i], c1[i]], offset1[c0[i], c1[i]]
+                s = heatmap[c0[i], c1[i]]
+                x1, y1 = max(0, (c1[i] + o1 + 0.5) * 4 - s1 / 2), max(
+                    0, (c0[i] + o0 + 0.5) * 4 - s0 / 2
+                )
+                x1, y1 = min(x1, size[1]), min(y1, size[0])
+                boxes.append([x1, y1, min(x1 + s1, size[1]), min(y1 + s0, size[0]), s])
+                lm = []
+                for j in range(5):
+                    lm.append(landmark[0, j * 2 + 1, c0[i], c1[i]] * s1 + x1)
+                    lm.append(landmark[0, j * 2, c0[i], c1[i]] * s0 + y1)
+                lms.append(lm)
+            boxes = np.asarray(boxes, dtype=np.float32)
+            keep = self.nms(boxes[:, :4], boxes[:, 4], 0.3)
+            boxes = boxes[keep, :]
+            lms = np.asarray(lms, dtype=np.float32)
+            lms = lms[keep, :]
+        return boxes, lms
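decode treats every heatmap cell above the threshold as a face candidate. The output maps are at 1/4 of the padded input resolution, so a peak at cell (c0, c1) is mapped back at stride 4, the offset map refines the centre, and the exponentiated scale map gives the box size. A worked sketch for a single peak with toy values:

```python
import numpy as np

# toy values for one heatmap peak at cell (c0, c1)
c0, c1 = 10, 20
o0, o1 = 0.2, -0.1                         # offset map values at that cell
s0, s1 = np.exp(0.5) * 4, np.exp(0.3) * 4  # box height and width at stride 4
x1 = (c1 + o1 + 0.5) * 4 - s1 / 2          # left edge in padded-image pixels
y1 = (c0 + o0 + 0.5) * 4 - s0 / 2          # top edge
# the candidate box is [x1, y1, x1 + s1, y1 + s0], clipped to the padded size
```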
+
+    def nms(self, boxes, scores, nms_thresh):
+        x1 = boxes[:, 0]
+        y1 = boxes[:, 1]
+        x2 = boxes[:, 2]
+        y2 = boxes[:, 3]
+        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+        order = np.argsort(scores)[::-1]
+        num_detections = boxes.shape[0]
+        suppressed = np.zeros((num_detections,), dtype=bool)
+
+        keep = []
+        for _i in range(num_detections):
+            i = order[_i]
+            if suppressed[i]:
+                continue
+            keep.append(i)
+
+            ix1 = x1[i]
+            iy1 = y1[i]
+            ix2 = x2[i]
+            iy2 = y2[i]
+            iarea = areas[i]
+
+            for _j in range(_i + 1, num_detections):
+                j = order[_j]
+                if suppressed[j]:
+                    continue
+
+                xx1 = max(ix1, x1[j])
+                yy1 = max(iy1, y1[j])
+                xx2 = min(ix2, x2[j])
+                yy2 = min(iy2, y2[j])
+                w = max(0, xx2 - xx1 + 1)
+                h = max(0, yy2 - yy1 + 1)
+
+                inter = w * h
+                ovr = inter / (iarea + areas[j] - inter)
+                if ovr >= nms_thresh:
+                    suppressed[j] = True
+
+        return keep
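nms is the classic greedy suppression: keep the highest-scored box, drop everything overlapping it beyond the IoU threshold, and repeat. A self-contained check with toy boxes (decode calls it with a 0.3 threshold):

```python
import numpy as np

# two heavily overlapping boxes and one distant box, as (x1, y1, x2, y2)
boxes = np.array(
    [[10, 10, 60, 60], [12, 12, 62, 62], [200, 200, 250, 250]], dtype=np.float32
)
scores = np.array([0.9, 0.6, 0.8], dtype=np.float32)

# IoU of the first two boxes, using the same +1 pixel convention as above
inter = (min(60, 62) - max(10, 12) + 1) ** 2   # 49 x 49 overlap
union = 2 * 51 * 51 - inter
print(inter / union)  # ~0.86 >= 0.3, so the 0.6-scored box is suppressed: keep = [0, 2]
```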
