This repository contains the code for a smart glasses application that captures and processes live camera feeds, detects faces, and integrates IMU (Inertial Measurement Unit) data for orientation adjustments. The project includes a Python script (`liveCamFeed.py`) and a Lua script (`live_camera_frame_app.lua`) that work together to handle camera feed processing, face detection, and IMU data streaming on a Frame device (smart glasses). The code leverages Edge Impulse's FOMO model for face detection and DeepFace for face recognition, with a Kalman filter for smoothing IMU data.
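As a rough illustration of the recognition step described above, a detected face crop can be matched against the known-faces database with DeepFace. The sketch below is illustrative rather than the script's actual code; the file name `face_crop.jpg` is a placeholder:

```python
# Minimal sketch of the recognition step (illustrative, not the actual
# liveCamFeed.py code). Assumes a detected face crop has been saved to disk
# and faces_db/ holds one or more reference images per person.
from deepface import DeepFace

# In recent DeepFace versions, find() returns a list of DataFrames,
# one per face found in img_path.
results = DeepFace.find(
    img_path="face_crop.jpg",   # hypothetical path to a detected face crop
    db_path="faces_db",         # database of known face images
    enforce_detection=False,    # the crop is already a face; skip re-detection
)

# The "identity" column holds the matching file paths from faces_db/.
if len(results) > 0 and not results[0].empty:
    print("Best match:", results[0].iloc[0]["identity"])
else:
    print("No match found")
```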
- `tests/`: Directory for test-related files.
- `frameutils/`: Utility scripts or modules for Frame device interaction.
- `face_detection_log.txt`: Log file recording face detection events (67 KB).
- `face_detected/`: Folder storing detected face images.
- `liveCamFeed.py`: Main Python script for camera feed processing and face recognition (37 KB).
- `lua/`: Directory containing Lua scripts.
- `live_camera_frame_app.lua`: Lua script running on the Frame device for camera and IMU control (7 KB).
- `faces_db/`: Directory for storing known face images.
- `model/`: Directory for machine learning models (e.g., `face-detection.eim`).
- Camera Feed Processing: Captures live video, applies auto-exposure adjustments, and sends images to the host for processing.
- Face Detection: Uses the Edge Impulse FOMO model to detect faces in the camera feed, with preprocessing to handle device orientation.
- Face Recognition: Employs DeepFace to match detected faces against a database in `faces_db/`.
- IMU Integration: Streams IMU data to adjust image orientation and smooth motion using a Kalman filter (a sketch of such a filter follows this list).
- Logging: Records detection details (coordinates, confidence, identity) in `face_detection_log.txt`.
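The exact form of the filter is not spelled out here; as a rough illustration, smoothing a single angle (pitch or roll) with a one-dimensional Kalman filter could look like the sketch below, where the noise parameters are assumptions:

```python
# Minimal 1-D Kalman filter sketch for smoothing one IMU angle (pitch or roll).
# Illustrative only; the actual filter in liveCamFeed.py may differ, and the
# noise parameters below are assumed values.
class Kalman1D:
    def __init__(self, q=0.01, r=0.5):
        self.q = q    # process noise variance (how fast the angle can drift)
        self.r = r    # measurement noise variance (how noisy the IMU is)
        self.x = 0.0  # current angle estimate
        self.p = 1.0  # estimate variance

    def update(self, measurement):
        # Predict: the angle is modeled as constant, so only the variance grows.
        self.p += self.q
        # Update: blend the prediction with the new measurement.
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x

pitch_filter = Kalman1D()
for raw_pitch in [2.1, 2.9, 1.7, 2.4]:   # fake IMU readings (degrees)
    print(round(pitch_filter.update(raw_pitch), 2))
```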
- Ensure required libraries are installed: `opencv-python`, `numpy`, `Pillow`, `edge_impulse_linux`, `deepface` (see the example command after this list).
- Place the trained FOMO model (`face-detection.eim`) in the `model/` directory.
- Populate `faces_db/` with known face images (JPEG/PNG).
- Run `liveCamFeed.py` to start the application.
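Assuming the dependencies are published under their usual PyPI names, a typical installation command would be:

```
pip install opencv-python numpy Pillow edge_impulse_linux deepface
```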
- The Python script initializes a display window showing the live feed with detected faces and auto-exposure parameters.
- The Lua script runs on the Frame device, handling camera settings, IMU data, and communication with the host.
- Press `Esc` to exit the application (a sketch of how the display loop might handle this follows below).
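A minimal sketch of such a host-side display loop, with a stand-in `get_next_frame()` helper in place of the real Bluetooth frame source:

```python
# Illustrative display loop; the actual loop in liveCamFeed.py also overlays
# detections and auto-exposure parameters on the frame.
import cv2
import numpy as np

def get_next_frame():
    # Stand-in for the real frame source; liveCamFeed.py instead decodes
    # image data received from the Frame device.
    return np.zeros((240, 320, 3), dtype=np.uint8)

cv2.namedWindow("Frame Live Feed")
while True:
    cv2.imshow("Frame Live Feed", get_next_frame())
    if cv2.waitKey(1) & 0xFF == 27:  # 27 is the key code for Esc
        break
cv2.destroyAllWindows()
```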
- Calibration of the IMU is performed at startup to set initial pitch and roll values.
- Detected faces and full camera images are saved in `face_detected/` with timestamps.
- Logs are appended to `face_detection_log.txt` for debugging and analysis (a sketch of both steps follows this list).
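The exact file-naming and log-line formats are not documented here; a minimal sketch of what the saving and logging steps could look like, with both formats assumed:

```python
# Illustrative only; the real file names and log format used by liveCamFeed.py
# may differ. Assumes the face_detected/ directory already exists.
import cv2
from datetime import datetime

def save_and_log(face_crop, x, y, confidence, identity):
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    cv2.imwrite(f"face_detected/face_{stamp}.jpg", face_crop)  # timestamped crop
    with open("face_detection_log.txt", "a") as log:
        log.write(f"{stamp} x={x} y={y} conf={confidence:.2f} id={identity}\n")
```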
- Python libraries: `asyncio`, `cv2`, `numpy`, `Pillow`, `edge_impulse_linux`, `deepface`.
- Frame device Lua libraries: `data`, `battery`, `camera`, `code`, `plain_text`, `imu`.
Welcome to the complete codebase of the Frame hardware. For regular usage, check out the docs here.
The codebase is split into three sections: the nRF52 Application, the nRF52 Bootloader, and the FPGA RTL.
The nRF52 is designed to handle the overall system operation. It runs Lua, as well as handling Bluetooth networking, AI tasks, and power management. The FPGA, meanwhile, simply handles acceleration of the graphics and camera.
- Ensure you have the ARM GCC Toolchain installed.

- Ensure you have the nRF Command Line Tools installed.

- Ensure you have nRF Util installed, along with the `device` and `nrf5sdk-tools` subcommands:

  ```
  ./nrfutil install device
  ./nrfutil install nrf5sdk-tools
  ```
- Clone this repository and initialize any submodules:

  ```
  git clone https://github.com/brilliantlabsAR/frame-codebase.git
  cd frame-codebase
  git submodule update --init
  ```
- You should now be able to build and flash the project to an nRF52840 DK by calling the following commands from the `frame-codebase` folder:

  ```
  make release
  make erase-jlink # Unlocks the flash protection if needed
  make flash-jlink
  ```
- Open the project in VSCode. There are some build tasks already configured within `.vscode/tasks.json`. Access them by pressing `Ctrl-Shift-P` (`Cmd-Shift-P` on MacOS) → `Tasks: Run Task`. Try running the `Build` task; the project should build normally. You may need to unlock the device by using the `Erase` task before programming or debugging.
- To enable IntelliSense, be sure to select the correct compiler from within VSCode: `Ctrl-Shift-P` (`Cmd-Shift-P` on MacOS) → `C/C++: Select IntelliSense Configuration` → `Use arm-none-eabi-gcc`.

- Install the Cortex-Debug extension for VSCode in order to enable debugging.
- A debugging launch is already configured within `.vscode/launch.json`. Run the `Application (J-Link)` launch configuration from the `Run and Debug` panel, or press `F5`. The project will automatically build and flash before launching.
- To monitor the logs, run the task `RTT Console (J-Link)` and ensure the `Application (J-Link)` launch configuration is running.

- To debug using Black Magic Probes, follow the instructions here.
The complete FPGA architecture is described in the documentation here.
The FPGA RTL is prebuilt and included in `fpga_application.h` for convenience. If you wish to modify the FPGA RTL, follow the instructions here.