ASL Recognition App built with YOLOv9 and Flutter
Vox Manus is an American Sign Language (ASL) recognition application that pairs state-of-the-art YOLOv9 object detection with Flutter for cross-platform deployment.
- Machine Learning: YOLOv9, TensorFlow Lite
- Frontend: Flutter, Dart
- Backend: Firebase
- Development Tools:
  - Android Studio
  - VS Code
  - Google Colab (for model training)
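The models in models/ were trained in Google Colab. Below is a minimal sketch of that training-and-export flow using the Ultralytics API; the 20- and 50-epoch runs mirror First_Train and Second_Train, while data.yaml and the yolov9c.pt starting checkpoint are assumptions, not files pinned by this repo:

```python
# Minimal YOLOv9 training + TFLite export sketch (run in Google Colab).
# "data.yaml" and the "yolov9c.pt" starting weights are assumptions;
# the epoch count matches the First_Train run described below.
from ultralytics import YOLO

# Start from pretrained YOLOv9 weights.
model = YOLO("yolov9c.pt")

# First run: 20 epochs on the ASL dataset (data.yaml is hypothetical).
model.train(data="data.yaml", epochs=20, imgsz=640)

# Export the trained model to TensorFlow Lite for the Flutter app.
model.export(format="tflite")  # writes a .tflite file next to the weights
```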
vox_manus/
├── lib/ # Flutter source code
│ ├── views/ # UI screens
│ ├── services/ # Business logic and services
│ ├── components/ # Reusable widgets
│ └── main.dart # Entry point
│
├── assets/ # Static assets
│ ├── images/
│ └── fonts/
│
├── models/ # Trained models (.tflite)
│ ├── First_Train/
│ └── Second_Train/
│
├── test/ # Test files
├── android/ # Android-specific code
├── ios/ # iOS-specific code
├── web/ # Web-specific code
│
├── pubspec.yaml # Flutter dependencies
├── README.md # Project documentation
└── .gitignore # Git ignore file
- Flutter (3.0 or higher)
- Android Studio with Android SDK
- Git
- Python (3.8 or higher)
- pip (for Python packages)
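To confirm the Python side of the toolchain matches these prerequisites, a quick check like the following can be run; the tensorflow and ultralytics imports are assumptions about which packages the training notebook installs via pip:

```python
# Sanity-check the training environment against the prerequisites above.
# The tensorflow/ultralytics packages are assumptions about this project's
# pip dependencies, not pinned by this README.
import sys

assert sys.version_info >= (3, 8), "Python 3.8 or higher is required"

import tensorflow as tf
from ultralytics import __version__ as yolo_version

print(f"Python      {sys.version.split()[0]}")
print(f"TensorFlow  {tf.__version__}")
print(f"Ultralytics {yolo_version}")
```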
First Training with 20 Epochs
Second Training with 50 Epochs
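Before bundling either run's exported model into the Flutter assets, it can be sanity-checked in Python. A minimal sketch; the model path under models/Second_Train/ and its filename are assumptions about this repo's layout:

```python
# Sanity-check an exported .tflite model before bundling it into the app.
# The model path and filename are assumptions about this repo's layout.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="models/Second_Train/best_float32.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image of the expected shape and dtype to confirm the graph runs.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

preds = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", preds.shape)
```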
Homepage Version 1
Homepage Version 2
Homepage Version 3
Login Page
- Ultralytics Documentation
- YOLOv9 Documentation
- Ultralytics YOLOv9 Documentation
- ASL Alphabet Dataset
- Google ML Kit Image Labeling
- Google ML Kit Object Detection
Tutorials Referenced:
- How to Train Ultralytics YOLOv8 Models on Your Custom Dataset in Google Colab | Episode 3
- Flutter Basic Training - 12 Minute Bootcamp
- Flutter Authentication Tutorials by Mitch Koko
- Integrate the custom models (First_Train, Second_Train) with the CameraWidget
- Consider using Firebase to store and serve the trained models (see the sketch after this list)
- Implement text-to-gesture translation using an API
- Save user translations locally to the user's device
- Possibly move the project to iOS instead of Android if the TensorFlow packages still have issues
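For the Firebase item above, one possibility is hosting the .tflite files with Firebase ML custom model hosting so the app can download them at runtime. A minimal sketch using the Firebase Admin SDK's ml module; the service-account path, storage bucket, display name, and model path are all assumptions:

```python
# Sketch: publish a trained .tflite model to Firebase ML custom model hosting.
# serviceAccount.json, the bucket name, the display name, and the model path
# are assumptions, not values from this repo.
import firebase_admin
from firebase_admin import credentials, ml

cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"storageBucket": "vox-manus.appspot.com"})

# Upload the TFLite file to Cloud Storage and register it as a Firebase ML model.
source = ml.TFLiteGCSModelSource.from_tflite_model_file(
    "models/Second_Train/best_float32.tflite"
)
model = ml.Model(
    display_name="asl_recognizer",  # the app would download the model by this name
    model_format=ml.TFLiteFormat(model_source=source),
)
created = ml.create_model(model)
ml.publish_model(created.model_id)  # make the model available to clients
```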