
Vox Manus 🤟


ASL Recognition App built with YOLOv9 and Flutter


Overview

Vox Manus is an American Sign Language (ASL) recognition application that pairs a YOLOv9 object-detection model (exported to TensorFlow Lite for on-device inference) with a Flutter front end for cross-platform deployment.
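The README does not spell out the recognition loop, but at a high level each camera frame is run through the TFLite model and the resulting detections are reduced to a single letter. A minimal Python sketch of that last step (the class list, the `(class_index, confidence)` tuple format, and the 0.5 threshold are illustrative assumptions, not values taken from this repository):

```python
# Sketch of per-frame post-processing for an ASL letter detector.
# The class list and confidence threshold below are illustrative
# assumptions, not values taken from the Vox Manus repository.

CLASS_NAMES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # 26 letters
CONF_THRESHOLD = 0.5

def decode_detections(detections):
    """Pick the most confident detection above the threshold.

    `detections` is a list of (class_index, confidence) pairs, as a
    TFLite YOLO head might produce after non-max suppression.
    Returns the recognized letter, or None if nothing is confident enough.
    """
    best = None
    for class_index, confidence in detections:
        if confidence < CONF_THRESHOLD:
            continue
        if best is None or confidence > best[1]:
            best = (class_index, confidence)
    return CLASS_NAMES[best[0]] if best else None
```

For example, `decode_detections([(0, 0.9), (1, 0.4)])` returns `"A"`: the second detection falls below the threshold and is ignored.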

🛠️ Tech Stack

  • Machine Learning: YOLOv9, TensorFlow Lite
  • Frontend: Flutter, Dart
  • Backend: Firebase
  • Development Tools:
    • Android Studio
    • VS Code
    • Google Colab (for model training)
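YOLO-family detectors are typically trained and run on a fixed square input, so camera frames need a letterbox resize (scale preserving aspect ratio, then pad) before inference. A rough NumPy sketch, assuming a 640×640 input size and using a nearest-neighbor resize for brevity (a real app would resize via the camera plugin or an image library):

```python
import numpy as np

def letterbox(frame, size=640, pad_value=114):
    """Resize `frame` (H, W, 3 uint8) onto a square `size`x`size` canvas,
    preserving aspect ratio and padding the borders with `pad_value`.
    Nearest-neighbor resize is used for brevity; the 640 and 114 defaults
    are common YOLO conventions, assumed rather than taken from this repo."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor index maps for the resize.
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = frame[rows][:, cols]
    # Center the resized image on a padded canvas.
    canvas = np.full((size, size, 3), pad_value, dtype=frame.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```

A 640×480 camera frame, for instance, scales to 640×480 and is padded with 80 gray rows above and below before being fed to the model.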

📁 Project Structure

vox_manus/
├── lib/ # Flutter source code
│ ├── views/ # UI screens
│ ├── services/ # Business logic and services
│ ├── components/ # Reusable widgets
│ └── main.dart # Entry point
│
├── assets/ # Static assets
│ ├── images/
│ └── fonts/
│
├── models/ # Trained models (.tflite)
│ ├── First_Train/
│ └── Second_Train/
│
├── test/ # Test files
├── android/ # Android-specific code
├── ios/ # iOS-specific code
├── web/ # Web-specific code
│
├── pubspec.yaml # Flutter dependencies
├── README.md # Project documentation
└── .gitignore # Git ignore file

📋 Prerequisites


📋 ASL YOLOv9 Model Benchmarks

First Training With 20 Epochs


Second Training With 50 Epochs


📱 Mobile Application Updates

Homepage Version 1

Homepage Version 2

Homepage Version 3

Login Page


📚 Documentation

Tutorials Referenced:


🏃‍♀️‍➡️ TODO / Things to fix

  • Integrate the custom models (First_Train, Second_Train) to work with CameraWidget
  • Consider using Firebase to store/run trained models
  • Implement Text-to-Gesture translation using an API
  • Save user translations locally to the user's device
  • Possibly move the project to iOS instead of Android if the TensorFlow packages still have issues
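Saving user translations implies an intermediate step the list does not mention explicitly: stabilizing noisy per-frame predictions into text before anything is stored. A Python sketch of one possible majority-vote smoother (the app itself is Dart/Flutter; the class name and window size here are illustrative assumptions):

```python
from collections import Counter, deque

class LetterSmoother:
    """Turn a noisy stream of per-frame letter predictions into stable text.

    A letter is committed only when it wins a majority vote over the last
    `window` frames, and repeats are suppressed until a gap (None frames)
    or a different letter appears. The window size is an illustrative
    assumption, not a value from the Vox Manus repository.
    """

    def __init__(self, window=5):
        self.frames = deque(maxlen=window)
        self.last_committed = None
        self.text = []

    def feed(self, letter):
        """Record one per-frame prediction (a letter or None).

        Returns the newly committed letter, or None if nothing changed."""
        self.frames.append(letter)
        if len(self.frames) < self.frames.maxlen:
            return None
        winner, count = Counter(self.frames).most_common(1)[0]
        if winner is None or count <= self.frames.maxlen // 2:
            self.last_committed = None  # gap: allow the same letter again
            return None
        if winner != self.last_committed:
            self.last_committed = winner
            self.text.append(winner)
            return winner
        return None
```

Feeding the stream `A A A A, gap, B B B` through a window of 3 yields the text `AB`: the repeated A frames collapse to one committed letter, and the gap re-arms the smoother for the next sign.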
