# Sign Language Recognition

This repository contains the code for a sign language recognition project, which recognizes and interprets sign language gestures using deep learning techniques.

## Table of Contents
- Overview
- Features
- Dataset
- Installation
- Usage
- Streamlit Application
- Model Architecture
- Training
- Evaluation
- Results
- Contributing
- License
- References
## Overview

This project develops a convolutional neural network (CNN) that recognizes sign language gestures from images. The model is trained to identify the hand gestures that correspond to different signs.
## Features

- **Real-time Sign Language Recognition**: The model recognizes gestures in real time using a webcam.
- **High Accuracy**: Achieved through the use of deep learning techniques.
- **User-friendly Interface**: An intuitive interface for interacting with the model.
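Real-time recognition typically requires preparing each webcam frame before it is passed to the model. The sketch below shows one plausible preprocessing step; the function name, input size, and normalization are assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Center-crop a (H, W, 3) frame to a square, downsample it by
    strided sampling, and scale pixel values to [0, 1].

    Assumes the frame is at least `size` pixels on each side.
    """
    h, w = frame.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    square = frame[top:top + side, left:left + side]
    step = max(side // size, 1)           # coarse resize without OpenCV
    small = square[::step, ::step][:size, :size]
    return small.astype(np.float32) / 255.0
```

In the app itself, a frame captured with OpenCV's `cv2.VideoCapture` could be passed through this function on every iteration of the capture loop before inference.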
## Dataset

The dataset was custom-prepared by capturing images of each gesture and assigning corresponding labels. Data for the various sign language gestures was collected manually, ensuring comprehensive coverage of the targeted vocabulary.

The dataset structure includes:

- **Training Data**: Organized in directories based on gesture classes.
- **Labels**: Stored in `sign_language_data`, which maps class indices to labels.
- **Test Data**: Placed in the `data/test` directory for evaluation purposes.

This custom dataset can be replaced or expanded to include additional gestures or languages.
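Because the training data is organized as one sub-directory per gesture class, a class-index-to-label mapping can be derived directly from the directory layout. A minimal sketch (the gesture names in the usage note are invented examples):

```python
from pathlib import Path

def build_label_map(train_dir: str) -> dict[int, str]:
    """Map class indices to gesture labels, taking the sorted
    sub-directory names of the training directory as the classes."""
    classes = sorted(p.name for p in Path(train_dir).iterdir() if p.is_dir())
    return {index: name for index, name in enumerate(classes)}
```

For a training directory containing `hello/`, `thanks/`, and `yes/`, this would produce `{0: "hello", 1: "thanks", 2: "yes"}`. Sorting the names keeps the index assignment stable across runs.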
## Installation

Follow these steps to set up and run the project on your local machine.

Make sure you have the following installed:

- Python 3.10
- pip (the Python package installer)
- TensorFlow or PyTorch (depending on the framework used)
- OpenCV (for video processing)
- Flask or Streamlit (for the web interface)

1. Clone the repository:

   ```bash
   git clone https://github.com/gowtham611/sign_language.git
   cd sign_language
   ```

2. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## Usage

To run the application:

```bash
python example.py
```
## Streamlit Application

The Streamlit app provides the following features:

- **Homepage**: Introduction to the project and instructions for users.
- **Real-Time Recognition**: Recognizes gestures in real time using a webcam.
- **Interactive Games**: Games that involve sign language gestures.
- **Chatbot**: An integrated chatbot for user assistance and interaction.
## Model Architecture

- **Convolutional Neural Network (CNN)**: Used for image-based gesture recognition.
- **Convolutional Layers**: For feature extraction.
- **Pooling Layers**: To reduce spatial dimensions.
- **Fully Connected Layers**: For classification.
- **Transfer Learning**: Pre-trained models such as VGG16 or ResNet can be substituted to avoid training from scratch; this model, however, was trained from scratch.

The model is implemented in the `datacollection.ipynb` file.
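How the convolutional and pooling layers reduce the spatial dimensions can be checked with a little arithmetic. The kernel sizes, strides, and input size below are illustrative assumptions, not the exact architecture used:

```python
def conv_out(size: int, kernel: int, stride: int = 1) -> int:
    """Spatial output size of a conv layer with 'valid' padding."""
    return (size - kernel) // stride + 1

def pool_out(size: int, kernel: int = 2, stride: int = 2) -> int:
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

def trace_shapes(input_size: int, layers: list[tuple[str, int, int]]) -> list[int]:
    """Trace the spatial size through ('conv' | 'pool', kernel, stride) layers."""
    sizes = [input_size]
    for kind, kernel, stride in layers:
        fn = conv_out if kind == "conv" else pool_out
        sizes.append(fn(sizes[-1], kernel, stride))
    return sizes
```

For a hypothetical 64x64 input through two conv (3x3) and pool (2x2) pairs, the spatial size shrinks 64 → 62 → 31 → 29 → 14 before the fully connected layers flatten it for classification.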
## Training

The model was trained from scratch; the training process, including the data augmentation, optimization, and loss-function choices, is implemented alongside the model in `datacollection.ipynb`.
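As a concrete illustration of the data-augmentation step, here is a minimal sketch using brightness jitter and pixel noise. These particular augmentations and their ranges are assumptions, not the ones used in training; note that horizontal flips are often avoided for sign language data, since mirroring a handed gesture can change its meaning.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple augmentations to a float image in [0, 1]:
    random brightness scaling and small Gaussian pixel noise."""
    scale = rng.uniform(0.8, 1.2)               # brightness jitter
    noise = rng.normal(0.0, 0.02, image.shape)  # mild sensor-like noise
    return np.clip(image * scale + noise, 0.0, 1.0)
```

Applying `augment` to each training image with a fresh random draw per epoch gives the model slightly different views of the same gesture, which helps it generalize to varying lighting.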
## Evaluation

The model is evaluated on the test set in `data/test` using metrics such as accuracy, precision, recall, and F1-score.
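For reference, the per-class metrics can be computed directly from the confusion counts. The counts in the usage note are made-up numbers for illustration:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 for one class from its
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, a class with 8 true positives, 2 false positives, and 2 false negatives scores 0.8 on all three metrics; averaging the per-class scores gives the macro-averaged figures.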
## Results

- **Accuracy**: 93%
- **Sample Predictions**: Visual examples of input gestures and their predicted labels, produced by the real-time recognition pipeline.
## Contributing

If you would like to contribute to this project, please follow these guidelines:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Make your changes.
4. Commit your changes (`git commit -am 'Add new feature'`).
5. Push to the branch (`git push origin feature-branch`).
6. Create a new Pull Request.
## License

This project currently does not have a license. If you intend to make your work publicly available and reusable, consider adding an open-source license, for example:

- MIT License (permissive and widely used)
- Apache License 2.0 (also permissive, with an explicit patent grant)
- GPLv3 (requires derivative works to be open-sourced)

Refer to [Choose a License](https://choosealicense.com/) for guidance on selecting an appropriate license for your project.
## References

- TensorFlow Documentation: https://www.tensorflow.org/
- OpenCV Documentation: https://opencv.org/
- Streamlit Documentation: https://streamlit.io/