ajay-vikram/RAMAN-Training

Raman Training for Human Activity Recognition

This repository contains the code used for training and quantizing a compressed MobileNetV1 (MBV1) on the DVSGesture128 dataset.

File Structure

├──lib
│ ├──models                           --> contains the float, quantized and inference models
│ ├──quantization_utils               --> contains the quantization modules
│ ├──utils                            --> arguments parsing and other utilities
│ ├──quantize.py                      --> wrapper for quantization 
│ ├──train.py                         --> wrapper for training 
│ └──run.py                           --> base runner module for training/quantization
├──inference.py                       --> runs the real integer inference
├──main.py                            --> script starting point
├──save.py                            --> saves the quantized weights, biases and scales to CSVs
└──save_model_stat.py                 --> saves the model architecture details as a CSV

Argument Descriptions

Refer to lib/utils/utils.py for all argument details.

  1. --data_dir: the root directory where the data is located.
  2. --proj: the name of the experiment.
  3. --model: the model to train.
  4. --train_epochs: training epochs.
  5. --QAT_epochs: quantization aware training epochs.
  6. --gpu: index of the GPU to use: {0, 1, 2}.
  7. Use --train for training and --quantize for quantizing the model.
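The flags above can be sketched with argparse. This is a minimal illustration of what the parser in lib/utils/utils.py might look like; the defaults and help strings here are assumptions, not the repository's actual values:

```python
# Minimal sketch of the argument parser, mirroring the flags documented above.
# Defaults are placeholders, not the repository's actual values.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="RAMAN training/quantization")
    parser.add_argument("--data_dir", type=str, help="root directory of the dataset")
    parser.add_argument("--proj", type=str, help="experiment name")
    parser.add_argument("--model", type=str, help="model to train, e.g. brevitas_model_16K")
    parser.add_argument("--train_epochs", type=int, default=100,
                        help="float-training epochs (default is a guess)")
    parser.add_argument("--QAT_epochs", type=int, default=20,
                        help="quantization-aware training epochs (default is a guess)")
    parser.add_argument("--gpu", type=int, choices=[0, 1, 2], default=0, help="GPU index")
    # --train and --quantize select the mode; exactly one is normally given.
    parser.add_argument("--train", action="store_true", help="run float training")
    parser.add_argument("--quantize", action="store_true", help="run quantization")
    return parser

args = build_parser().parse_args(
    ["--train", "--proj", "demo", "--model", "brevitas_model_16K", "--gpu", "1"]
)
print(args.train, args.proj, args.gpu)
```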

Model Descriptions

  1. brevitas_model_16K: 2-channel model with 5 DS layers.
  2. brevitas_model_19K: 8-channel model with 5 DS layers.
  3. brevitas_model_70K: 2-channel model with 7 DS layers.
  4. brevitas_model_71K: 8-channel model with 7 DS layers.
  5. brevitas_quant_model: The model used for quantization-aware training (QAT); its structure is derived from the float model.
  6. brevitas_inference_model: The model used during real integer inference.
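The four float models differ only in channel width and number of depthwise-separable (DS) layers. The list above can be summarized as a lookup table; the dict and field names below are illustrative, not taken from the repository:

```python
# Summary of the model variants listed above.
# The dict name and field names are illustrative, not from the repository.
MODEL_CONFIGS = {
    "brevitas_model_16K": {"channels": 2, "ds_layers": 5},
    "brevitas_model_19K": {"channels": 8, "ds_layers": 5},
    "brevitas_model_70K": {"channels": 2, "ds_layers": 7},
    "brevitas_model_71K": {"channels": 8, "ds_layers": 7},
}

def describe(name):
    """Return a one-line description of a model variant."""
    cfg = MODEL_CONFIGS[name]
    return f"{name}: {cfg['channels']}-channel, {cfg['ds_layers']} DS layers"

print(describe("brevitas_model_70K"))
```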

Environment Setup

conda env create -f environment.yml

Running the scripts

Before training:

  1. Set --data_dir in utils.py or via the command line.
  2. Change the file names in main.py.

To train the float model

python main.py --train --proj {proj_name} --model {16K, 19K, 70K, 71K} --data_dir {root data directory}

To quantize the float model

python main.py --quantize --proj {proj_name} --model {16K, 19K, 70K, 71K}

To save the quantized weights, biases, and scales as CSVs

python save.py --proj {proj_name}

To save model architecture details as a CSV file

python save_model_stat.py --proj {proj_name} --model {16K, 19K, 70K, 71K}

To run real integer inference and save the activations

  1. Change the root data directory (--data_dir).
  2. Change the input file names in get_dataset() in inference.py.

python inference.py --proj {proj_name} --model {16K, 19K, 70K, 71K} --data_dir {root data directory}
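The full workflow above (train, quantize, export CSVs, run inference) can also be driven from a single Python script. This is a sketch only; PROJ, MODEL, and DATA_DIR are placeholders, and the commands simply mirror the ones documented above:

```python
# Sketch of the end-to-end workflow documented above, driven from Python.
# PROJ, MODEL, and DATA_DIR are placeholders; adjust them for your setup.
import subprocess
import sys

PROJ, MODEL, DATA_DIR = "demo", "16K", "/path/to/data"

steps = [
    [sys.executable, "main.py", "--train", "--proj", PROJ, "--model", MODEL,
     "--data_dir", DATA_DIR],
    [sys.executable, "main.py", "--quantize", "--proj", PROJ, "--model", MODEL],
    [sys.executable, "save.py", "--proj", PROJ],
    [sys.executable, "save_model_stat.py", "--proj", PROJ, "--model", MODEL],
    [sys.executable, "inference.py", "--proj", PROJ, "--model", MODEL,
     "--data_dir", DATA_DIR],
]

for cmd in steps:
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment when running inside the repo
```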
