
DensePose_from_WiFi

This project uses WiFi signals in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence.


Note: This is a fork of the original project DensePose_from_WiFi. This fork includes modifications to run the project with dummy data for testing purposes, along with updated installation instructions for modern environments (including Apple Silicon).

Original Project Description

DensePose From WiFi

Abstract: Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.

Installation

Clone this repository:

git clone https://github.com/superstar1225/DensePose_from_WiFi.git
cd ./DensePose_from_WiFi/

Set up Environment

This project uses a Python virtual environment.

  1. Create a virtual environment:

    python3 -m venv venv
  2. Activate the virtual environment:

    source venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt

    Note for Apple Silicon (M1/M2/M3) Users: The requirements.txt is configured for macOS ARM64 (tensorflow-macos). If you are on a different platform, you may need to adjust the tensorflow requirement.
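    One way to make a single requirements file work across platforms is to use pip environment markers (PEP 508) so the right TensorFlow package is selected automatically. The snippet below is a hedged illustration of that pattern; the package pins are assumptions, not the fork's actual requirements:

    ```
    tensorflow-macos; sys_platform == "darwin" and platform_machine == "arm64"
    tensorflow; sys_platform != "darwin" or platform_machine != "arm64"
    ```

    With markers like these, `pip install -r requirements.txt` installs `tensorflow-macos` only on ARM64 macOS and the standard `tensorflow` wheel everywhere else.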

Usage

1. Generate Dummy Data

If you don't have the original dataset, you can generate dummy data to test the pipeline:

python generate_dummy_data.py

This will create 100 sample .mat files in the data/ directory.
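To see what such dummy samples might contain, the sketch below fabricates CSI-like amplitude and phase tensors with NumPy. The tensor shapes (3×3 transmitter/receiver antenna pairs, 30 subcarriers, 5 consecutive frames, giving a 150×3×3 tensor) follow the setup described in the paper, but they are assumptions about this script's actual output format, not a specification of it:

```python
# Sketch: fabricate one CSI-like dummy sample for pipeline testing.
# Shapes (3x3 antenna pairs, 30 subcarriers, 5 frames) are assumptions
# based on the paper's described WiFi setup, not the dataset spec.
import numpy as np

def make_dummy_sample(rng, n_tx=3, n_rx=3, n_subcarriers=30, n_frames=5):
    """Return random amplitude and phase tensors for one CSI sample."""
    shape = (n_frames * n_subcarriers, n_tx, n_rx)  # e.g. 150 x 3 x 3
    amplitude = rng.random(shape).astype(np.float32)
    phase = rng.uniform(-np.pi, np.pi, shape).astype(np.float32)
    return amplitude, phase

rng = np.random.default_rng(0)
amp, ph = make_dummy_sample(rng)
```

Arrays like these can then be written to `.mat` files with `scipy.io.savemat` (the variable names the loader expects are another assumption to check against `generate_dummy_data.py`).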

2. Run the Model

Run the CNN+RNN model:

python "Deep Learning/CNN_RNN.py" 0

(The 0 argument specifies which GPU index to use; if no GPU is available, the model falls back to the default device.)
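A common way an entry point consumes a GPU-index argument like this is to map it to `CUDA_VISIBLE_DEVICES` before the deep learning framework initializes. The snippet below is a minimal sketch of that pattern; `select_device` is a hypothetical helper, and the actual `CNN_RNN.py` may handle the argument differently:

```python
# Sketch of GPU selection via a command-line index argument.
# select_device is a hypothetical helper, not code from this repository.
import os
import sys

def select_device(argv):
    """Map an optional GPU-index argument to CUDA_VISIBLE_DEVICES.

    A missing or negative index leaves the framework on its
    default device.
    """
    if len(argv) > 1 and int(argv[1]) >= 0:
        os.environ["CUDA_VISIBLE_DEVICES"] = argv[1]
        return f"gpu:{argv[1]}"
    return "default"

# Emulate `python "Deep Learning/CNN_RNN.py" 0`
device = select_device(["CNN_RNN.py", "0"])
```

Setting the environment variable before importing TensorFlow restricts which GPUs the framework can see, which is why scripts typically read the index at the very top of the file.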
