
Hi there, I'm Tony Yang πŸ‘‹

Website Google Scholar Email

🧠 Researcher by day, AI tinkerer by night
πŸš€ Building cool stuff with AI β€” always under construction
πŸ’‘ Got an idea? Let's make it happen!

🎯 What I'm Up To

  • πŸ”¨ Building: AI-powered applications that actually solve problems (or at least try to)
  • πŸ§ͺ Experimenting: Breaking things in the name of science
  • 🌱 Learning: Whatever tech catches my eye this week
  • 🀝 Open to: Collabs, ventures, crazy ideas β€” hit me up!

"Move fast and build things" β€” me, probably


🧩 Featured Open Source

  • Scholar High Lights β€” Open-source Google Scholar extension for highlighting and organizing research papers. GitHub

πŸ”¬ Featured Research

Through the Eyes of Emotion

A Multi-faceted Eye Tracking Dataset for Emotion Recognition in Virtual Reality

Paper Code Dataset

Tongyun Yang†, Bishwas Regmi†, Lingyu Du, Andreas Bulling, Xucong Zhang, Guohao Lan
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2025

A comprehensive eye-tracking dataset collected from 26 participants in VR, combining high-frame-rate periocular videos (120 fps) with high-frequency gaze data (240 Hz) to enable accurate, multimodal recognition of Ekman's seven basic emotions.

Key Contributions:

  • First dataset with high-frame-rate periocular videos capturing micro-expressions in VR
  • 4Γ— higher frequency eye-tracking data compared to existing datasets
  • Open-source Unity-based data collection and Label Studio annotation tools

Pruning nnU-Net with Minimal Performance Loss

Paper Code

Tongyun Yang, Yidong Zhao, Qian Tao
Medical Imaging with Deep Learning (MIDL), 2025

Demonstrates that trained nnU-Net models contain substantial weight redundancy: over 80% of weights can be removed through simple magnitude-based pruning while maintaining a proxy Dice score of >0.95 across multiple medical segmentation tasks.

Key Findings:

  • 80%+ weight reduction with minimal performance loss
  • Applicable to both 2D and 3D nnU-Net configurations
  • Critical weights concentrate near encoder/decoder ends; bottleneck layers can be heavily pruned
  • Validated across four different medical image segmentation datasets

πŸ“š Other Publications

Paper: Reverse Imaging: Any-Sequence Generalization for Cardiac MRI Segmentation
Venue: MICCAI 2025 & IEEE TMI
Links: Paper · Code

πŸ› οΈ Tech Stack

Python PyTorch Unity C++ CUDA


πŸ“Š GitHub Stats

GitHub Stats


Interested in AI for healthcare, embedded systems, or multimodal sensing? Let's connect!
