DToMA: Training-free Dynamic Token MAnipulation for Long Video Understanding

DToMA is a training-free method designed to enhance the efficiency and comprehension capabilities of Video Large Language Models (VideoLLMs) in long video understanding tasks. Inspired by human cognitive reasoning processes, DToMA dynamically manipulates visual tokens across three stages (shallow, intermediate, and deep), leading to significant computational savings without sacrificing performance.

🚀 Updates

(2025.04.28) DToMA has been accepted by IJCAI 2025! We will upload the code as soon as possible.

πŸ” Key Features

  • Training-free approach – No need to fine-tune models.
  • Generalizes across architectures – Works with various VideoLLM backbones.
  • Three-stage reasoning optimization – Tailored to mimic human cognition.
  • Efficiency gains – Up to 70% reduction in visual tokens with minimal or no loss in performance.
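Since the official code is not yet released, the following is only a rough, hypothetical sketch of what saliency-based visual token pruning (the kind of manipulation a "70% reduction in visual tokens" implies) can look like. The function name, score source, and keep ratio are illustrative assumptions, not DToMA's actual implementation:

```python
import numpy as np

def prune_visual_tokens(tokens, scores, keep_ratio=0.3):
    """Keep the top `keep_ratio` fraction of visual tokens by saliency.

    tokens: (N, D) array of visual token embeddings.
    scores: (N,) per-token importance, e.g. text-to-visual attention
            (hypothetical source; the actual criterion may differ).
    Returns the kept tokens in their original temporal order.
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Select the n_keep highest-scoring tokens, then re-sort the
    # indices so temporal order is preserved for the VideoLLM.
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return tokens[keep_idx]

# Toy example: 10 tokens, keep 30% -> 3 tokens survive.
tokens = np.arange(10, dtype=np.float32).reshape(10, 1)
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.1, 0.6, 0.2])
kept = prune_visual_tokens(tokens, scores, keep_ratio=0.3)
```

A per-layer variant of this idea, with different keep ratios at shallow, intermediate, and deep layers, would match the three-stage design described above.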
