DToMA is a training-free method designed to enhance the efficiency and comprehension capabilities of Video Large Language Models (VideoLLMs) on long video understanding tasks. Inspired by human cognitive reasoning processes, DToMA dynamically manipulates visual tokens across three stages (shallow, intermediate, and deep), yielding significant computational savings without sacrificing performance.
(2025.04.28) DToMA has been accepted to IJCAI 2025! We will release the code as soon as possible.
- Training-free approach: no need to fine-tune models.
- Generalizes across architectures: works with various VideoLLM backbones.
- Three-stage reasoning optimization: tailored to mimic human cognition (see the sketch after this list).
- Efficiency gains: up to a 70% reduction in visual tokens with minimal or no loss in performance.
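Until the code is released, the following is a minimal sketch of the general idea, not DToMA's actual implementation: visual tokens are scored by importance (e.g., the attention they receive from text tokens), and a layer-depth-dependent fraction is kept, pruning more aggressively in deeper stages. The stage boundaries, keep ratios, and all function names below are illustrative assumptions.

```python
import torch

# Hypothetical per-stage keep ratios; the actual DToMA schedule may differ.
STAGE_KEEP_RATIO = {"shallow": 1.0, "intermediate": 0.6, "deep": 0.3}

def stage_of(layer_idx: int, num_layers: int) -> str:
    """Map a transformer layer index to one of the three reasoning stages."""
    if layer_idx < num_layers // 3:
        return "shallow"
    if layer_idx < 2 * num_layers // 3:
        return "intermediate"
    return "deep"

def reduce_visual_tokens(tokens: torch.Tensor, scores: torch.Tensor,
                         keep_ratio: float) -> torch.Tensor:
    """Keep the highest-scoring fraction of visual tokens, preserving order.

    tokens: (num_tokens, dim) visual token embeddings
    scores: (num_tokens,) importance scores, e.g. attention mass received
            from text tokens in the preceding layer
    """
    num_keep = max(1, int(tokens.shape[0] * keep_ratio))
    keep_idx = scores.topk(num_keep).indices.sort().values  # keep temporal order
    return tokens[keep_idx]

# Toy usage: prune a bank of 1024 visual tokens as if at a deep layer.
tokens = torch.randn(1024, 4096)
scores = torch.rand(1024)  # stand-in for text-to-visual attention scores
ratio = STAGE_KEEP_RATIO[stage_of(layer_idx=28, num_layers=32)]
print(reduce_visual_tokens(tokens, scores, ratio).shape)  # torch.Size([307, 4096])
```

In a real VideoLLM, the scores would come from the model's own attention maps rather than random values, and the reduction would hook into each decoder layer's forward pass; since no gradients are involved, the scheme stays training-free.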