
DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World

[🏠 DenseWorld-1M] [📜 arXiv] [🤗 HuggingFace] [🧑‍💻 GitHub]

Xiangtai Li1* · Tao Zhang1*† · Yanwei Li1* · Zilong Huang1 · Haobo Yuan2 · Yikang Zhou2 · Shihao Chen2 ·
Jiahao Meng3 · Yueyi Sun3 · Shilin Xu3 · Lu Qi1 · Yi Lin1 · Wenhao Huang1 · Jiashi Feng1 · Guang Shi1

1Bytedance Seed    2Wuhan University    3Peking University    

† Project lead. * The first three authors contributed equally to this work.

[Figure: Teaser]

[Figure: Comparison]

[Figure: Pipeline]

Introduction

Multimodal Large Language Models (MLLMs) achieve complex scene understanding by benefiting from large-scale, high-quality datasets. However, most existing caption datasets lack grounded locations and relations for visual entities, and the few grounded caption datasets that do exist still miss detailed descriptions, inter-object relations, and dense object-level descriptions for high-resolution images. To fill this gap for the community, we present DenseWorld-1M, the first massive, detailed, dense grounded caption dataset of real-world images. We design a three-stage labeling pipeline consisting of open-world perception, detailed object caption generation, and dense caption merging. The first stage obtains entity-level masks and labels. The second stage generates detailed object-level captions guided by the masks and labels from the first stage. The final stage merges the object captions and masks into spatial and relational dense captions. To accelerate the labeling process and improve caption quality, we present two VLM models: the Detailed Region Caption model and the Spatial Caption Merging model. Extensive experiments on various settings, including vision-language understanding, visual grounding, and region caption generation, demonstrate the effectiveness of our DenseWorld-1M dataset and labeling models.
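The sketch below illustrates how the three stages could be wired together for a single image. All names in it (the Entity record and the perceive/describe/merge callables) are hypothetical placeholders standing in for the open-world perception stage, the Detailed Region Caption model, and the Spatial Caption Merging model; the released code may organize these stages differently.

```python
# A minimal structural sketch of the three-stage labeling pipeline described above.
# Every name below is a hypothetical placeholder, not the released implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Entity:
    label: str          # open-vocabulary category name from stage 1
    mask: object        # binary segmentation mask (e.g., an HxW array)
    caption: str = ""   # detailed description filled in by stage 2


def label_image(
    image: object,
    perceive: Callable[[object], List[Entity]],     # stage 1: open-world perception
    describe: Callable[[object, Entity], str],      # stage 2: Detailed Region Caption model
    merge: Callable[[object, List[Entity]], str],   # stage 3: Spatial Caption Merging model
) -> dict:
    """Run the three annotation stages on one image and return a dataset record."""
    # Stage 1: obtain entity-level masks and labels.
    entities = perceive(image)

    # Stage 2: generate a detailed caption per object, guided by its mask and label.
    for entity in entities:
        entity.caption = describe(image, entity)

    # Stage 3: merge object captions and masks into a spatial/relational dense caption.
    dense_caption = merge(image, entities)

    return {
        "entities": [
            {"label": e.label, "mask": e.mask, "caption": e.caption} for e in entities
        ],
        "dense_caption": dense_caption,
    }
```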

Visual Results

[Figure: Visual results]

News

  • The technical report is available on arXiv.

To Do List

We are cleaning the dataset, and the open-sourcing procedure is still under review.

We will open-source the entire DenseWorld-1M dataset on Hugging Face before the end of July; a loading sketch follows the list below.

  • Release training code for the different models.
  • Release the dataset.
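Once the dataset is published on Hugging Face, loading it should look roughly like the sketch below. The repository id and split name are assumptions for illustration and may differ from the final release.

```python
# Hypothetical loading sketch; the repo id "lxtGH/DenseWorld-1M" and the split name
# are assumptions and may differ once the dataset is actually published.
from datasets import load_dataset

ds = load_dataset("lxtGH/DenseWorld-1M", split="train")  # assumed repo id and split
print(ds[0].keys())  # inspect the released fields (images, masks, captions, ...)
```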

References

If you find this repository useful, please consider citing the following paper:

@misc{li2025denseworld1m,
      title={DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World}, 
      author={Xiangtai Li and Tao Zhang and Yanwei Li and Haobo Yuan and Shihao Chen and Yikang Zhou and Jiahao Meng and Yueyi Sun and Shilin Xu and Lu Qi and Tianheng Cheng and Yi Lin and Zilong Huang and Wenhao Huang and Jiashi Feng and Guang Shi},
      year={2025},
      eprint={2506.24102},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.24102}, 
}
