---
license: apache-2.0
---


# VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos

[Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Guangyao Shi](https://guangyaoshi.github.io/), [Hongyang Du](https://www.linkedin.com/in/hongyangdu/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Tianyi Zhou](https://tianyizhou.github.io/), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)

[[📖 Paper](https://github.com/zli12321/VideoHallu/blob/main/paper.pdf)] [[🤗 Dataset](https://huggingface.co/datasets/zli12321/VideoHalluB)] [[🌍 Website](https://smashedpython.github.io/videohallu.github.io/)]



## 👀 About VideoHallu

With the recent success of video generation models such as [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), and [Kling](https://www.klingai.com/global/), the visual quality of generated videos has reached new heights, making evaluation more challenging and pushing it beyond traditional metrics such as frame consistency, resolution, and realism. Yet we find that MLLMs struggle to detect abnormalities in generated videos, a capability that is crucial for developing reliable automatic video evaluation methods.

We introduce VideoHallu, a curated dataset of videos generated by seven video generation models, paired with a question-answer set that tests MLLMs' ability to catch abnormalities in the generated videos.

We also use GRPO to train [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on a subset of our dataset and show improved understanding of generated videos.


## 🔥 News
- [2025/05/02] We release our dataset on Hugging Face 🤗.

## 🔍 Dataset

To facilitate GRPO training, we randomly sample 1,000 videos from the [PhysBench](https://huggingface.co/datasets/WeiChow/PhysBench-train) training data to first improve the model's reasoning abilities on real-world videos, then train the model on a subset of our synthetic videos.
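As a rough illustration, here is a schematic sketch of this GRPO stage using TRL's `GRPOTrainer`, not our actual training code: the `prompt`/`answer` column names and the exact-match reward are illustrative assumptions, and the real reward design, video preprocessing, and hyperparameters differ.

```
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Repo id taken from the download command in "Getting Started" below.
# GRPOTrainer expects a dataset with a "prompt" column (assumed here).
train_ds = load_dataset("IntelligenceLab/VideoHallu", split="train")

def accuracy_reward(completions, answer, **kwargs):
    # Toy verifiable reward: 1.0 when the generated answer matches the
    # reference answer, 0.0 otherwise. Extra dataset columns (here,
    # an assumed "answer" column) are passed to reward functions as kwargs.
    return [float(c.strip().lower() == a.strip().lower())
            for c, a in zip(completions, answer)]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    reward_funcs=accuracy_reward,
    args=GRPOConfig(output_dir="qwen2.5-vl-7b-videohallu-grpo"),
    train_dataset=train_ds,
)
trainer.train()
```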

Our data spans the following categories:

<img src="./images/fig1.png" style="zoom:35%;" />


## Getting Started

```
# Install the Hugging Face Hub CLI
pip install huggingface_hub

# Download the dataset to a local directory
huggingface-cli download IntelligenceLab/VideoHallu --repo-type dataset --local-dir ./new_video_folders --local-dir-use-symlinks False
```
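Alternatively, a minimal sketch of loading the data with the `datasets` library; the split name and field layout are assumptions, so inspect the loaded dataset for the actual schema:

```
from datasets import load_dataset

# Load the dataset directly from the Hub (same repo id as the CLI command above).
ds = load_dataset("IntelligenceLab/VideoHallu", split="train")  # split name is an assumption

print(ds)     # columns and row count
print(ds[0])  # one example (e.g., video reference, question, answer)
```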


## The Dawn of MLLMs in Synthetic Videos 🧠 
---

<!-- 🐦 Quail to Rooster -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">

<details open>
<summary><strong>🎬 Video:</strong> Quail Transforming into Rooster</summary>

<p><strong>Prompt (Sora):</strong> Generate a quail and a rooster celebrating New Year.</p>

<p align="center" style="margin: 0;">
  <img src="images/rooster.gif" width="400"/>
  <img src="images/131021746146018_.pic.jpg" width="500"/>
</p>

</details>
</div>

---

<!-- 🪶 Feather vs. Rock -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">
  
<details open>
<summary><strong>🎬 Video:</strong> Object Falling and Law of Physics</summary>
<p><strong>Prompt (Veo2):</strong> A feather and a heavy rock are released at the same height and begin to fall to the ground on Earth.</p>
<p align="center" style="margin: 0;">
  <img src="images/feather_veo2.gif" width="400"/>
  <img src="images/130281746130630_.pic.jpg" width="500"/>
</p>
</details>
</div>

---

<!-- 🍷 Wine Drinking -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">
<details open>
<summary><strong>🎬 Video:</strong> Object Contact Abnormalities</summary>
<p><strong>Prompt (Sora):</strong> Generate a man drinking up a cup of wine.</p>
<p align="center" style="margin: 0;">
  <img src="images/man_drinking_wine.gif" width="500"/>
  <img src="images/130291746131015_.pic.jpg" width="600"/>
</p>
</details>
</div>

---

<!-- 🍉 Bullet and Watermelon -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">
<details open>
<summary><strong>🎬 Video:</strong> Breaking Process</summary>
<p><strong>Prompt (Sora):</strong> Generate the sequence showing a bullet being shot into a watermelon.</p>
<p align="center" style="margin: 0;">
  <img src="images/watermelon_explode-ezgif.com-video-to-gif-converter.gif" width="400"/>
  <img src="images/133151746288503_.pic.jpg" width="500"/>
</p>
</details>
</div>



## Acknowledgements

We sincerely appreciate the contributions of the open-source community. Related projects: [R1-V](https://github.com/Deep-Agent/R1-V), [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), [Video-R1](https://github.com/tulerfeng/Video-R1), and [Qwen-2.5-VL](https://arxiv.org/abs/2502.13923).

## Citations

If you find our work helpful for your research, please consider citing it.

```
@article{feng2025video,
  title={Video-R1: Reinforcing Video Reasoning in MLLMs},
  author={Feng, Kaituo and Gong, Kaixiong and Li, Bohao and Guo, Zonghao and Wang, Yibing and Peng, Tianshuo and Wang, Benyou and Yue, Xiangyu},
  journal={arXiv preprint arXiv:2503.21776},
  year={2025}
}
```