# VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos

[Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Hongyang Du](https://smashedpython.github.io/HongyangDu.github.io/), [Guangyao Shi](https://guangyaoshi.github.io/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Tianyi Zhou](https://tianyizhou.github.io/), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)

[[📖 Paper](https://arxiv.org/abs/2505.01481)] [[🤗 Dataset](https://huggingface.co/datasets/zli12321/VideoHalluB)] [[🌐 Website](https://wuxiyang1996.github.io/videohallu_page/)]

<img src="./images/teaser.png" style="zoom:20%;" />

## 👀 About VideoHallu

Synthetic video generation with foundation models has gained significant attention for its realism and broad applications. While these models produce visually coherent, high-quality frames, they often violate commonsense and physical laws, yielding abnormal content. Existing score-based evaluations such as [VideoScore](https://arxiv.org/abs/2406.15252) focus on general video quality, do not account for these abnormalities, and offer no explanation of their results. A more promising approach is to use multi-modal large language models (MLLMs) as interpretable video evaluators, following [FActScore](https://arxiv.org/abs/2305.14251). However, how well MLLMs detect such abnormalities in synthetic videos remains underexplored.

Motivated by the need for more interpretable video generation evaluation, we introduce VideoHallu, a benchmark built from synthetic videos produced by popular models such as [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), and [Kling](https://www.klingai.com/global/), paired with expert-crafted question-answer pairs that humans can easily solve with basic perception and reasoning, spanning multiple categories. We evaluate several state-of-the-art (SoTA) MLLMs on our benchmark, including [GPT-4o](https://openai.com/index/hello-gpt-4o/), [Gemini-2.5-Pro](https://deepmind.google/technologies/gemini/pro/), [Qwen-2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), and recent reasoning models such as [Video-R1](https://github.com/tulerfeng/Video-R1) and [VideoChat-R1](https://github.com/OpenGVLab/VideoChat-R1). Despite their strong performance on real-world video benchmarks such as [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench) and [MovieChat](https://github.com/rese1f/MovieChat), these models still struggle and hallucinate on basic commonsense and physics reasoning over synthetic videos, highlighting synthetic video hallucination as an underexplored challenge.

Moreover, we post-train a current SoTA MLLM, [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), with [Group Relative Policy Optimization (GRPO)](https://arxiv.org/abs/2501.12948) on both real-world and synthetic commonsense/physics data. The post-trained model improves overall accuracy over the base model and achieves the highest performance among all evaluated models, highlighting the importance of high-quality counterexamples for strengthening commonsense and physics reasoning in MLLMs' language priors.

## 🔥 News
- [2025/05/02] We expand our dataset with more QA pairs 🤗.
- [2025/05/02] We release our [datasets](https://huggingface.co/datasets/IntelligenceLab/VideoHallu) 🤗.
- [2025/05/02] We release our GRPO free-form [RewardModel](https://huggingface.co/IntelligenceLab/RewardPreferenceBert) 🤗.

## Table of Contents
* [Benchmark](#benchmark)
* [Getting Started](#setup)
* [The Dawn of MLLMs in Synthetic Videos](#showcase)
* [Evaluation over SoTA MLLMs](#evaluation)
* [Reward Model](#rb)
* [Training](#training)
* [Fine-tuning Results](#evaluation_ft)
* [Acknowledgements](#ak)
* [Citations](#citations)

## 🔍 <a name='benchmark'></a>Benchmark

We design VideoHallu around four question categories that probe hallucinations in synthetic video understanding, ordered by the level of reasoning MLLMs need to answer them, from perceptual understanding to high-level abstract reasoning:
* **Alignment** checks whether the model correctly identifies and understands entities using visual and textual cues.
* **Spatial-temporal Consistency** examines whether the model can track entity motion across frames.
* **Common Sense Reasoning** tests whether the model can reason based on its knowledge.
* **Physics** assesses whether the model applies physical laws to entity motion and procedural understanding.

Each question may also belong to multiple sub-categories, depending on the aspects it targets. Detailed annotations and sub-category breakdowns are available [here](https://huggingface.co/datasets/zli12321/VideoHalluB); a minimal loading sketch follows the table below:

| Updated on | HuggingFace | Dataset Size |
|-------------|:------------------------------------------------:|:------------:|
| May 2, 2025 | [HuggingFace](https://huggingface.co/datasets/zli12321/VideoHalluB) | 3233 |
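
To quickly inspect the QA annotations, the sketch below loads the Hugging Face repo with the `datasets` library. This is only a minimal example under the assumption that the repo loads with the default configuration; print the column names to discover the actual schema rather than relying on any field names here.

```python
# Minimal sketch: inspect the VideoHallu QA annotations.
# Assumes the repo loads with its default configuration; adjust if it does not.
from datasets import load_dataset

ds = load_dataset("zli12321/VideoHalluB")   # QA annotation repo from the table above
split = next(iter(ds.values()))             # first available split
print(split.column_names)                   # discover the real schema
print(split[0])                             # peek at one QA example
```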

Below is an overview of our benchmark's organization, with the high-level question categories ranked by the level of reasoning they require from MLLMs, along with their sub-category breakdowns.

<img src="./images/fig1.png" style="zoom:20%;" />

## 📖 <a name='setup'></a>Getting Started

To set up our benchmark, follow the steps below:

```
# Download the synthetic dataset
pip install huggingface_hub

# Download data to your local dir
huggingface-cli download IntelligenceLab/VideoHallu --repo-type dataset --local-dir ./new_video_folders --local-dir-use-symlinks False

# Download the PhysBench training videos
curl -L -o video.part1.rar https://huggingface.co/datasets/WeiChow/PhysBench-train/resolve/main/video.part1.rar

# Unzip the data (Linux)
unrar x video.part1.rar
```
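
If you prefer to script the download instead of using the CLI, the sketch below does the same thing with the `huggingface_hub` Python API; the local directory name is just an example matching the command above.

```python
# Minimal sketch: download the VideoHallu videos programmatically.
# Equivalent to the huggingface-cli command above; local_dir is an example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="IntelligenceLab/VideoHallu",
    repo_type="dataset",
    local_dir="./new_video_folders",   # same target dir as the CLI example
)
print(f"Dataset downloaded to: {local_path}")
```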

## <a name='showcase'></a>🧠 The Dawn of MLLMs in Synthetic Videos

We present selected cases from our SoTA MLLM evaluations in each category. Hallucinations in model answers, commonsense or physics violations in the videos, and other notable cues in the video, questions, or ground truth are highlighted to aid the reader's understanding. More examples can be found in the Appendix of [our paper](https://arxiv.org/abs/2505.01481).

**Note:** The legend below explains the symbols used to represent the State-of-the-Art (SoTA) MLLMs featured in our showcases for synthetic video generation and video question-answering.
<p align="center">
<img src="images/legend.png" width="700"/>
</p>

### Alignment
**🗣️ Video Generation Prompt:** A young male athlete is playing basketball on an outdoor court, performing impressive dribbling and slam dunks.

**🎬 Synthetic Video:**

<p align="center">
<img src="images/alignment.gif" width="700"/>
</p>

**🤖 Video Question-Answering by MLLMs:**
<p align="center">
<img src="./images/alignment.png" width="700" />
</p>

### Spatial-temporal Consistency
**🗣️ Video Generation Prompt:** Generate a quail and a rooster celebrating New Year.

**🎬 Synthetic Video:**
<p align="center">
<img src="images/rooster.gif" width="700"/>
</p>

**🤖 Video Question-Answering by MLLMs:**
<p align="center">
<img src="./images/STC.png" width="700" />
</p>

### Common Sense Reasoning
**🗣️ Video Generation Prompt:** A feather and a heavy rock are released at the same height and begin to fall to the ground on Earth.

**🎬 Synthetic Video:**
<p align="center">
<img src="images/feather_veo2.gif" width="700"/>
</p>

**🤖 Video Question-Answering by MLLMs:**
<p align="center">
<img src="./images/CSR.png" width="700" />
</p>

### Physics
**🗣️ Video Generation Prompt:** Generate the sequence showing a bullet being shot into a watermelon.

**🎬 Synthetic Video:**
<p align="center">
<img src="images/watermelon_explode-ezgif.com-video-to-gif-converter.gif" width="700"/>
</p>

**🤖 Video Question-Answering by MLLMs:**
<p align="center">
<img src="./images/P.png" width="700" />
</p>

## <a name='evaluation'></a>📊 Evaluation over SoTA MLLMs
We evaluate diverse SoTA models across sizes and training strategies, reporting both overall and sub-category accuracies. Qwen2.5-VL-32B achieves the highest overall performance among all models.
<p align="center">
<img src="images/all_results.png" style="zoom:20%;" />
</p>

We also break down results on VideoHallu by sub-category. From left to right: (a) models under 7B parameters; (b) models between 7B and 38B; (c) R1 fine-tuned models; and (d) large black-box MLLMs. While many models perform well on alignment tasks, they remain prone to hallucinations on reasoning-heavy tasks, with notably weaker performance on physics and commonsense reasoning.
<p align="center">
<img src="./images/all_radar.png" style="zoom:20%;" />
</p>
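
For reference, the numbers above are plain accuracies aggregated overall and per category. The sketch below shows that aggregation on judged answers; the record fields (`category`, `correct`) are hypothetical placeholders, not the benchmark's actual schema.

```python
# Hedged sketch: aggregate overall and per-category accuracy from judged answers.
# The record fields ("category", "correct") are hypothetical, not the repo's schema.
from collections import defaultdict

def accuracy_by_category(records):
    """records: iterable of dicts like {"category": str, "correct": bool}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        hits[r["category"]] += int(r["correct"])
    overall = sum(hits.values()) / max(sum(totals.values()), 1)
    per_category = {c: hits[c] / totals[c] for c in totals}
    return overall, per_category

# Toy example:
demo = [{"category": "Physics", "correct": False},
        {"category": "Alignment", "correct": True}]
print(accuracy_by_category(demo))
```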
 
 
## 🏅 <a name='rb'></a>Reward Model
We use [ModernBERT](https://huggingface.co/docs/transformers/en/model_doc/modernbert) as the base model and fine-tune it on [MOCHA](https://arxiv.org/abs/2010.03636), [Prometheus-preference](https://huggingface.co/datasets/prometheus-eval/Preference-Collection), and [Pedants](https://arxiv.org/abs/2402.11161) to evaluate free-form text generations. We use the resulting RewardBert as the reward in GRPO fine-tuning.

#### Method: `compute_score`
**Parameters**
- `reference_answer` (list of str): A list of gold (correct) answers to the question
- `candidate_answer` (str): The candidate answer to be evaluated

**Returns**
- `tuple`: A tuple of the normalized score and the raw score.

```python
from qa_metrics.RewardBert import RewardBert

rb = RewardBert(device='cuda')
reference_answer = "The Frog Prince"
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
rb.compute_score(reference_answer, candidate_answer)
# (0.29113227128982544, 2.1645290851593018)
```
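
To illustrate how RewardBert can plug into GRPO fine-tuning, here is a hedged sketch of a reward function that scores a batch of completions against reference answers using the normalized score. The wrapper name and batching convention are ours, not the interface of the Video-R1 training code.

```python
# Hedged sketch: use RewardBert's normalized score as a GRPO reward.
# The list-in/list-out wrapper shape is an assumption, not the exact hook
# expected by the Video-R1 training code.
from qa_metrics.RewardBert import RewardBert

rb = RewardBert(device='cuda')

def rewardbert_rewards(completions, references):
    """Return one scalar reward per (completion, reference) pair."""
    rewards = []
    for completion, reference in zip(completions, references):
        normalized_score, _raw_score = rb.compute_score(reference, completion)
        rewards.append(normalized_score)
    return rewards

# Example:
# rewardbert_rewards(["The rock lands first"], ["The rock hits the ground first"])
```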

## 🚀 <a name='training'></a>Training Setup

We adopt the [Video-R1](https://github.com/tulerfeng/Video-R1) training code to fine-tune the model.

Use our formatted JSON files ([synthetic_data_split.json](https://github.com/zli12321/VideoHallu/blob/main/Data/synthetic_data_split.json) and [physbench_train_split.json](https://github.com/zli12321/VideoHallu/blob/main/Data/physbench_train_split.json)) and follow the Video-R1 setup to train a model; a sketch for arranging the splits into a curriculum is shown below.
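
The sketch below merges the two split files in the curriculum order used in our experiments (real-world PhysBench data first, then synthetic VideoHallu data). It assumes each file is a JSON list of training examples, which may not match the exact Video-R1 format, so treat it as a starting point.

```python
# Hedged sketch: build a curriculum-ordered training file from the two splits.
# Assumes each split is a JSON list of example dicts; adapt to the Video-R1 format.
import json

def load_split(path):
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Real-world physics examples first, synthetic VideoHallu examples second.
physbench = load_split("Data/physbench_train_split.json")
synthetic = load_split("Data/synthetic_data_split.json")
curriculum = physbench + synthetic

with open("Data/curriculum_train.json", "w", encoding="utf-8") as f:
    json.dump(curriculum, f, ensure_ascii=False, indent=2)

print(f"{len(physbench)} real-world + {len(synthetic)} synthetic examples")
```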

## 📊 <a name='evaluation_ft'></a>Fine-tuning Results
We evaluate models fine-tuned on either domain-specific sub-datasets or curriculum-based composite datasets. Models trained only on general real-world videos show little to no gain on synthetic video understanding; incorporating general physics data improves physics reasoning; and a curriculum that starts with real-world physics data and then moves to synthetic data yields a 2.8% performance boost.
<p align="center">
<img src="images/ft_results.png" style="zoom:20%;" />
</p>

We show results for (a) previous SoTA MLLMs, (b) models fine-tuned on sub-datasets, and (c) models fine-tuned on the full dataset via curriculum learning. Compared to the baseline (Qwen2.5-VL-7B), reinforcement fine-tuning on commonsense and physics data improves the models' reasoning and overall performance on synthetic video understanding.
<p align="center">
<img src="images/ft_radar.png" style="zoom:20%;" />
</p>

## <a name='ak'></a>Acknowledgements

We sincerely appreciate the contributions of the open-source community. Related projects include [R1-V](https://github.com/Deep-Agent/R1-V), [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), [Video-R1](https://github.com/tulerfeng/Video-R1), and [Qwen-2.5-VL](https://arxiv.org/abs/2502.13923).

## <a name='citations'></a>Citations

If you find our work helpful for your research, please consider citing it.