[Dataset viewer preview: an image column (width 3840 px) and a class-label column spanning 7 scene classes: 0001_fencing, 1001_tagging, 2001_tennis, 3001_volleyball, 4002_basketball, 5002_legoassemble, 6029_badminton.]

Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs

Dataset Card for All-Angles Bench

Dataset Description

All-Angles Bench is a benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.

Dataset Sources

  • EgoHumans - Egocentric multi-view benchmark for multi-human activity understanding
  • Ego-Exo4D - Large-scale dataset of skilled human activities captured from egocentric and exocentric views

Usage

from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, licensing restrictions require that you first sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/. After signing the license, you can download the dataset and use the preprocessing scripts provided in our GitHub repository to extract the corresponding images.
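
After loading, a quick way to sanity-check the data is to inspect a single record. The sketch below assumes the "train" split and the "image"/"label" columns shown in the Hub preview; these names are assumptions, so check print(dataset) for the splits and features actually present:

from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

# Assumed split and column names ("train", "image", "label"); run
# print(dataset) to confirm what is actually available.
split = dataset["train"]
print(split.features)        # schema of the loaded split
example = split[0]
print(example["label"])      # class label of the first scene image
example["image"].show()      # decoded PIL image; opens an external viewer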

Dataset Structure

The JSON data contains the following key-value pairs:

Key              Type     Description
index            Integer  Unique identifier for the data entry (e.g. 1221)
folder           String   Directory name where the scene is stored (e.g. "05_volleyball")
category         String   Task category (e.g. "counting")
pair_idx         String   Index of a corresponding paired question (if applicable)
image_path       List     Array of input image paths
question         String   Natural language query about the scene
A/B/C            String   Multiple choice options
answer           String   Correct option label (e.g. "B")
sourced_dataset  String   Source dataset name (e.g. "EgoHumans")
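
As an illustration, the sketch below parses one entry that follows this schema and assembles a multiple-choice prompt from it. The entry shown is made-up sample data, and format_prompt is a hypothetical helper, not part of the released code:

import json

# Hypothetical entry following the schema above (not real benchmark data).
entry = json.loads("""
{
  "index": 1221,
  "folder": "05_volleyball",
  "category": "counting",
  "pair_idx": "1222",
  "image_path": ["05_volleyball/cam01.jpg", "05_volleyball/cam02.jpg"],
  "question": "How many players are visible across all views?",
  "A": "4", "B": "6", "C": "8",
  "answer": "B",
  "sourced_dataset": "EgoHumans"
}
""")

def format_prompt(entry):
    """Assemble a multiple-choice prompt string from one benchmark entry."""
    options = "\n".join(f"{key}. {entry[key]}" for key in ("A", "B", "C"))
    return f"{entry['question']}\n{options}\nAnswer with A, B, or C."

print(format_prompt(entry))    # text prompt for the MLLM
print(entry["image_path"])     # images to pass alongside the prompt
assert entry["answer"] in {"A", "B", "C"}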

Citation

@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}

Acknowledgements

Our framework and code repository build on related work, including EgoHumans, Ego-Exo4D, and VLMEvalKit. We thank the authors for their wonderful work and data.
