license: apache-2.0
Introduction
MRAMG-Bench is a comprehensive multimodal benchmark with six carefully curated English datasets. The benchmark comprises 4,346 documents, 14,190 images, and 4,800 QA pairs, sourced from three domains—Web Data, Academic Papers, and Lifestyle Data. We believe it provides a robust evaluation framework that advances research in Multimodal Retrieval-Augmented Multimodal Generation (MRAMG).
Data Structure
The dataset consists of three major components: Documents, Multimodal QA pairs, and Images. Each component is structured across six different sub-datasets, ensuring a diverse and comprehensive collection of multimodal content.
1. Document Collection
The dataset includes six JSONL files, each corresponding to a different data source:
| File Name | Description | Num |
| --- | --- | --- |
| `doc_wit.jsonl` | MRAMG-Wit documents | 639 |
| `doc_wiki.jsonl` | MRAMG-Wiki documents | 538 |
| `doc_web.jsonl` | MRAMG-Web documents | 1500 |
| `doc_arxiv.jsonl` | MRAMG-Arxiv documents | 101 |
| `doc_recipe.jsonl` | MRAMG-Recipe documents | 1528 |
| `doc_manual.jsonl` | MRAMG-Manual documents | 40 |
Field Definitions
- `id` (int): Unique identifier for the document.
- `content` (str): The main textual content of the document. If an image is referenced, `<PIC>` is used as a placeholder indicating its position in the text.
- `images_list` (list[int]): A list of image IDs associated with the document.
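The snippet below is a minimal sketch of how a document file can be read. It assumes the JSONL files sit at the dataset root, so adjust the path to your local copy.

```python
import json

def load_documents(path="doc_wit.jsonl"):
    """Read one document collection (one JSON object per line)."""
    docs = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                docs.append(json.loads(line))
    return docs

docs = load_documents()
for doc in docs[:3]:
    # Each record has: id (int), content (str, with <PIC> placeholders),
    # and images_list (list[int]).
    n_placeholders = doc["content"].count("<PIC>")
    print(doc["id"], n_placeholders, doc["images_list"])
```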
2. Multimodal QA pairs
The MQA component consists of six JSONL files, each corresponding to a different dataset:
| File Name | Description | Num |
| --- | --- | --- |
| `wit_mqa.jsonl` | MRAMG-Wit multimodal QA pairs | 600 |
| `wiki_mqa.jsonl` | MRAMG-Wiki multimodal QA pairs | 500 |
| `web_mqa.jsonl` | MRAMG-Web multimodal QA pairs | 750 |
| `arxiv_mqa.jsonl` | MRAMG-Arxiv multimodal QA pairs | 200 |
| `recipe_mqa.jsonl` | MRAMG-Recipe multimodal QA pairs | 2360 |
| `manual_mqa.jsonl` | MRAMG-Manual multimodal QA pairs | 390 |
Each entry contains a question ID, a question, provenance documents, a ground truth answer, and a list of image IDs associated with the answer.
Field Definitions
- `id` (str): Unique identifier for the question.
- `question` (str): The question text.
- `provenance` (list[int]): A list of document IDs that serve as supporting evidence for the answer.
- `ground_truth` (str): The correct answer, which may contain `<PIC>` placeholders indicating relevant images.
- `images_list` (list[int]): A list of image IDs directly associated with the answer.
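As a rough illustration, the sketch below reads a QA file and substitutes the answer's `<PIC>` placeholders with its image IDs. It assumes the placeholders appear in the same order as `images_list` and that the file path matches your local layout.

```python
import json

def load_qa(path="wit_mqa.jsonl"):
    """Read one multimodal QA file (one JSON object per line)."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

for qa in load_qa()[:2]:
    answer = qa["ground_truth"]
    # Assumes <PIC> placeholders follow the order of images_list.
    for image_id in qa["images_list"]:
        answer = answer.replace("<PIC>", f"[IMAGE {image_id}]", 1)
    print(qa["id"], qa["question"])
    print("  provenance docs:", qa["provenance"])
    print("  answer:", answer)
```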
3. Image Metadata
All images in the dataset are stored under the directory `IMAGE/images/`. Additionally, metadata about these images is provided in six JSON files, one corresponding to each dataset:
| File Name | Description | Num |
| --- | --- | --- |
| `wit_imgs_collection.json` | Image metadata from MRAMG-Wit | 639 |
| `wiki_imgs_collection.json` | Image metadata from MRAMG-Wiki | 538 |
| `web_imgs_collection.json` | Image metadata from MRAMG-Web | 1500 |
| `arxiv_imgs_collection.json` | Image metadata from MRAMG-Arxiv | 337 |
| `recipe_imgs_collection.json` | Image metadata from MRAMG-Recipe | 8569 |
| `manual_imgs_collection.json` | Image metadata from MRAMG-Manual | 2607 |
Field Definitions
- `id` (int): Unique identifier for the image.
- `image_url` (str): The URL where the image is originally sourced from.
- `image_path` (str): The filename of the image as stored in the dataset.
- `image_caption` (str): A textual description or caption of the image.
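The sketch below shows one way to join this metadata with the image files. It assumes each metadata file is a JSON array of records and that `image_path` is a filename relative to `IMAGE/images/`.

```python
import json
import os

def load_image_metadata(path="wit_imgs_collection.json", image_dir="IMAGE/images"):
    """Load image metadata and resolve each entry to a local file path."""
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)  # assumed to be a list of metadata dicts
    for record in records:
        record["local_path"] = os.path.join(image_dir, record["image_path"])
    return records

for img in load_image_metadata()[:3]:
    print(img["id"], img["local_path"], img["image_caption"][:60])
```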
Contact
If you have any questions or suggestions, please contact [email protected]
Citation Information
If you use this benchmark in your research, please cite it as follows:
@article{yu2025mramg,
title={MRAMG-Bench: A BeyondText Benchmark for Multimodal Retrieval-Augmented Multimodal Generation},
author={Yu, Qinhan and Xiao, Zhiyou and Li, Binghui and Wang, Zhengren and Chen, Chong and Zhang, Wentao},
journal={arXiv preprint arXiv:2502.04176},
year={2025}
}