nielsr (HF Staff) committed
Commit 0b3e076 · verified · 1 Parent(s): 308347a

Add task category and link to paper, code


This PR adds the `image-text-to-text` task category to enhance dataset discoverability. It also links the dataset to the associated paper and code repository, providing users with easy access to related resources.

Files changed (1): README.md (+51 -50)
README.md CHANGED
 
---
license: apache-2.0
task_categories:
- image-text-to-text
---

## Introduction

MRAMG-Bench is a comprehensive multimodal benchmark with six carefully curated English datasets. The benchmark comprises 4,346 documents, 14,190 images, and 4,800 QA pairs, sourced from three domains: Web Data, Academic Papers, and Lifestyle Data. We believe it provides a robust evaluation framework that advances research in Multimodal Retrieval-Augmented Multimodal Generation (MRAMG).

**Paper:** [MRAMG-Bench: A Comprehensive Benchmark for Advancing Multimodal Retrieval-Augmented Multimodal Generation](https://huggingface.co/papers/2502.04176)
**Code:** https://github.com/MRAMG-Bench/MRAMG
 
## **Data Structure**

The dataset consists of three major components: **Documents, Multimodal QA pairs, and Images**.

---

### **1. Document Collection**

The dataset includes **six JSONL files**, each corresponding to a different data source:

| File Name          | Description            | Num  |
| ------------------ | ---------------------- | ---- |
| `doc_wit.jsonl`    | MRAMG-Wit documents    | 639  |
| `doc_wiki.jsonl`   | MRAMG-Wiki documents   | 538  |
| `doc_web.jsonl`    | MRAMG-Web documents    | 1500 |
| `doc_arxiv.jsonl`  | MRAMG-Arxiv documents  | 101  |
| `doc_recipe.jsonl` | MRAMG-Recipe documents | 1528 |
| `doc_manual.jsonl` | MRAMG-Manual documents | 40   |

#### **Field Definitions**

- **`id` (int)**: Unique identifier for the document.
- **`content` (str)**: The main textual content of the document. If an image is referenced, `<PIC>` is used as a placeholder indicating its position in the text.
- **`images_list` (list[int])**: A list of **image IDs** associated with the document.
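
As a usage illustration, here is a minimal Python sketch for reading a document file and interleaving its text with image IDs. It is not part of the dataset: the helper names are ours, and the positional pairing of each `<PIC>` placeholder with the entries of `images_list` is an assumption.

```python
import json

def load_documents(path):
    """Load one document JSONL file (e.g. doc_wit.jsonl) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def split_on_pics(doc):
    """Interleave text segments with image IDs by splitting `content` on <PIC>.
    Assumption: the i-th placeholder refers to images_list[i]."""
    parts = doc["content"].split("<PIC>")
    images = doc["images_list"]
    segments = []
    for i, text in enumerate(parts):
        if text:
            segments.append(("text", text))
        if i < len(images):
            segments.append(("image", images[i]))
    return segments

docs = load_documents("doc_wit.jsonl")
print(split_on_pics(docs[0]))
```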
 
---

### **2. Multimodal QA pairs**

The **MQA component** consists of **six JSONL files**, each corresponding to a different dataset:

| File Name          | Description                      | Num  |
| ------------------ | -------------------------------- | ---- |
| `wit_mqa.jsonl`    | MRAMG-Wit multimodal QA pairs    | 600  |
| `wiki_mqa.jsonl`   | MRAMG-Wiki multimodal QA pairs   | 500  |
| `web_mqa.jsonl`    | MRAMG-Web multimodal QA pairs    | 750  |
| `arxiv_mqa.jsonl`  | MRAMG-Arxiv multimodal QA pairs  | 200  |
| `recipe_mqa.jsonl` | MRAMG-Recipe multimodal QA pairs | 2360 |
| `manual_mqa.jsonl` | MRAMG-Manual multimodal QA pairs | 390  |

Each entry contains **a question ID, a question, provenance documents, a ground truth answer, and a list of image IDs associated with the answer**.

#### **Field Definitions**

- **`id` (str)**: Unique identifier for the question.
- **`question` (str)**: The question text.
- **`provenance` (list[int])**: A list of **document IDs** that serve as supporting evidence for the answer.
- **`ground_truth` (str)**: The correct answer, which may contain `<PIC>` placeholders indicating relevant images.
- **`images_list` (list[int])**: A list of **image IDs** directly associated with the answer.
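
QA entries can then be joined to their supporting documents via `provenance`. A minimal sketch, under the same assumptions as above (`load_jsonl` is an illustrative helper, not a dataset API):

```python
import json

def load_jsonl(path):
    """Load a JSONL file (documents or QA pairs) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Index documents by ID so provenance lookups are O(1).
docs_by_id = {d["id"]: d for d in load_jsonl("doc_wit.jsonl")}

for qa in load_jsonl("wit_mqa.jsonl")[:3]:
    evidence = [docs_by_id[i] for i in qa["provenance"]]  # supporting documents
    print(qa["id"], qa["question"])
    print(f"  {len(evidence)} supporting document(s); answer cites images {qa['images_list']}")
```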
 
---

### **3. Image Metadata**

The dataset contains **a collection of images** stored under the directory:

```
IMAGE/images/
```

Additionally, metadata about these images is provided in **six JSON files**, one corresponding to each dataset:

| File Name                     | Description                      | Num  |
| ----------------------------- | -------------------------------- | ---- |
| `wit_imgs_collection.json`    | Image metadata from MRAMG-Wit    | 639  |
| `wiki_imgs_collection.json`   | Image metadata from MRAMG-Wiki   | 538  |
| `web_imgs_collection.json`    | Image metadata from MRAMG-Web    | 1500 |
| `arxiv_imgs_collection.json`  | Image metadata from MRAMG-Arxiv  | 337  |
| `recipe_imgs_collection.json` | Image metadata from MRAMG-Recipe | 8569 |
| `manual_imgs_collection.json` | Image metadata from MRAMG-Manual | 2607 |

#### **Field Definitions**

- **`id` (int)**: Unique identifier for the image.
- **`image_url` (str)**: The URL from which the image was originally sourced.
- **`image_path` (str)**: The filename of the image as stored in the dataset.
- **`image_caption` (str)**: A textual description or caption of the image.
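
To display the images an answer refers to, the metadata can be turned into an ID-to-path index and each `<PIC>` placeholder substituted in order. The sketch below assumes each metadata JSON holds a list of records with the fields above, that image files resolve under `IMAGE/images/`, and that placeholders pair positionally with `images_list`; the QA entry shown is hypothetical.

```python
import json
from pathlib import Path

def load_image_index(path):
    """Map image ID -> file path under IMAGE/images/ from one metadata JSON."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a list of image records
    return {r["id"]: Path("IMAGE/images") / r["image_path"] for r in records}

def render_answer(ground_truth, images_list, index):
    """Replace each <PIC> in the answer, left to right, with a Markdown image
    link to the corresponding entry of images_list."""
    text = ground_truth
    for img_id in images_list:
        text = text.replace("<PIC>", f"![{img_id}]({index[img_id]})", 1)
    return text

index = load_image_index("wit_imgs_collection.json")
some_id = next(iter(index))  # any real image ID from the index
# Hypothetical QA entry in the documented format:
qa = {"ground_truth": "The landmark appears here: <PIC>", "images_list": [some_id]}
print(render_answer(qa["ground_truth"], qa["images_list"], index))
```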

## Contact

If you have any questions or suggestions, please contact [email protected].

## Citation Information

If you use this benchmark in your research, please cite it as follows:

```
@article{mramgbench2025,
  title={MRAMG-Bench: A Comprehensive Benchmark for Advancing Multimodal Retrieval-Augmented Multimodal Generation},
  journal={arXiv preprint arXiv:2502.04176},
  year={2025}
}
```