ahmedheakl committed on
Commit 057f94d · verified · 1 Parent(s): 10e5e74

Upload 10 files
.gitattributes CHANGED
@@ -34,3 +34,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ assets_hf/ain_can_see.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/AIN.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/Eval_CAMEL.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/qualitative.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/radar_chart.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/toxicity.png filter=lfs diff=lfs merge=lfs -text
+ assets_hf/verify_pipeline.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: mit
language:
- en
- ar
base_model:
- Qwen2-VL-7B
pipeline_tag: image-text-to-text
tags:
- LMM
- Arabic
- OCR
---

<div style="display: flex; align-items: center;">
  <img src="assets_hf/AIN.png" width="10%" alt="logo" style="margin-right: 10px;" />
  <h1 style="margin: 0; font-size: 28px;">AIN: The Arabic INclusive Large Multimodal Model</h1>
</div>

[Ahmed Heakl](https://huggingface.co/ahmedheakl) <sup>*</sup> &nbsp;
[Sara Ghaboura](https://huggingface.co/SLMLAH) <sup>*</sup> &nbsp;
[Omkar Thawakar](https://omkarthawakar.github.io) &nbsp;
[Fahad Shahbaz Khan](https://scholar.google.com/citations?hl=en&user=zvaeYnUAAAAJ) &nbsp;
[Hisham Cholakkal](https://scholar.google.com/citations?hl=en&user=bZ3YBRcAAAAJ) &nbsp;
[Rao M. Anwer](https://scholar.google.com/citations?hl=en&user=_KlvMVoAAAAJ) &nbsp;
[Salman Khan](https://scholar.google.com/citations?hl=en&user=M59O9lkAAAAJ)
<br>
<em><sup>*Equal Contribution</sup></em>

#### **Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE**

[![arXiv](https://img.shields.io/badge/arXiv-2502.00094-3399FF)](https://arxiv.org/abs/2502.00094)
[![Our Page](https://img.shields.io/badge/Visit-Our%20Page-8C7AFF?style=flat)](https://mbzuai-oryx.github.io/AIN/)
[![Github](https://img.shields.io/badge/Visit-Our%20Github-9BEDB9?style=flat)](https://github.com/mbzuai-oryx/AIN)
[![GitHub issues](https://img.shields.io/github/issues/mbzuai-oryx/AIN?color=FFF359&label=issues&style=flat)](https://github.com/mbzuai-oryx/AIN/issues)
[![GitHub stars](https://img.shields.io/github/stars/mbzuai-oryx/AIN?color=FF6A07&style=flat)](https://github.com/mbzuai-oryx/AIN/stargazers)
[![GitHub license](https://img.shields.io/github/license/mbzuai-oryx/AIN?color=FF6666)](https://github.com/mbzuai-oryx/AIN/blob/main/LICENSE)

---

<div class="abstract-container">
  <h2>Abstract</h2>
  <div class="abstract-content">
    <p>
      Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often narrowly focused on a few specific aspects of the language and of visual understanding. To bridge this gap, we introduce <b><em>AIN, the Arabic Inclusive Multimodal Model,</em></b> an English-Arabic <b>bilingual LMM</b> designed to excel across diverse domains. AIN leverages a carefully constructed set of <b>3.6 million</b> high-quality Arabic-English multimodal data samples and demonstrates state-of-the-art Arabic performance while retaining strong English-language visual capabilities.
    </p>
  </div>
</div>

## 🌟 Key Features
- The **first Arabic-centric inclusive Large Multimodal Model (LMM)**, trained on **3.6M samples**.
- Includes **35% authentic Arabic data** within its Arabic data subset.
- Achieves **superior performance compared to both closed-source models** (e.g., GPT-4o) **and open-source models** (e.g., Qwen2-VL-7B) across tasks such as OCR and specialized domains.
- Demonstrates **robust bilingual capabilities** (Arabic/English), **validated** through **comprehensive testing** and **human evaluation** across 17 Arab countries; see the usage sketch below.
- Exhibits **advanced cultural understanding** and domain expertise in fields such as **medical imaging**, **agriculture**, and **scientific visualization**.

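To make the `image-text-to-text` pipeline tag above concrete, the sketch below shows how a Qwen2-VL-style checkpoint can be queried bilingually with 🤗 Transformers. It is a minimal illustration rather than an official inference script: the repository id `MBZUAI/AIN` and the image URL are placeholders, and it assumes AIN exposes the standard `Qwen2VLForConditionalGeneration` / `AutoProcessor` interface inherited from its Qwen2-VL-7B base.

```python
# Minimal inference sketch. Assumptions: the model follows the Qwen2-VL interface;
# "MBZUAI/AIN" and the image URL are placeholders, not confirmed identifiers.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

MODEL_ID = "MBZUAI/AIN"  # replace with the actual Hub repository id

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Any RGB image works; here a document photo is fetched for an Arabic OCR-style query.
image = Image.open(requests.get("https://example.com/document.png", stream=True).raw)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "اقرأ النص الموجود في هذه الصورة."},  # "Read the text in this image."
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
answer = processor.batch_decode(
    generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The same pattern works for English prompts; only the text content needs to change for bilingual use.
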
<p align="center">
  <img src="assets_hf/intro_bar.png" width="70%" alt="intro_bar" style="margin-right: 2px;"/>
  <h6>
    <em> <b>Figure 1.</b> Comparative performance of AIN-7B against other models across key domains, including OCR & Document Understanding, Remote Sensing, Agricultural Understanding, and overall performance across all domains. </em>
  </h6>
</p>

<p align="center">
  <img src="assets_hf/radar_chart.png" width="52%" alt="radar_chart" style="margin-right: 2px;"/>
  <h6>
    <em> <b>Figure 2.</b> Comprehensive performance analysis of AIN-7B across CAMEL-Bench domains, compared with prominent closed-source models as well as open-source counterparts. <strong>OCR:</strong> "OCR & Document Understanding", <strong>Video:</strong> "General Video & Multi-Image Understanding", <strong>RS:</strong> "Remote Sensing Understanding", <strong>CDT:</strong> "Chart, Diagram & Table Understanding", <strong>Agro.:</strong> "Agricultural Image Understanding", <strong>Cultural:</strong> "Cultural-Specific Understanding", <strong>Medical:</strong> "Medical Image Understanding". </em>
  </h6>
</p>

---
## ⚖️ Quantitative Evaluation and Results
AIN demonstrates state-of-the-art performance across diverse domains, surpassing both open- and closed-source models. Notably, it achieves an aggregate performance score of 63.77%, with significant gains in OCR, remote sensing, and agricultural image understanding.

<div align="center">
<table>
  <caption>
    <h6>
      <strong>Table 1. Performance comparison of AIN and different closed- and open-source LMMs across CAMEL-Bench domains.</strong>
      <br> <em>Best performance is marked with 🥇; second-best is 🥈.</em>
      <strong>OCR</strong>: "OCR & Document Understanding",
      <strong>Video</strong>: "General Video & Multi-Image Understanding",
      <strong>RS</strong>: "Remote Sensing Understanding",
      <strong>CDT</strong>: "Chart, Diagram & Table Understanding",
      <strong>Agro.</strong>: "Agricultural Image Understanding",
      <strong>Cult.</strong>: "Cultural-Specific Understanding",
      <strong>Med.</strong>: "Medical Image Understanding".
    </h6>
  </caption>
  <thead>
    <tr style="background-color: #e0e0e0;">
      <th>Models</th>
      <th>VQA</th>
      <th>OCR</th>
      <th>Video</th>
      <th>RS</th>
      <th>CDT</th>
      <th>Agro.</th>
      <th>Cult.</th>
      <th>Med.</th>
      <th style="background-color: #d0d0d0;">Total</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>GPT-4o</td>
      <td>🥈55.15</td>
      <td>🥈54.98</td>
      <td>🥇69.65</td>
      <td>🥈27.36</td>
      <td>🥈62.35</td>
      <td>🥈80.75</td>
      <td>🥇80.86</td>
      <td>🥇49.91</td>
      <td style="background-color: #d0d0d0;">🥈60.13</td>
    </tr>
    <tr>
      <td>GPT-4o-mini</td>
      <td>48.83</td>
      <td>39.38</td>
      <td>🥈66.28</td>
      <td>16.93</td>
      <td>56.37</td>
      <td>78.80</td>
      <td>65.92</td>
      <td>🥈47.37</td>
      <td style="background-color: #d0d0d0;">52.49</td>
    </tr>
    <tr>
      <td>Gemini-1.5-Pro</td>
      <td>46.68</td>
      <td>28.68</td>
      <td>42.95</td>
      <td>17.07</td>
      <td>47.06</td>
      <td>72.14</td>
      <td>56.24</td>
      <td>33.78</td>
      <td style="background-color: #d0d0d0;">52.38</td>
    </tr>
    <tr>
      <td>Gemini-1.5-Flash</td>
      <td>45.59</td>
      <td>27.58</td>
      <td>53.31</td>
      <td>14.95</td>
      <td>48.26</td>
      <td>76.07</td>
      <td>46.54</td>
      <td>42.87</td>
      <td style="background-color: #d0d0d0;">44.40</td>
    </tr>
    <tr>
      <td>InternVL-8B</td>
      <td>30.41</td>
      <td>15.91</td>
      <td>51.42</td>
      <td>5.36</td>
      <td>30.27</td>
      <td>44.47</td>
      <td>20.88</td>
      <td>29.48</td>
      <td style="background-color: #d0d0d0;">28.52</td>
    </tr>
    <tr>
      <td>InternVL2.5-1B</td>
      <td>27.22</td>
      <td>19.45</td>
      <td>38.20</td>
      <td>3.39</td>
      <td>30.75</td>
      <td>39.53</td>
      <td>35.68</td>
      <td>21.27</td>
      <td style="background-color: #d0d0d0;">26.94</td>
    </tr>
    <tr>
      <td>Qwen-VL-2B</td>
      <td>41.02</td>
      <td>22.93</td>
      <td>38.90</td>
      <td>12.56</td>
      <td>27.83</td>
      <td>52.02</td>
      <td>34.28</td>
      <td>29.12</td>
      <td style="background-color: #d0d0d0;">32.33</td>
    </tr>
    <tr>
      <td>AIN-7B <em>(ours)</em></td>
      <td>🥇56.78</td>
      <td>🥇72.35</td>
      <td>64.09</td>
      <td>🥇45.92</td>
      <td>🥇64.10</td>
      <td>🥇85.05</td>
      <td>🥈78.09</td>
      <td>43.77</td>
      <td style="background-color: #d0d0d0;">🏆63.77</td>
    </tr>
  </tbody>
</table>
</div>

---
## 🎯 Qualitative Evaluation
The qualitative evaluation showcases AIN's advanced capabilities in handling diverse, complex tasks, including OCR, medical imaging, remote sensing, and cultural-specific understanding, with remarkable precision and contextual relevance. Unlike GPT-4o and LLaVA, AIN demonstrates superior performance in identifying intricate details and maintaining accuracy across varied query formats and multi-domain challenges.

<div align="center">
  <img src="assets_hf/qualitative.png" width="75%" alt="qualitative" />
  <h6>
    <em> <b>Figure 3.</b> Qualitative examples showcasing AIN-7B's capabilities across various domains, including general VQA, OCR & Document Understanding, Remote Sensing, Medical Imaging, Agricultural Understanding, and Cultural-Specific tasks. </em>
  </h6>
</div>

---
## 🧐 Data Verification and Toxicity Filtering
A multi-step verification pipeline was implemented to ensure high-quality translations and safe visual data. Translation accuracy was assessed through human evaluation, where native Arabic speakers rated outputs against reference translations, and semantic similarity checks were conducted using **LaBSE**. Additionally, translated samples were reverse-translated and validated using **BLEU, METEOR, and ROUGE scores** to measure correctness, correlation, and overlap. For visual data, toxicity filtering was applied using **LLavaGuard's safety policies and GPT-4o** to identify and remove unsafe content related to violence, substance abuse, and harmful imagery, ensuring compliance with ethical AI standards.

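To make the textual part of this pipeline concrete, here is a minimal sketch of the two automatic checks described above: LaBSE semantic similarity and a reverse-translation BLEU score. It is illustrative only; the `sentence-transformers` and `sacrebleu` calls reflect those libraries' public APIs, but the acceptance thresholds are hypothetical and not the values used to filter AIN's data.

```python
# Illustrative sketch of the automatic translation checks (LaBSE similarity +
# reverse-translation BLEU). The thresholds below are hypothetical, not from the paper.
from sentence_transformers import SentenceTransformer, util
import sacrebleu

labse = SentenceTransformer("sentence-transformers/LaBSE")


def semantic_similarity(source_en: str, translation_ar: str) -> float:
    """Cosine similarity between LaBSE embeddings of a source sentence and its translation."""
    emb = labse.encode([source_en, translation_ar], convert_to_tensor=True, normalize_embeddings=True)
    return float(util.cos_sim(emb[0], emb[1]))


def reverse_translation_bleu(source_en: str, back_translated_en: str) -> float:
    """Sentence-level BLEU between the original English text and its reverse translation."""
    return sacrebleu.sentence_bleu(back_translated_en, [source_en]).score


source = "The mosque was built in the twelfth century."
translation = "بُني المسجد في القرن الثاني عشر."
back_translation = "The mosque was constructed in the 12th century."

keep_sample = (
    semantic_similarity(source, translation) >= 0.80          # hypothetical cutoff
    and reverse_translation_bleu(source, back_translation) >= 30.0  # hypothetical cutoff
)
print("keep" if keep_sample else "flag for human review")
```

The visual-safety step (LLavaGuard policies plus a GPT-4o check) follows the same keep-or-flag pattern on images and is not reproduced here.
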
<p align="center">
  <img src="assets_hf/verify_pipeline.png" width="75%" alt="verify" style="margin-right: 2px;"/>
  <h6>
    <em> <b>Figure 4.</b> Data verification and filtering pipeline for textual and visual data, ensuring high-quality training data through semantic similarity checks, translation quality evaluations, and toxicity screening for safety compliance. </em>
  </h6>
</p>
<p align="center">
  <img src="assets_hf/toxicity.png" width="48%" alt="toxicity" style="margin-right: 2px;"/>
  <h6>
    <em> <b>Figure 5.</b> Distribution of visual-data toxicity filtering results, showing that 95% of the data is classified as safe, while 5% is identified as unsafe due to categories such as weapons or substance abuse, violence, and animal cruelty. </em>
  </h6>
</p>

---

## 🔒 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.


## 💬 Contact Us
For questions or suggestions, feel free to reach out to us on [GitHub Discussions](https://github.com/mbzuai-oryx/AIN/discussions).

---

If you use AIN in your research, please cite our work as follows:

```
@misc{heakl2025ainarabicinclusivelarge,
  title={AIN: The Arabic INclusive Large Multimodal Model},
  author={Ahmed Heakl and Sara Ghaboura and Omkar Thawakar and Fahad Shahbaz Khan and Hisham Cholakkal and Rao Muhammad Anwer and Salman Khan},
  year={2025},
  eprint={2502.00094},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2502.00094},
}
```
---
assets_hf/.DS_Store ADDED
Binary file (6.15 kB).
 
assets_hf/AIN.png ADDED

Git LFS Details

  • SHA256: 2688a836c3fd0abc0c3cf734ed9550592c9cb276dfc80b3185b66c00bead907c
  • Pointer size: 131 Bytes
  • Size of remote file: 758 kB
assets_hf/Eval_CAMEL.png ADDED

Git LFS Details

  • SHA256: 61562a4a14c1efec1967f127f793746e82f5e997de27af5b789a3e717219b182
  • Pointer size: 131 Bytes
  • Size of remote file: 159 kB
assets_hf/ain_can_see.png ADDED

Git LFS Details

  • SHA256: ee835a86e0f920155aab4d8a6f33b2eb370611157a3dd18770f7b90783d87986
  • Pointer size: 131 Bytes
  • Size of remote file: 505 kB
assets_hf/intro_bar.png ADDED
assets_hf/qualitative.png ADDED

Git LFS Details

  • SHA256: 4ffc3aad5b0269174318094029e4c2222baffb925544070b15242578bb4edc2c
  • Pointer size: 131 Bytes
  • Size of remote file: 487 kB
assets_hf/radar_chart.png ADDED

Git LFS Details

  • SHA256: 030ee23b6297b70c3cc729b74d62937fb796d910ad32883bfe0e80c9f0497dc8
  • Pointer size: 131 Bytes
  • Size of remote file: 220 kB
assets_hf/toxicity.png ADDED

Git LFS Details

  • SHA256: 465b28e4f4a87c27eb74fcb6ba88326b75b0d79214fe73ef443f0f01e205826d
  • Pointer size: 131 Bytes
  • Size of remote file: 414 kB
assets_hf/verify_pipeline.png ADDED

Git LFS Details

  • SHA256: ed49180a5e33b6a27965611495b3cdaeee47b0570167aa0788b2c09c66106540
  • Pointer size: 131 Bytes
  • Size of remote file: 156 kB