---
license: cc0-1.0
language:
- war
- ceb
- min
- vie
- ilo
- tgl
- lao
- khm
- mya
- jav
- ind
- tha
- sun
- zlm
pretty_name: Oscar 2201
task_categories:
- self-supervised-pretraining
tags:
- self-supervised-pretraining
---

OSCAR (Open Super-large Crawled Aggregated coRpus) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture. The data is distributed by language in both original and deduplicated form.

## Languages

war, ceb, min, vie, ilo, tgl, lao, khm, mya, jav, ind, tha, sun, zlm

## Supported Tasks

Self-Supervised Pretraining

## Dataset Usage

### Using the `datasets` library

```
from datasets import load_dataset

dset = load_dataset("SEACrowd/oscar_2201", trust_remote_code=True)
```
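
Once loaded, the corpus behaves like any other `datasets` object. A minimal sketch for inspecting it, assuming the usual OSCAR-style layout of a `train` split with a `text` column (the exact split and column names are assumptions; check the printed structure first):

```
# Peek at the available splits and columns
print(dset)

# Print the start of the first document (assumes a "text" column)
print(dset["train"][0]["text"][:200])
```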

### Using the `seacrowd` library

```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("oscar_2201", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("oscar_2201"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
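
To work with a single language, one can filter the config list by its ISO 639-3 code. A minimal sketch, assuming the config names embed the language code (e.g. `ind` for Indonesian; the naming scheme is an assumption, so verify against the printed list):

```
import seacrowd as sc

# Find configs for one language by substring match on the ISO 639-3 code
configs = sc.available_config_names("oscar_2201")
ind_configs = [name for name in configs if "ind" in name]
print(ind_configs)

# Load the first matching config (name pattern is an assumption)
dset = sc.load_dataset_by_config_name(config_name=ind_configs[0])
```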

More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage

[https://huggingface.co/datasets/oscar-corpus/OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)

## Dataset Version

Source: 2022.1.0. SEACrowd: 2024.06.20.

## Dataset License

Creative Commons Zero v1.0 Universal (cc0-1.0)

## Citation

If you are using the **Oscar 2201** dataloader in your work, please cite the following:

```
@inproceedings{abadji2022cleaner,
  author = {Julien Abadji and
            Pedro Javier Ortiz Su{\'{a}}rez and
            Laurent Romary and
            Beno{\^{\i}}t Sagot},
  title = {Towards a Cleaner Document-Oriented Multilingual Crawled Corpus},
  booktitle = {Proceedings of the Thirteenth Language Resources and Evaluation Conference,
               {LREC} 2022, Marseille, France, 20-25 June 2022},
  pages = {4344--4355},
  publisher = {European Language Resources Association},
  year = {2022},
  url = {https://aclanthology.org/2022.lrec-1.463},
}

@inproceedings{abadji2021ungoliant,
  author = {Julien Abadji and
            Pedro Javier Ortiz Su{\'a}rez and
            Laurent Romary and
            Beno{\^i}t Sagot},
  title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora
            (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
  editor = {Harald L{\"u}ngen and
            Marc Kupietz and
            Piotr Bański and
            Adrien Barbaresi and
            Simon Clematide and
            Ines Pisetta},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-10468},
  url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
  pages = {1--9},
  year = {2021},
  abstract = {Since the introduction of large language models in Natural Language
              Processing, large raw corpora have played a crucial role in Computational Linguistics.
              However, most of these large raw corpora are either available only for English or not
              available to the general public due to copyright issues. Nevertheless, there are some
              examples of freely available multilingual corpora for training Deep Learning NLP
              models, such as the OSCAR and Paracrawl corpora. However, they have quality issues,
              especially for low-resource languages. Moreover, recreating or updating these corpora
              is very complex. In this work, we try to reproduce and improve the goclassy pipeline
              used to create the OSCAR corpus. We propose a new pipeline that is faster, modular,
              parameterizable, and well documented. We use it to create a corpus similar to OSCAR
              but larger and based on recent data. Also, unlike OSCAR, the metadata information is
              at the document level. We release our pipeline under an open source license and
              publish the corpus under a research-only license.},
  language = {en}
}

@article{kreutzer2022quality,
  title = {Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets},
  author = {Kreutzer, Julia and
            Caswell, Isaac and
            Wang, Lisa and
            Wahab, Ahsan and
            van Esch, Daan and
            Ulzii-Orshikh, Nasanbayar and
            Tapo, Allahsera and
            Subramani, Nishant and
            Sokolov, Artem and
            Sikasote, Claytone and
            Setyawan, Monang and
            Sarin, Supheakmungkol and
            Samb, Sokhar and
            Sagot, Beno{\^\i}t and
            Rivera, Clara and
            Rios, Annette and
            Papadimitriou, Isabel and
            Osei, Salomey and
            Suarez, Pedro Ortiz and
            Orife, Iroro and
            Ogueji, Kelechi and
            Rubungo, Andre Niyongabo and
            Nguyen, Toan Q. and
            M{\"u}ller, Mathias and
            M{\"u}ller, Andr{\'e} and
            Muhammad, Shamsuddeen Hassan and
            Muhammad, Nanda and
            Mnyakeni, Ayanda and
            Mirzakhalov, Jamshidbek and
            Matangira, Tapiwanashe and
            Leong, Colin and
            Lawson, Nze and
            Kudugunta, Sneha and
            Jernite, Yacine and
            Jenny, Mathias and
            Firat, Orhan and
            Dossou, Bonaventure F. P. and
            Dlamini, Sakhile and
            de Silva, Nisansa and
            {\c{C}}abuk Ball{\i}, Sakine and
            Biderman, Stella and
            Battisti, Alessia and
            Baruwa, Ahmed and
            Bapna, Ankur and
            Baljekar, Pallavi and
            Azime, Israel Abebe and
            Awokoya, Ayodele and
            Ataman, Duygu and
            Ahia, Orevaoghene and
            Ahia, Oghenefego and
            Agrawal, Sweta and
            Adeyemi, Mofetoluwa},
  editor = {Roark, Brian and
            Nenkova, Ani},
  journal = {Transactions of the Association for Computational Linguistics},
  volume = {10},
  year = {2022},
  address = {Cambridge, MA},
  publisher = {MIT Press},
  url = {https://aclanthology.org/2022.tacl-1.4},
  doi = {10.1162/tacl_a_00447},
  pages = {50--72},
  abstract = {With the success of large-scale pre-training and multilingual modeling in
              Natural Language Processing (NLP), recent years have seen a proliferation of large,
              Web-mined text datasets covering hundreds of languages. We manually audit the quality
              of 205 language-specific corpora released with five major public datasets (CCAligned,
              ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At
              least 15 corpora have no usable text, and a significant fraction contains less than
              50{\%} sentences of acceptable quality. In addition, many are mislabeled or use
              nonstandard/ambiguous language codes. We demonstrate that these issues are easy to
              detect even for non-proficient speakers, and supplement the human audit with automatic
              analyses. Finally, we recommend techniques to evaluate and improve multilingual
              corpora and discuss potential risks that come with low-quality data releases.},
}

@inproceedings{ortizsuarez2020monolingual,
  title = {A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages},
  author = {Ortiz Su{\'a}rez, Pedro Javier and
            Romary, Laurent and
            Sagot, Benoit},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  month = {jul},
  year = {2020},
  address = {Online},
  publisher = {Association for Computational Linguistics},
  url = {https://www.aclweb.org/anthology/2020.acl-main.156},
  pages = {1703--1714},
  abstract = {We use the multilingual OSCAR corpus, extracted from Common Crawl via
              language classification, filtering and cleaning, to train monolingual contextualized
              word embeddings (ELMo) for five mid-resource languages. We then compare the
              performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on
              the part-of-speech tagging and parsing tasks. We show that, despite the noise in the
              Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than
              monolingual embeddings trained on Wikipedia. They actually equal or improve the
              current state of the art in tagging and parsing for all five languages. In particular,
              they also improve over multilingual Wikipedia-based contextual embeddings
              (multilingual BERT), which almost always constitutes the previous state of the art,
              thereby showing that the benefit of a larger, more diverse corpus surpasses the
              cross-lingual benefit of multilingual embedding architectures.},
}

@inproceedings{ortizsuarez2019asynchronous,
  author = {Pedro Javier {Ortiz Su{\'a}rez} and
            Benoit Sagot and
            Laurent Romary},
  title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora
            (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor = {Piotr Bański and
            Adrien Barbaresi and
            Hanno Biber and
            Evelyn Breiteneder and
            Simon Clematide and
            Marc Kupietz and
            Harald L{\"u}ngen and
            Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-9021},
  url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
  pages = {9--16},
  year = {2019},
  abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus
              comprised of crawled documents from the internet, surpassing 20TB of data and
              distributed as a set of more than 50 thousand plain text files where each contains
              many documents written in a wide variety of languages. Even though each document has a
              metadata block associated to it, this data lacks any information about the language in
              which each document is written, making it extremely difficult to use Common Crawl for
              monolingual applications. We propose a general, highly parallel, multithreaded
              pipeline to clean and classify Common Crawl by language; we specifically design it so
              that it runs efficiently on medium to low resource infrastructures where I/O speeds
              are the main constraint. We develop the pipeline so that it can be easily reapplied to
              any kind of heterogeneous corpus and so that it can be parameterised to a wide range
              of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered,
              classified by language, shuffled at line level in order to avoid copyright issues, and
              ready to be used for NLP applications.},
  language = {en}
}

@article{lovenia2024seacrowd,
  title = {SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
  author = {Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
  year = {2024},
  eprint = {2406.10118},
  journal = {arXiv preprint arXiv:2406.10118}
}
```