OlameMend committed on
Commit 39a1f6d · verified · 1 Parent(s): 19c5c5a

Update README.md

Files changed (1): README.md (+201 −3)
---
license: cc-by-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- af
- bag
- bas
- bax
- bbj
- bdh
- bfd
- bkh
- bkm
- bqz
- byv
- dua
- eto
- etu
- ewe
- ewo
- fmp
- fub
- fuc
- gya
- ha
- ibo
- isu
- ker
- kqs
- ksf
- lin
- lns
- lem
- mcp
- mg
- mua
- nda
- nhh
- nla
- nso
- pcm
- swa
- tvu
- twi
- vut
- wol
- xho
- yat
- yav
- ybb
- yor
- zul
---

# SOREVA

## Dataset Description

- **Total amount of disk used:** ca. 403.3 MB

**SOREVA** (*Small Out-of-domain Resource for Various African languages*) is a multilingual speech dataset designed for the **evaluation** of text-to-speech (TTS) and speech representation models in **low-resource African languages**.
It stems from a Goethe-Institut initiative that collected 150 samples (audio and transcription) for about 49 African languages and dialects.
This dataset specifically targets **out-of-domain generalization**, addressing the lack of evaluation sets for languages whose models are typically trained on narrow-domain corpora such as religious texts.

SOREVA includes languages from across Sub-Saharan Africa, including:

- **Standard languages**:
  `Afrikaans`, `Hausa`, `Yoruba`, `Igbo`, `Lingala`, `Kiswahili`, `isiXhosa`, `isiZulu`, `Wolof`

- **Dialectal & minor languages**:
  `Bafia`, `Bafut`, `Baka`, `Bakoko`, `Bamun`, `Basaa`, `Duala`, `Ejagham`, `Eton`, `Ewondo`, `Fe`,
  `Fulfulde`, `Gbaya`, `Ghamála`, `Isu`, `Kera`, `Kom`, `Kwasio`, `Lamso`, `Maka`, `Malagasy`, `Medumba`,
  `Mka`, `Mundang`, `Nda`, `Ngiemboon`, `Ngombala`, `Nomaande`, `Nugunu`, `Pidgin`, `Pulaar`,
  `Sepedi`, `Tuki`, `Tunen`, `Twi`, `Vute`, `Yambeta`, `Yangben`, `Yemba`, `Éwé`

## How to use & Supported Tasks

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the Hausa (Nigeria) config, simply specify the corresponding language config name (i.e., "ha_ng" for Hausa):
```python
from datasets import load_dataset

# Load a specific language (e.g., "ha_ng" for Hausa, Nigeria)
dataset = load_dataset("OlameMend/soreva", "ha_ng", split="test")
```
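
If you are unsure which configs exist, you can list them programmatically. A minimal sketch using `datasets`' config-introspection helper; the names follow the `<lang>_<country>` pattern used throughout this card:
```python
from datasets import get_dataset_config_names

# List every available language configuration (e.g. "ha_ng", "ewo_cm", plus "all")
configs = get_dataset_config_names("OlameMend/soreva", trust_remote_code=True)
print(configs)
```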

To load all languages together:
```python
from datasets import load_dataset

dataset = load_dataset("OlameMend/soreva", "all", split="test")
```
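
Starting from the combined config, you can then slice out a single language with `Dataset.filter`. A small sketch; the value "Ewondo" is taken from the `language` field shown in the data-instance example further down:
```python
from datasets import load_dataset

dataset = load_dataset("OlameMend/soreva", "all", split="test")

# Keep only the Ewondo examples, using the per-example `language` field
ewondo = dataset.filter(lambda example: example["language"] == "Ewondo")
print(len(ewondo))
```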

### 🎧 Getting Audio and Transcription

You can easily access and listen to audio samples along with their transcriptions:

```python
from datasets import load_dataset
from IPython.display import Audio

# Load the dataset for a specific language, e.g., "ha_ng"
soreva = load_dataset("OlameMend/soreva", "ha_ng", split="test", trust_remote_code=True)

# Access the first example's audio array and sampling rate
audio_array = soreva[0]['audio']['array']  # audio data as a numpy array
sr = soreva[0]['audio']['sampling_rate']   # sampling rate

# Print the corresponding transcription
print(soreva[0]['transcription'])

# Play the audio in a Jupyter notebook
Audio(audio_array, rate=sr)
```
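
Because SOREVA targets ASR and TTS evaluation, a common pattern is to score an existing recognizer against the transcriptions. The sketch below is illustrative only: `openai/whisper-small` is a placeholder checkpoint (most public ASR models cover few or none of the SOREVA languages), and the `jiwer` package is assumed for computing word error rate:
```python
from datasets import load_dataset
from jiwer import wer
from transformers import pipeline

soreva = load_dataset("OlameMend/soreva", "ha_ng", split="test", trust_remote_code=True)

# Placeholder checkpoint: swap in any ASR model that covers your target language
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

references, hypotheses = [], []
for sample in soreva.select(range(10)):  # score a small subset for illustration
    audio = sample["audio"]
    pred = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
    references.append(sample["transcription"])
    hypotheses.append(pred["text"])

print(f"WER on subset: {wer(references, hypotheses):.2%}")
```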

## Dataset Structure

We show detailed information for the example configuration `ewo_cm` of the dataset.
All other configurations have the same structure.

### Data Instances

**ewo_cm**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB

An example of a data instance of the config `ewo_cm` looks as follows:

```
{'path': '/home/mendo/.cache/huggingface/datasets/downloads/extracted/3f773a931d09d3c4f9e9a8643e93d191a30d36df95ae32eedbafb6a634135f98/cm_ewo_001.wav',
 'audio': {'path': 'cm_ewo/cm_ewo_001.wav',
  'array': array([-0.00518799, -0.00698853, -0.00814819, ..., -0.02404785,
         -0.02084351, -0.02062988]),
  'sampling_rate': 16000},
 'transcription': 'mbembe kidi',
 'raw_transcription': 'mbəmbə kídí',
 'gender': 0,
 'lang_id': 15,
 'language': 'Ewondo'}
```

### Data Fields

The data fields are the same among all splits.

- **path** (`str`): Path to the audio file.
- **audio** (`dict`): Audio object including:
  - **array** (`np.array`): Loaded audio waveform as float values.
  - **sampling_rate** (`int`): Sampling rate of the audio.
  - **path** (`str`): Relative path inside the archive or dataset.
- **transcription** (`str`): Normalized transcription of the audio file.
- **raw_transcription** (`str`): Original non-normalized transcription of the audio file.
- **gender** (`int`): Class ID of gender (0 = MALE, 1 = FEMALE, 2 = OTHER).
- **lang_id** (`int`): Class ID of the language.
- **language** (`str`): Full language name corresponding to the lang_id.
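
As a quick sanity check of these fields, the following sketch decodes the integer `gender` IDs with the mapping documented above and confirms the `lang_id`/`language` pairing (nothing beyond this card is assumed):
```python
from collections import Counter

from datasets import load_dataset

soreva = load_dataset("OlameMend/soreva", "ewo_cm", split="test", trust_remote_code=True)

# Decode the integer gender IDs using the mapping documented above
GENDER_NAMES = {0: "MALE", 1: "FEMALE", 2: "OTHER"}
print(Counter(GENDER_NAMES[g] for g in soreva["gender"]))

# Every example in a single-language config shares one (lang_id, language) pair
print(set(zip(soreva["lang_id"], soreva["language"])))
```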

### Data Splits

Currently, as this is the first initiative, we only provide a **test** split containing approximately **150** audio samples.

Other splits such as **train** and **validation** are not included at this stage but are expected to be added through community contributions and continuous dataset development.

This initial test split allows evaluation and benchmarking, while the dataset will evolve to include more comprehensive splits in the future.

## Dataset Creation

The data were collected by the Goethe-Institut and consist of 150 audio samples with corresponding transcriptions across 48 African languages and dialects.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is meant to encourage the development of speech technology in many more of the world's languages. One of its goals is to give everyone equal access to technologies such as speech recognition and speech translation, which means better dubbing and better access to content from the internet (like podcasts, streaming, or videos).

### Discussion of Biases

### Other Known Limitations

## Additional Information

All data are licensed under the [Creative Commons Attribution 4.0 (CC BY 4.0) license](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

### Contributions

Thanks to [@LeoMendo](https://github.com/MendoLeo) for adding this dataset.