Update README.md

README.md

---
license: cc-by-2.0
task_categories:
- translation
size_categories:
- 10K<n<100K
---

# Dataset Card for Dataset Name

## Dataset Description

- **Repository:** [Link](https://github.com/Digital-Umuganda/twb_nllb_project_tourism_education) to the GitHub repository containing the code for training the model on this data, and the code for collecting the monolingual data.
- **Data Format:** TSV
- **Model:** Hugging Face [model link](https://huggingface.co/mbazaNLP/Nllb_finetuned_education_en_kin).

### Dataset Summary

This dataset contains English-to-Kinyarwanda sentence pairs in the education domain, collected from online educational sources and translated by human translators.

### Data Instances

```
118347 103384 And their ideas was that the teachers just didn't care and had no time for them. Kandi igitekerezo cyabo nuko abarimu batabitayeho gusa kandi ntibabone umwanya. 2023-06-25 09:40:28 223 1 3 education coursera 72-93
```

### Data Fields

- id
- source_id
- source
- phrase
- timestamp
- user_id
- validation_state
- validation_score
- domain
- source_files
- str_ranges
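
As a rough illustration, a row can be loaded with pandas using these field names as columns. This is a sketch, not the dataset's official loading code; the filename and the assumption that the TSV has no header row are both hypothetical.

```python
import pandas as pd

# Column names taken from the "Data Fields" list above.
columns = [
    "id", "source_id", "source", "phrase", "timestamp", "user_id",
    "validation_state", "validation_score", "domain", "source_files",
    "str_ranges",
]

# Hypothetical filename; header=None assumes the file has no header row.
df = pd.read_csv("data.tsv", sep="\t", names=columns, header=None)
print(df[["source", "phrase", "validation_score"]].head())
```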

### Data Splits

- **Training Data:** 58,251 sentence pairs
- **Validation Data:** 2,456 sentence pairs
- **Test Data:** 1,060 sentence pairs

## Data Preprocessing

- **Data Splitting:** To create the test set, all data sources are equally represented in terms of the number of sentences they contribute, and the test set's sentence-length distribution is kept similar to that of the whole dataset. After the test set is picked, the remaining data is split into training and validation sets using sklearn's [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
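
A minimal sketch of that second step, assuming the stratified test set has already been removed. The filename, column handling, and random seed are illustrative, not the values actually used.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# "remaining.tsv" stands in for the data left after the test set was removed.
remaining = pd.read_csv("remaining.tsv", sep="\t")

# Validation fraction chosen to match the reported split sizes:
# 58,251 train / 2,456 validation.
train_df, val_df = train_test_split(
    remaining, test_size=2456 / (58251 + 2456), random_state=42
)
```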

## Data Collection

- **Data Collection Process:** The monolingual source sentences were obtained through web-scraping of several websites containing English sentences.
- **Data Sources:**
  - Coursera
  - Atingi
  - Wikipedia

## Dataset Creation

After collecting the monolingual dataset, human translators were employed to produce translations for the collected sentences. To ensure quality, each sentence was translated more than once, and each generated translation was assigned a **validation_score** that was used to pick the best translation. The test dataset was further revised to remove or correct sentences with faulty translations.
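
A sketch of how the highest-scoring translation per source sentence might be selected, assuming one TSV row per candidate translation. The column names come from the "Data Fields" section; the filename is illustrative.

```python
import pandas as pd

columns = [
    "id", "source_id", "source", "phrase", "timestamp", "user_id",
    "validation_state", "validation_score", "domain", "source_files",
    "str_ranges",
]
df = pd.read_csv("translations.tsv", sep="\t", names=columns, header=None)

# For each source sentence, keep the candidate translation with the
# highest validation_score.
best = df.loc[df.groupby("source_id")["validation_score"].idxmax()]
```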