Update README.md

README.md (changed):
### Dataset Summary

The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognition dataset containing over 600,000 instances from various sources. It is designed to support tasks related to speech emotion recognition and emotion intensity estimation. The database includes languages such as English, Russian, Mandarin, Greek, Italian, and French.
### Supported Tasks and Leaderboards

The SEIR dataset is suitable for speech emotion recognition and, for the subset with intensity annotations, speech emotion intensity estimation.
### Languages

SEIR-DB encompasses multilingual data, featuring languages such as English, Russian, Mandarin, Greek, Italian, and French.
## Dataset Structure

### Data Instances

The raw data collection comprises over 600,000 data instances (375 hours). Users of the database can access the raw audio data, which is stored in subdirectories of the data directory (one per source dataset).
After processing, cleaning, and formatting, the dataset contains approximately 120,000 training instances with an average audio utterance length of 3.8 seconds.
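Since the audio is plain WAV under the data directory, utterance length can be read straight from the file header. A minimal sketch with the standard library (the path below is a placeholder, not a real file in the dataset):

```python
import wave

def utterance_seconds(wav_path):
    """Return the duration of a WAV file in seconds, computed from its header."""
    with wave.open(wav_path, "rb") as w:
        return w.getnframes() / w.getframerate()

# e.g. utterance_seconds("data/some_dataset/example.wav")
```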
### Data Fields

- ID: unique sample identifier
- WAV: path to the audio file, located in the data directory
- EMOTION: annotated emotion
- INTENSITY: annotated intensity (ranging from 1-5), where 1 denotes low intensity and 5 signifies high intensity; 0 indicates no annotation
- LENGTH: duration of the audio utterance
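A single manifest entry can be pictured as a record with the fields above. The concrete values and the JSON-lines layout are illustrative assumptions, not guaranteed by the card:

```python
import json

# Hypothetical entry using the documented fields (example values only).
record = {
    "ID": "aesdd_0001",            # unique sample identifier
    "WAV": "data/AESDD/a01.wav",   # path under the data directory (assumed layout)
    "EMOTION": "anger",            # annotated emotion
    "INTENSITY": 4,                # 1 = low, 5 = high, 0 = unannotated
    "LENGTH": 3.8,                 # utterance duration in seconds
}

def read_manifest(path):
    """Read a JSON-lines manifest into a list of records."""
    with open(path) as f:
        return [json.loads(line) for line in f]
```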
### Data Splits

The data is divided into train, test, and validation sets, located in the respective JSON manifest files.

- Train: 80%
- Validation: 10%
- Test: 10%

For added flexibility, unsplit data is also available in data.csv to allow custom splits.
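A custom split from data.csv might be sketched as follows. The column names follow the Data Fields list above; data.csv is named in this card, but its exact schema is an assumption, and the 80/10/10 ratios simply mirror the defaults:

```python
import csv
import random

def custom_split(csv_path, train=0.8, val=0.1, seed=0):
    """Shuffle the unsplit data.csv rows and cut train/val/test partitions."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    random.Random(seed).shuffle(rows)  # deterministic shuffle for reproducibility
    n_train = int(len(rows) * train)
    n_val = int(len(rows) * val)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])
```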
## Dataset Creation

### Curation Rationale

The SEIR-DB was curated to maximize the volume of data instances, addressing a significant limitation in speech emotion recognition (SER) experimentation: the lack of emotion data and the small size of available datasets. This database aims to resolve these issues by providing a large volume of emotion-annotated data that is cleanly formatted for experimentation.
### Source Data

The dataset was compiled from various sources:

| Name | Source |
|--------------------|-------------------------------------------------------|
| AESDD (1 hr) | [Link](http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/) |
#### Annotation process

For details on the annotation process, please refer to the source for each dataset, as annotation was conducted differently for each. The entire database, however, is human-annotated.
#### Who are the annotators?

Please consult the source documentation for information on the annotators.
### Personal and Sensitive Information

No attempt was made to remove personal and sensitive information, as consent and recordings were not obtained internally.
## Considerations for Using the Data

### Social Impact of Dataset

The SEIR-DB dataset can significantly impact the research and development of speech emotion recognition technologies by providing a large volume of annotated data. These technologies have the potential to enhance various applications, such as mental health monitoring, virtual assistants, customer support, and communication devices for people with disabilities.
### Discussion of Biases

During the dataset cleaning process, efforts were made to balance the database with respect to the number of samples per source dataset, emotion distribution (with a greater focus on primary emotions and less on secondary emotions), and language distribution. However, biases may still be present.
### Other Known Limitations

No specific limitations have been identified at this time.

## Additional Information