GDGiangi committed on
Commit 46a1b7b · 1 Parent(s): ead7695

Update README.md

Files changed (1)
  1. README.md +25 -22

README.md CHANGED
@@ -26,49 +26,52 @@ task_categories:

 ### Dataset Summary

- [More Information Needed]

 ### Supported Tasks and Leaderboards

- The SEIR dataset supports speech emotion recognition, and speech emotion intensity (a subset of the dataset) tasks.

 ### Languages

- SEIR-DB contains multilingual data. With languages such as English, Russian, Mandarin, Greek, Italian and French.

 ## Dataset Structure

 ### Data Instances

- The raw data collection has over 600,000 data instances (375 hours). Users of the database have access to the raw audio data, contained in subdirectories of the data directory (in their respective dataset).

- After processing, cleaning and formatting the dataset, we are left with roughly 120,000 training instances with an average of 3.8s length audio utterances.

 ### Data Fields

- - ID (unique sample identifier)
- - WAV (path to audio file, located in the data directory)
- - EMOTION (annotated emotion)
- - INTENSITY (annotated intensity [1-5], 1 corresponds to low intensity and 5 corresponds to high intensity, 0 corresponds to no annotation)
- - LENGTH

 ### Data Splits

- The data is split into train, test, validation sets (located in the corresponding json manifest files).
- - Train [80%]
- - Validation [10%]
- - Test [10%]

- The unsplit data is also available in data.csv, in order to add flexibility with custom splits.

 ## Dataset Creation

 ### Curation Rationale

- The rational behind the curation of this dataset was centered around data instance volume maximization. A large limitation with SER experimentation is the lack of emotion data, and the small size of the available datasets. This database aims to solve this problem, and give access to large amounts of emotion annotated data formatted cleanly for experimentation.

 ### Source Data

 | Name | Source |
 |--------------------|-------------------------------------------------------|
 | AESDD (1 hr) | [Link](http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/) |
@@ -89,29 +92,29 @@ The rational behind the curation of this dataset was centered around data instan

 #### Annotation process

- Please refer to the source for each dataset, as they were conducted differently. However, the database is completely human annotated.

 #### Who are the annotators?

- Please refer to the source documentation.

 ### Personal and Sensitive Information

- The removal of personal and sensitive information was not attempted, as the consent and recordings were not conducted internally.

 ## Considerations for Using the Data

 ### Social Impact of Dataset

- [More Information Needed]

 ### Discussion of Biases

- [More Information Needed]

 ### Other Known Limitations

- [More Information Needed]

 ## Additional Information
 
 ### Dataset Summary

+ The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognition dataset containing over 600,000 instances from various sources. It is designed to support tasks related to speech emotion recognition and emotion intensity estimation. The database includes languages such as English, Russian, Mandarin, Greek, Italian, and French.

 ### Supported Tasks and Leaderboards

+ The SEIR dataset is suitable for speech emotion recognition and, on a subset of the dataset, speech emotion intensity estimation.

 ### Languages

+ SEIR-DB encompasses multilingual data, featuring languages such as English, Russian, Mandarin, Greek, Italian, and French.

 ## Dataset Structure

 ### Data Instances

+ The raw data collection comprises over 600,000 data instances (375 hours). Users of the database can access the raw audio data, which is stored in subdirectories of the data directory (in their respective datasets).

+ After processing, cleaning, and formatting, the dataset contains approximately 120,000 training instances with an average audio utterance length of 3.8 seconds.
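Since each record's WAV field points at an audio file on disk, a figure like the 3.8-second average utterance length can be recomputed directly from the file headers. A minimal sketch using Python's standard `wave` module; the synthesized file below is only a stand-in for a real utterance, and the `utterance_length` helper is ours, not part of the dataset tooling:

```python
import math
import struct
import tempfile
import wave

def utterance_length(path: str) -> float:
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate()

# For illustration only: synthesize a 3.8-second, 16 kHz mono tone
# standing in for one of the dataset's utterances.
rate, seconds = 16000, 3.8
frames = b"".join(
    struct.pack("<h", int(10000 * math.sin(2 * math.pi * 440 * t / rate)))
    for t in range(int(rate * seconds))
)
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
    with wave.open(tmp.name, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit PCM
        wav.setframerate(rate)
        wav.writeframes(frames)
    print(round(utterance_length(tmp.name), 1))  # 3.8
```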
 
 ### Data Fields

+ - ID: unique sample identifier
+ - WAV: path to the audio file, located in the data directory
+ - EMOTION: annotated emotion
+ - INTENSITY: annotated intensity (1-5), where 1 denotes low intensity and 5 denotes high intensity; 0 indicates no annotation
+ - LENGTH: duration of the audio utterance
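The five fields above map naturally onto one record per utterance. A small sketch of that layout with hypothetical sample rows (the IDs, paths, and label values are invented for illustration), showing how the intensity-annotated subset can be separated from unannotated rows:

```python
# Hypothetical records following the documented field layout; real rows
# live in the JSON manifests / data.csv.
records = [
    {"ID": "aesdd_0001", "WAV": "data/aesdd/0001.wav",
     "EMOTION": "anger", "INTENSITY": 4, "LENGTH": 3.2},
    {"ID": "aesdd_0002", "WAV": "data/aesdd/0002.wav",
     "EMOTION": "sadness", "INTENSITY": 0, "LENGTH": 4.1},  # 0 = no annotation
]

# Only rows with INTENSITY in 1-5 carry an intensity label; 0 means the
# sample is usable for emotion recognition but not intensity estimation.
intensity_subset = [r for r in records if 1 <= r["INTENSITY"] <= 5]
print([r["ID"] for r in intensity_subset])  # ['aesdd_0001']
```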
 
 ### Data Splits

+ The data is divided into train, test, and validation sets, located in the respective JSON manifest files.
+
+ - Train: 80%
+ - Validation: 10%
+ - Test: 10%
+
+ For added flexibility, unsplit data is also available in data.csv to allow custom splits.
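Because data.csv keeps the unsplit rows, a custom split can be rebuilt with nothing beyond the standard library. A minimal sketch, assuming the CSV header matches the fields documented above (the rows here are synthetic placeholders):

```python
import csv
import io
import random

# Synthetic stand-in for data.csv; the real file is assumed to share
# the documented header.
csv_text = "ID,WAV,EMOTION,INTENSITY,LENGTH\n" + "".join(
    f"s{i},data/s{i}.wav,neutral,0,3.8\n" for i in range(10)
)

rows = list(csv.DictReader(io.StringIO(csv_text)))
random.Random(0).shuffle(rows)  # seeded so the split is reproducible

# An 80/10/10 custom split, mirroring the released manifests.
n = len(rows)
train = rows[: n * 8 // 10]
valid = rows[n * 8 // 10 : n * 9 // 10]
test = rows[n * 9 // 10 :]
print(len(train), len(valid), len(test))  # 8 1 1
```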

 ## Dataset Creation

 ### Curation Rationale

+ The SEIR-DB was curated to maximize the volume of data instances, addressing a significant limitation in speech emotion recognition (SER) experimentation: the lack of emotion data and the small size of available datasets. This database aims to resolve these issues by providing a large volume of emotion-annotated data that is cleanly formatted for experimentation.

 ### Source Data

+ The dataset was compiled from various sources:
+
 | Name | Source |
 |--------------------|-------------------------------------------------------|
 | AESDD (1 hr) | [Link](http://m3c.web.auth.gr/research/aesdd-speech-emotion-recognition/) |
 
 #### Annotation process

+ For details on the annotation process, please refer to the source for each dataset, as the annotations were conducted differently. However, the entire database is human-annotated.

 #### Who are the annotators?

+ Please consult the source documentation for information on the annotators.

 ### Personal and Sensitive Information

+ No attempt was made to remove personal and sensitive information, as consent and recordings were not obtained internally.

 ## Considerations for Using the Data

 ### Social Impact of Dataset

+ The SEIR-DB dataset can significantly impact the research and development of speech emotion recognition technologies by providing a large volume of annotated data. These technologies have the potential to enhance various applications, such as mental health monitoring, virtual assistants, customer support, and communication devices for people with disabilities.

 ### Discussion of Biases

+ During the dataset cleaning process, efforts were made to balance the database concerning the number of samples for each dataset, emotion distribution (with a greater focus on primary emotions and less on secondary emotions), and language distribution. However, biases may still be present.

 ### Other Known Limitations

+ No specific limitations have been identified at this time.

 ## Additional Information