---
language:
- en
- fr
- it
- el
- es
- ru
pretty_name: SEIR-DB
size_categories:
- 100K<n<1M
task_categories:
- audio-classification
tags:
- audio
- SER
configs:
- config_name: main_data
data_files: "data.csv"
- config_name: benchmark
data_files:
- split: train
path: train.csv
- split: validation
path: valid.csv
- split: test
path: test.csv
---
# Speech Emotion Intensity Recognition Database (SEIR-DB)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognition dataset containing over 600,000 instances from various sources. It is designed to support tasks related to speech emotion recognition and emotion intensity estimation. The database includes languages such as English, Russian, Mandarin, Greek, Italian, and French.
### Supported Tasks and Leaderboards
The SEIR-DB is suitable for:
- **Speech Emotion Recognition** (classification of discrete emotional states)
- **Speech Emotion Intensity Estimation** (on the annotated subset of this dataset, where intensity is rated 1–5)
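For quick experimentation, the dataset can be loaded through the Hugging Face `datasets` library using the configs defined in the card header. A minimal sketch follows; the repository ID is a placeholder, so substitute the actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub path of SEIR-DB.
REPO_ID = "user/SEIR-DB"

# The "benchmark" config exposes the predefined train/validation/test splits.
benchmark = load_dataset(REPO_ID, "benchmark")
print(benchmark["train"][0])  # one row: ID, WAV, EMOTION, INTENSITY, LENGTH

# The "main_data" config exposes the unsplit data.csv for custom splitting.
full = load_dataset(REPO_ID, "main_data", split="train")
```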
#### SPEAR (8 emotions – 375 hours)
**SPEAR** (Speech Emotion Analysis and Recognition System) is an **ensemble model** that serves as the SER **benchmark** for this dataset (contact [[email protected]](mailto:[email protected]) for details). Below is a comparison of its performance against the best fine-tuned pre-trained model, WavLM Large:
| WavLM Large Test Accuracy | SPEAR Test Accuracy | Improvement |
|---------------------------|---------------------|-------------|
| 87.8% | 90.8% | +3.0 points |
More detailed metrics for **SPEAR**:
| Train Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
|--------------------|-------------------------|-------------------|
| 99.8 | 90.4 | 90.8 |
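The benchmark numbers above come from fine-tuning pre-trained checkpoints. As a rough sketch of the inference side, the snippet below scores a single utterance with a WavLM-based classifier via `transformers`; the checkpoint ID is the public pre-trained WavLM Large (not the fine-tuned benchmark model), `num_labels=8` matches the 8-emotion setup, and the audio path is hypothetical:

```python
import soundfile as sf
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Public pre-trained checkpoint; the classification head is freshly
# initialized, so fine-tune on SEIR-DB before expecting accuracies in the
# range reported above.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = AutoModelForAudioClassification.from_pretrained(
    "microsoft/wavlm-large", num_labels=8  # 8 emotion classes
)

# Hypothetical path; WavLM expects 16 kHz mono audio.
waveform, sr = sf.read("data/example.wav")
inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted emotion index
```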
---
## Languages
SEIR-DB is multilingual, containing speech in English, Russian, Mandarin, Greek, Italian, and French.
## Dataset Structure
### Data Instances
The raw collection comprises over 600,000 data instances (375 hours). The raw audio is stored in subdirectories of the data directory, organized by source dataset.
After processing, cleaning, and formatting, the dataset contains approximately 120,000 training instances with an average utterance length of 3.8 seconds.
### Data Fields
- **ID**: unique sample identifier
- **WAV**: path to the audio file, located in the data directory
- **EMOTION**: annotated emotion
- **INTENSITY**: annotated intensity on a 1–5 scale, where 1 denotes low intensity and 5 high intensity; 0 indicates no intensity annotation
- **LENGTH**: duration of the audio utterance
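A short sketch of inspecting these fields with `pandas` and loading one utterance with `soundfile` is shown below. Column names follow the list above; the exact relative-path layout stored in `WAV` is an assumption:

```python
import pandas as pd
import soundfile as sf

df = pd.read_csv("data.csv")
print(df.columns.tolist())  # expected: ['ID', 'WAV', 'EMOTION', 'INTENSITY', 'LENGTH']

# Rows with INTENSITY == 0 carry no intensity annotation.
annotated = df[df["INTENSITY"] > 0]
print(f"{len(annotated)} intensity-annotated rows")
print(f"mean utterance length: {df['LENGTH'].mean():.1f} s")

# WAV stores a path to the audio file under the data directory.
waveform, sample_rate = sf.read(df.iloc[0]["WAV"])
```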
### Data Splits
The data is divided into train, validation, and test sets, defined in the corresponding CSV manifest files (`train.csv`, `valid.csv`, `test.csv`).
- **Train**: 80%
- **Validation**: 10%
- **Test**: 10%
For added flexibility, unsplit data is also available in `data.csv` to allow custom splits.
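As a sketch, a custom 80/10/10 split can be derived from `data.csv` with scikit-learn. Stratifying on `EMOTION` preserves class balance, though this is an illustrative choice, not necessarily how the official splits were produced:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")

# 80% train, then split the remaining 20% evenly into validation and test.
train_df, rest = train_test_split(
    df, test_size=0.2, stratify=df["EMOTION"], random_state=42
)
valid_df, test_df = train_test_split(
    rest, test_size=0.5, stratify=rest["EMOTION"], random_state=42
)
print(len(train_df), len(valid_df), len(test_df))
```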
## Dataset Creation
### Curation Rationale
The SEIR-DB was curated to maximize the number of data instances, addressing a significant limitation in speech emotion recognition (SER) experimentation: the scarcity of emotion-annotated data and the small size of available datasets. It provides a large volume of emotion-annotated data in a clean, consistent format ready for experimentation.
### Source Data
The dataset was compiled from various publicly available emotional speech corpora; see the Citation Information section below for the constituent datasets.
### Annotations
#### Annotation process
For details on the annotation process, refer to the documentation of each source dataset, as procedures differed across sources. The entire database is human-annotated.
#### Who are the annotators?
Please consult the source documentation for information on the annotators.
### Personal and Sensitive Information
No attempt was made to remove personal or sensitive information, as the recordings and consent were obtained by the original dataset authors rather than internally.
## Considerations for Using the Data
### Social Impact of Dataset
The SEIR-DB dataset can significantly impact the research and development of speech emotion recognition technologies by providing a large volume of annotated data. These technologies have the potential to enhance various applications, such as mental health monitoring, virtual assistants, customer support, and communication devices for people with disabilities.
### Discussion of Biases
During the dataset cleaning process, efforts were made to balance the database across source datasets, emotion distribution (with greater weight on primary emotions than on secondary ones), and language distribution. However, biases may still be present.
### Other Known Limitations
No specific limitations have been identified at this time.
## Additional Information
### Dataset Curators
Gabriel Giangi - Concordia University - Montreal, QC Canada - [[email protected]](mailto:[email protected])
### Licensing Information
This dataset can be used for research and academic purposes. For commercial purposes, please contact [[email protected]](mailto:[email protected]).
### Citation Information
Aljuhani, R. H., Alshutayri, A., & Alahdal, S. (2021). Arabic speech emotion recognition from Saudi dialect corpus. IEEE Access, 9, 127081-127085.
Basu, S., Chakraborty, J., & Aftabuddin, M. (2017). Emotion recognition from speech using convolutional neural network with recurrent neural network architecture. In ICCES.
Baevski, A., Zhou, H., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS.
Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359.
Cao, H., Cooper, D.G., Keutmann, M.K., Gur, R.C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5, 377-390.
Chopra, S., Mathur, P., Sawhney, R., & Shah, R. R. (2021). Meta-Learning for Low-Resource Speech Emotion Recognition. In ICASSP.
Costantini, G., Iaderola, I., Paoloni, A., & Todisco, M. (2014). EMOVO Corpus: an Italian Emotional Speech Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14) (pp. 3501-3504). European Language Resources Association (ELRA). Reykjavik, Iceland. http://www.lrec-conf.org/proceedings/lrec2014/pdf/591_Paper.pdf
Duville, Mathilde Marie; Alonso-Valerdi, Luz María; Ibarra-Zarate, David I. (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
Gournay, Philippe, Lahaie, Olivier, & Lefebvre, Roch. (2018). A Canadian French Emotional Speech Dataset (1.1) [Data set]. ACM Multimedia Systems Conference (MMSys 2018) (MMSys'18), Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.1478765
Kandali, A., Routray, A., & Basu, T. (2008). Emotion recognition from Assamese speeches using MFCC features and GMM classifier. In TENCON.
Kondratenko, V., Sokolov, A., Karpov, N., Kutuzov, O., Savushkin, N., & Minkin, F. (2022). Large Raw Emotional Dataset with Aggregation Mechanism. arXiv preprint arXiv:2212.12266.
Kwon, S. (2021). MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 167, 114177.
Lee, Y., Lee, J. W., & Kim, S. (2019). Emotion recognition using convolutional neural network and multiple feature fusion. In ICASSP.
Li, Y., Baidoo, C., Cai, T., & Kusi, G. A. (2019). Speech emotion recognition using 1d cnn with no attention. In ICSEC.
Lian, Z., Tao, J., Liu, B., Huang, J., Yang, Z., & Li, R. (2020). Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition. In Interspeech.
Livingstone, S. R., & Russo, F. A. (2018). The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.
Peng, Z., Li, X., Zhu, Z., Unoki, M., Dang, J., & Akagi, M. (2020). Speech emotion recognition using 3d convolutions and attention-based sliding recurrent networks with auditory front-ends. IEEE Access, 8, 16560-16572.
Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2019). Meld: A multimodal multi-party dataset for emotion recognition in conversations. In ACL.
Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019). wav2vec: Unsupervised pre-training for speech recognition. In Interspeech.
Schuller, B., Rigoll, G., & Lang, M. (2010). Speech emotion recognition: Features and classification models. In Interspeech.
Haq, S., & Jackson, P. J. B. (2010). Multimodal emotion recognition. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems (pp. 398-423). IGI Global.
Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.
Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018, September). Subjective Evaluation of a Speech Emotion Recognition Interaction Framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.
Wang, Y., Yang, Y., Liu, Y., Chen, Y., Han, N., & Zhou, J. (2019). Speech emotion recognition using a combination of cnn and rnn. In Interspeech.
Yoon, S., Byun, S., & Jung, K. (2018). Multimodal speech emotion recognition using audio and text. In SLT.
Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In ACL.
### Contributions
Gabriel Giangi - Concordia University - Montreal, QC Canada - [[email protected]](mailto:[email protected]) |