---
language:
- en
- fr
- it
- el
- es
- ru
pretty_name: SEIRDB
size_categories:
- 100K<n<1M
task_categories:
- audio-classification
---
Speech Emotion Intensity Recognition Database (SEIR-DB)
Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: [email protected]
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
The SEIR-DB dataset supports speech emotion recognition and, for an annotated subset of the data, speech emotion intensity recognition. No leaderboards have been developed yet.
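The snippet below is a minimal usage sketch with the Hugging Face `datasets` library; the repository id `SEIR-DB`, the `train` split name, and the 16 kHz sampling rate are illustrative assumptions, not confirmed details of the release.

```python
# Minimal sketch: loading SEIR-DB for audio classification.
# The repo id "SEIR-DB" and split name are assumptions; adjust to the actual hosting path.
from datasets import Audio, load_dataset

ds = load_dataset("SEIR-DB", split="train")               # hypothetical repo id
ds = ds.cast_column("WAV", Audio(sampling_rate=16_000))   # decode audio paths on access

sample = ds[0]
print(sample["EMOTION"], sample["INTENSITY"])
print(sample["WAV"]["array"].shape)                       # waveform as a NumPy array
```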
Languages
SEIR-DB contains multilingual data, covering languages such as English, Russian, Mandarin, Greek, Italian, and French.
Dataset Structure
Data Instances
The raw data collection contains over 600,000 data instances (375 hours of audio). After processing, cleaning, and formatting, the dataset comprises roughly 120,000 training instances with an average utterance length of 3.8 seconds.
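As a rough check on the figures above, the sketch below estimates the instance count and mean utterance length; the repository id and the use of the `datasets` library are again assumptions.

```python
# Sketch: reproducing the summary statistics quoted above (count and mean length).
import numpy as np
from datasets import Audio, load_dataset

ds = load_dataset("SEIR-DB", split="train")               # hypothetical repo id
ds = ds.cast_column("WAV", Audio(sampling_rate=16_000))

durations = [
    len(ex["WAV"]["array"]) / ex["WAV"]["sampling_rate"]  # seconds per utterance
    for ex in ds
]
print(f"{len(durations)} utterances, mean length {np.mean(durations):.2f} s")
```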
Data Fields
- ID (unique sample identifier)
- WAV (path to audio file, located in the data directory)
- EMOTION (annotated emotion)
- INTENSITY (annotated intensity on a 1-5 scale, where 1 corresponds to low intensity and 5 to high intensity; 0 indicates no annotation; see the example record below)
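The sketch below illustrates the record layout implied by these fields and how the intensity-annotated subset could be selected; the repository id and the example values are hypothetical.

```python
# Sketch of the per-record schema described above and of selecting the
# intensity-annotated subset (INTENSITY != 0). Values shown are illustrative.
from datasets import load_dataset

ds = load_dataset("SEIR-DB", split="train")   # hypothetical repo id

print(ds[0])
# Illustrative record shape (made-up values, not real data):
# {"ID": "sample_000001", "WAV": "data/sample_000001.wav",
#  "EMOTION": "anger", "INTENSITY": 4}

# Keep only utterances that carry an intensity label (1-5).
with_intensity = ds.filter(lambda ex: ex["INTENSITY"] != 0)
```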
Data Splits
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
Aljuhani, R. H., Alshutayri, A., & Alahdal, S. (2021). Arabic speech emotion recognition from Saudi dialect corpus. IEEE Access, 9, 127081-127085.
Basu, S., Chakraborty, J., & Aftabuddin, M. (2017). Emotion recognition from speech using convolutional neural network with recurrent neural network architecture. In ICCES.
Baevski, A., Zhou, H. H., & Collobert, R. (2020). Wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS.
Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. (2008). Iemocap: Interactive emotional dyadic motion capture database. In LREC.
Cao, H., Cooper, D.G., Keutmann, M.K., Gur, R.C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5, 377-390.
Chopra, S., Mathur, P., Sawhney, R., & Shah, R. R. (2021). Meta-Learning for Low-Resource Speech Emotion Recognition. In ICASSP.
Costantini, G., Iaderola, I., Paoloni, A., & Todisco, M. (2014). EMOVO Corpus: an Italian Emotional Speech Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14) (pp. 3501-3504). European Language Resources Association (ELRA). Reykjavik, Iceland. http://www.lrec-conf.org/proceedings/lrec2014/pdf/591_Paper.pdf
Duville, M. M., Alonso-Valerdi, L. M., & Ibarra-Zarate, D. I. (2022). Mexican Emotional Speech Database (MESD) (V5) [Data set]. Mendeley Data. https://doi.org/10.17632/cy34mh68j9.5
Gournay, P., Lahaie, O., & Lefebvre, R. (2018). A Canadian French Emotional Speech Dataset (1.1) [Data set]. ACM Multimedia Systems Conference (MMSys'18), Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.1478765
Kandali, A., Routray, A., & Basu, T. (2008). Emotion recognition from Assamese speeches using MFCC features and GMM classifier. In TENCON.
Kondratenko, V., Sokolov, A., Karpov, N., Kutuzov, O., Savushkin, N., & Minkin, F. (2022). Large Raw Emotional Dataset with Aggregation Mechanism. arXiv preprint arXiv:2212.12266.
Kwon, S. (2021). MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 167, 114177.
Lee, Y., Lee, J. W., & Kim, S. (2019). Emotion recognition using convolutional neural network and multiple feature fusion. In ICASSP.
Li, Y., Baidoo, C., Cai, T., & Kusi, G. A. (2019). Speech emotion recognition using 1d cnn with no attention. In ICSEC.
Lian, Z., Tao, J., Liu, B., Huang, J., Yang, Z., & Li, R. (2020). Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition. In Interspeech.
Livingstone, S. R., & Russo, F. A. (2018). The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.
Peng, Z., Li, X., Zhu, Z., Unoki, M., Dang, J., & Akagi, M. (2020). Speech emotion recognition using 3d convolutions and attention-based sliding recurrent networks with auditory front-ends. IEEE Access, 8, 16560-16572.
Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2019). Meld: A multimodal multi-party dataset for emotion recognition in conversations. In ACL.
Schneider, A., Baevski, A., & Collobert, R. (2019). Wav2vec: Unsupervised pre-training for speech recognition. In ICLR.
Schuller, B., Rigoll, G., & Lang, M. (2010). Speech emotion recognition: Features and classification models. In Interspeech.
Sinnott, R. O., Radulescu, A., & Kousidis, S. (2013). Surrey audiovisual expressed emotion (savee) database. In AVEC.
Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.
Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018). Subjective evaluation of a speech emotion recognition interaction framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.
Wang, Y., Yang, Y., Liu, Y., Chen, Y., Han, N., & Zhou, J. (2019). Speech emotion recognition using a combination of cnn and rnn. In Interspeech.
Yoon, S., Byun, S., & Jung, K. (2018). Multimodal speech emotion recognition using audio and text. In SLT.
Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In ACL.
Contributions
[More Information Needed]