ziyue committed on
Commit 59be7ad · verified · 1 Parent(s): 22c5400

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -1,8 +1,9 @@
 # Open-ASQA-Speech for R1-A
 
 Now supports:
-- LibriTTS
 - MOSEI
+- LibriTTS
+- IEMOCAP
 
 ## Dataset Usage
 
@@ -30,6 +31,9 @@ The default configuration is "all".
 
 ``` python
 # Example code
-load_dataset("blabble-io/libritts", "clean", split="train.clean.100")
+load_dataset("{your path}/libritts", "clean", split="train.clean.100")
 ```
 
+### IEMOCAP
+The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal, multispeaker database collected at the SAIL lab at USC. It contains approximately 12 hours of audiovisual data, including video, speech, facial motion capture, and text transcriptions. The database is annotated by multiple annotators with categorical labels such as anger, happiness, sadness, and neutrality, as well as dimensional labels such as valence, activation, and dominance.
+
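For reference, here is a minimal runnable sketch of the updated example. It adds the `datasets` import the snippet assumes and substitutes a hypothetical `your-namespace/libritts` repo id for the `{your path}` placeholder; the commented IEMOCAP call is likewise a guess, since the commit does not show how that data is exposed.

``` python
# Minimal runnable sketch of the README example above.
# "your-namespace/libritts" is a hypothetical stand-in for the
# "{your path}" placeholder; replace it with the real repo id.
from datasets import load_dataset

libritts = load_dataset(
    "your-namespace/libritts",  # hypothetical path
    "clean",
    split="train.clean.100",
)
print(libritts)

# Hypothetical analogous call for the newly added IEMOCAP data;
# the repo id and split name are assumptions, not shown in the commit.
# iemocap = load_dataset("your-namespace/iemocap", split="train")
```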