Add pipeline tag and library name

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +86 -3
README.md CHANGED
@@ -1,9 +1,11 @@
  ---
- license: apache-2.0
- language:
- - en
  base_model:
  - meta-llama/Llama-3.2-1B
+ language:
+ - en
+ license: apache-2.0
+ pipeline_tag: audio-text-to-text
+ library_name: transformers
  ---

  # TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling
@@ -13,3 +15,84 @@ base_model:
  <b>Liang-Hsuan Tseng*, Yi-Chang Chen*, Kuan-Yi Lee, Da-Shan Shiu, Hung-yi Lee</b><br/>*Equal contribution

  Large Language Models (LLMs) excel in text-based natural language processing tasks but remain constrained by their reliance on textual inputs and outputs. To enable more natural human-LLM interaction, recent progress has focused on deriving a spoken language model (SLM) that can not only listen but also generate speech. A promising direction toward this goal is speech-text joint modeling. However, recent SLMs still lag behind text LLMs due to the modality mismatch; one significant mismatch is the sequence length of speech tokens versus text tokens. To address this, we introduce <b>T</b>ext-<b>A</b>ligned <b>S</b>peech <b>T</b>okenization and <b>E</b>mbedding (<b>TASTE</b>), a method that directly addresses the modality gap by aligning speech tokens with the corresponding text transcription during the tokenization stage. We achieve this through a special aggregation mechanism, with speech reconstruction as the training objective. Extensive experiments show that TASTE preserves essential paralinguistic information while dramatically reducing the token sequence length. Furthermore, by leveraging TASTE, we can adapt text-based LLMs into effective SLMs with parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA). Experimental results on benchmark tasks, including SALMON and StoryCloze, demonstrate that TASTE-based SLMs perform comparably to previous full-finetuning methods. To our knowledge, TASTE is the first end-to-end approach that utilizes a reconstruction objective to automatically learn a text-aligned speech tokenization and embedding suitable for spoken language modeling.
+
+
+ ## Quick Start
+
+ Install the `taste_speech` package:
+ ```
+ git clone https://github.com/mtkresearch/TASTE-SpokenLM.git
+ cd TASTE-SpokenLM
+ pip install .
+ ```
+
+ Then install the additional dependencies:
+ ```
+ pip install -q torch torchaudio transformers
+ pip install -q einx==0.3.0 HyperPyYAML==1.2.2 openai-whisper==20231117 onnxruntime-gpu==1.16.0 conformer==0.3.2 lightning==2.2.4
+ ```
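+
+ A quick, optional way to verify the install is to import the package. This is a minimal check that simply uses the same classes as the inference example below:
+ ```python
+ # Minimal sanity check: confirm taste_speech and its dependencies import cleanly.
+ from taste_speech import TasteConfig, TasteForCausalLM, TasteProcessor
+ print('taste_speech imported successfully')
+ ```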
+
+ ### Inference Completion
+
+ ```python
+ from datasets import Dataset
+ import torchaudio
+
+ from taste_speech import TasteConfig, TasteForCausalLM, TasteProcessor
+
+ device = 0
+ model_id = 'MediaTek-Research/Llama-1B-TASTE-V0'
+ attn_implementation = 'eager'
+
+ # Load the TASTE-augmented Llama model and move it to the GPU.
+ model = TasteForCausalLM.from_pretrained(model_id, attn_implementation=attn_implementation)
+
+ model = model.to(device)
+ model.eval()
+
+ # The processor prepares the audio inputs; the generator synthesizes waveforms
+ # from the generated speech tokens.
+ processor = TasteProcessor.from_pretrained(model_id)
+ generator = processor.get_generator(model_id, device=device)
+
+ # Sampling settings for the text and TASTE (speech) token streams.
+ generate_kwargs = dict(
+     llm_tokenizer=processor.llm_tokenizer,
+     asr_tokenizer=processor.audio_tokenizer,
+     extra_words=8,
+     text_top_p=0.3,
+     taste_top_p=0.0,
+     text_temperature=0.5,
+     repetition_penalty=1.1,
+ )
+
+ conditional_audio_paths = ['/path/to/audio.wav']
+ output_audio_paths = ['/path/to/generated_audio.wav']
+ sampling_rate = 16000
+
+ # Each item pairs an input utterance with itself as the speaker/style reference.
+ data = [
+     processor(
+         audio_path,
+         sampling_rate,
+         ref_audio_list=[audio_path]
+     )
+     for audio_path in conditional_audio_paths
+ ]
+ dataset = Dataset.from_list(data)
+
+ for inputs, output_fpath in zip(data, output_audio_paths):
+     inputs = {k: inputs[k].to(device) for k in inputs.keys()}
+     # Continue the spoken input, conditioning on the audio.
+     output = model.inference_completion(
+         **inputs,
+         conditional_mode='audio',
+         **generate_kwargs,
+     )
+     # Synthesize a waveform from the generated speech tokens, conditioned on
+     # the speaker embedding extracted by the processor.
+     tts_speech, tts_sr = generator.inference(
+         speech_token_ids=output['speech_token_ids'],
+         speech_token_lengths=output['speech_token_lengths'],
+         flow_embedding=inputs['speaker_embeds']
+     )
+     torchaudio.save(output_fpath, tts_speech, tts_sr)
+ ```
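+
+ As a rough, optional check of how compact the TASTE tokenization is, you can relate the number of generated speech tokens to the duration of the synthesized audio. This is an illustrative sketch that reuses `output`, `tts_speech`, and `tts_sr` from the last loop iteration and assumes a batch of one:
+ ```python
+ # Illustrative only: compare the speech-token count with the audio duration.
+ num_tokens = int(output['speech_token_lengths'][0])  # assumes batch size 1
+ duration_s = tts_speech.shape[-1] / tts_sr            # samples / sample rate
+ print(f'{num_tokens} speech tokens for {duration_s:.1f}s of generated audio')
+ ```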
+
+ ### Run Inference
+
+ ```
+ python scripts/generate_audio.py --conditional_compl
+ ```