Update README.md
README.md CHANGED
@@ -8,7 +8,6 @@ base_model:
# TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling
-[[Demo](https://mtkresearch.github.io/
-
-Spoken Language Models (SLMs), which take speech as both input and output, have gained increasing attention for enabling more natural human-computer interaction. While recent multimodal approaches incorporate speech encoders into Large Language Models (LLMs), they typically generate only text outputs and face significant challenges due to the modality gap between speech and text—particularly the mismatch in token lengths. In this work, we introduce TASTE (Text-Aligned Speech Tokenization and Embedding), a method designed to align speech token lengths with their textual counterparts, thereby addressing the modality gap during the tokenization stage. TASTE eliminates the need for explicit word-level alignments while preserving rich paralinguistic information. We demonstrate that TASTE enables efficient and effective adaptation of text-based LLMs into SLMs using parameter-efficient fine-tuning methods such as LoRA. Empirical results show that TASTE improves generation quality and significantly reduces computational cost during both training and inference, offering a scalable and high-performance solution for spoken language modeling.
+[[Demo](https://mtkresearch.github.io/TASTE-SpokenLM.github.io/)] [[Paper]()] [[Code](https://github.com/mtkresearch/TASTE-SpokenLM)]
+Large Language Models (LLMs) excel at text-based natural language processing tasks but remain constrained by their reliance on textual inputs and outputs. To enable more natural human-LLM interaction, recent work has focused on deriving spoken language models (SLMs) that can not only listen but also generate speech. A promising direction toward this goal is speech-text joint modeling; however, recent SLMs still lag behind text LLMs due to the modality mismatch, one significant instance of which is the difference in sequence length between speech and text tokens. To address this, we introduce <b>T</b>ext-<b>A</b>ligned <b>S</b>peech <b>T</b>okenization and <b>E</b>mbedding (<b>TASTE</b>), a method that tackles the modality gap by aligning speech tokens with the corresponding text transcription during the tokenization stage. We achieve this through a special aggregation mechanism, using speech reconstruction as the training objective. Extensive experiments show that TASTE preserves essential paralinguistic information while dramatically reducing the token sequence length. Furthermore, by leveraging TASTE, we can adapt text-based LLMs into effective SLMs with parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA). Experimental results on benchmark tasks, including SALMON and StoryCloze, demonstrate that TASTE-based SLMs perform comparably to previous full-fine-tuning methods. To our knowledge, TASTE is the first end-to-end approach that uses a reconstruction objective to automatically learn a text-aligned speech tokenization and embedding suitable for spoken language modeling.
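
The length alignment described in the abstract is the crux of TASTE: the speech token sequence is compressed so that it matches the text token sequence one-to-one. The snippet below is a purely illustrative sketch of that idea, not TASTE's actual architecture; all module names and hyperparameters are hypothetical. It uses a single cross-attention layer, a plausible stand-in for the paper's aggregation mechanism, in which each text token queries the much longer speech-frame sequence, so the output has exactly the text length:

```python
import torch
import torch.nn as nn

class TextAlignedAggregator(nn.Module):
    """Illustrative sketch only (not TASTE's actual module): each text-token
    embedding queries the speech-frame sequence via cross-attention,
    yielding one speech embedding per text token."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text_emb: torch.Tensor, speech_frames: torch.Tensor) -> torch.Tensor:
        # text_emb:      (B, T_text, D)   -- queries, one per text token
        # speech_frames: (B, T_speech, D) -- keys/values, typically T_speech >> T_text
        aligned, _ = self.attn(text_emb, speech_frames, speech_frames)
        return aligned  # (B, T_text, D): speech embeddings aligned to text length

# Toy usage: 300 speech frames are aggregated down to 12 text-aligned embeddings.
agg = TextAlignedAggregator()
aligned = agg(torch.randn(1, 12, 512), torch.randn(1, 300, 512))
print(aligned.shape)  # torch.Size([1, 12, 512])
```

Because the aligned sequence grows with the number of text tokens rather than the number of audio frames, the LLM backbone processes far shorter sequences, which is where the claimed training- and inference-time savings come from.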
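The LoRA adaptation mentioned in the abstract is standard parameter-efficient fine-tuning. A minimal sketch using the Hugging Face `peft` library might look as follows; the base checkpoint and the LoRA hyperparameters here are placeholder choices, not the settings used by TASTE:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder base model; TASTE's actual base checkpoint may differ.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

lora_cfg = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

With this setup the frozen text LLM keeps its pretrained weights, and only the small adapter matrices are updated when the model is trained on text-aligned speech embeddings.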