# Pathumma Whisper Large V3 (TH)

## Model Description

Additional information is needed.
## Quickstart

You can transcribe audio files using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class with the following code snippet:
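A minimal sketch of such a snippet is shown below. The hub model id is an assumption inferred from this card's title ("Pathumma Whisper Large V3 (TH)"); verify it against the model page before use.

```python
# Minimal sketch: Thai transcription with the Transformers ASR pipeline.
import torch
from transformers import pipeline

MODEL_ID = "nectec/Pathumma-whisper-th-large-v3"  # assumed hub id; check the model page

def build_transcriber():
    """Create an automatic-speech-recognition pipeline configured for Thai."""
    use_gpu = torch.cuda.is_available()
    pipe = pipeline(
        task="automatic-speech-recognition",
        model=MODEL_ID,
        torch_dtype=torch.float16 if use_gpu else torch.float32,
        device=0 if use_gpu else -1,
        chunk_length_s=30,  # Whisper operates on 30-second windows
    )
    # Force Thai transcription instead of automatic language detection.
    pipe.model.generation_config.language = "th"
    pipe.model.generation_config.task = "transcribe"
    return pipe

if __name__ == "__main__":
    transcriber = build_transcriber()
    print(transcriber("audio.wav")["text"])  # path to a local audio file
```

The `chunk_length_s` argument lets the pipeline handle audio longer than Whisper's 30-second receptive window by chunking and stitching transcripts.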
WER is calculated with the newmm tokenizer for Thai word segmentation.

**Note:** Models that are not fine-tuned on dialect datasets may be less representative of dialect performance.
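Because Thai is written without spaces between words, WER depends on the segmenter used. A minimal sketch of word-level WER over pre-segmented token lists follows; segmenting with PyThaiNLP's `word_tokenize(text, engine="newmm")` is an illustrative assumption, not necessarily this card's exact evaluation pipeline.

```python
# Sketch: word-level WER as Levenshtein distance over token lists.
# For Thai, segment text first, e.g. word_tokenize(text, engine="newmm")
# from PyThaiNLP (assumed tooling; the core metric works on any tokens).

def word_error_rate(ref_tokens, hyp_tokens):
    """WER = (substitutions + insertions + deletions) / reference length."""
    rows, cols = len(ref_tokens) + 1, len(hyp_tokens) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i  # cost of deleting i reference tokens
    for j in range(cols):
        d[0][j] = j  # cost of inserting j hypothesis tokens
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[-1][-1] / max(len(ref_tokens), 1)
```

With a different segmenter the same reference/hypothesis pair can yield a different WER, which is why the tokenizer choice is stated alongside the scores.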
## Limitations and Future Work

Additional information is needed.
## Acknowledgements

We extend our appreciation to the research teams involved in creating open speech models, including AIResearch, BiodatLab, Looloo Technology, SCB 10X, and OpenAI. We thank Dr. Titipat Achakulwisut of BiodatLab for the evaluation pipeline, and ThaiSC (the NSTDA Supercomputer Center) for providing the LANTA supercomputer used for model training, fine-tuning, and evaluation.