• LLaMA-2 7B Chat model, with the vocabulary extended to 44,800 tokens for better Vietnamese understanding (see the usage sketch after this list).

  • Continually pre-trained on 2B Vietnamese tokens drawn from the VnNews corpus, ~10K vnthuquan books, and wikipedia_vi.

  • Fine-tuned on the infCapital/viet-llama2-ft-tiny dataset, a combination of various datasets translated into Vietnamese using OpenAI GPT-3.

  • For more information, contact me at [email protected] | http://fb.com/hungbui2013
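
A minimal usage sketch with the Hugging Face `transformers` library. It assumes the model loads through the standard `AutoModelForCausalLM` / `AutoTokenizer` API under the `infCapital/viet-llama2-ft` model ID; the prompt and generation parameters are illustrative, not values recommended by the author.

```python
# Sketch: load the model and check the extended Vietnamese vocabulary.
# Assumes a standard causal-LM checkpoint; dtype/device settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "infCapital/viet-llama2-ft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision on a single GPU
    device_map="auto",
)

# The extended vocabulary should be visible through the tokenizer.
print(len(tokenizer))  # expected around 44,800 per the model card

prompt = "Xin chào, bạn có thể giới thiệu về Việt Nam không?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```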
