|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
language: |
|
- ta |
|
tags: |
|
- pretrained |
|
inference:
  parameters:
    temperature: 0.7
|
datasets: |
|
- Hemanth-thunder/tamil-madlad-400 |
|
--- |
|
# Model Card for Tamil-Mistral-7B-v0.1 |
|
|
|
The Tamil-Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, built on top of the Mistral-7B-v0.1 base model. This version extends the base tokenizer with roughly 20k additional Tamil tokens.
|
It was further pretrained on 1.19 million Tamil documents sourced from [MADLAD-400 (Multilingual Audited Dataset: Low-resource And Document-level)](https://arxiv.org/abs/2309.04662).
|
|
|
For full details of the base model, please read the Mistral AI [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
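
As a quick start, the model can be loaded like any other Mistral-family checkpoint with the 🤗 Transformers `AutoModelForCausalLM`/`AutoTokenizer` classes. A minimal sketch follows; the repository id, prompt, and generation settings are illustrative assumptions, apart from the temperature, which matches the inference default in the card metadata above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hemanth-thunder/Tamil-Mistral-7B-v0.1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory; drop on hardware without bf16 support
    device_map="auto",           # requires the `accelerate` package
)

prompt = "செயற்கை நுண்ணறிவு"  # "artificial intelligence"; any Tamil text works here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,  # the inference default declared in the card metadata
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```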
|
|
|
## Model Architecture |
|
|
|
Mistral-7B-v0.1 is a transformer model with the following architecture choices (see the config-inspection sketch after this list):
|
- Grouped-Query Attention |
|
- Sliding-Window Attention |
|
- Byte-fallback BPE tokenizer |
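
These choices are visible in the checkpoint's published configuration. A minimal inspection sketch, assuming the repository id below; the attribute names are the standard `transformers` `MistralConfig` fields:

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "Hemanth-thunder/Tamil-Mistral-7B-v0.1"  # assumed repository id

config = AutoConfig.from_pretrained(model_id)

# Grouped-query attention: several query heads share each key/value head.
print("query heads:    ", config.num_attention_heads)  # 32 in Mistral-7B-v0.1
print("key/value heads:", config.num_key_value_heads)  # 8 in Mistral-7B-v0.1

# Sliding-window attention: each token attends to at most this many predecessors.
print("sliding window: ", config.sliding_window)  # 4096 in Mistral-7B-v0.1

# The extended byte-fallback BPE tokenizer should be roughly 20k entries
# larger than the base model's 32,000-token vocabulary.
tokenizer = AutoTokenizer.from_pretrained(model_id)
print("vocab size:     ", len(tokenizer))
```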
|
|
|
## Troubleshooting |
|
|
|
- If you see the following error: |
|
``` |
|
KeyError: 'mistral' |
|
``` |
|
- Or: |
|
``` |
|
NotImplementedError: Cannot copy out of meta tensor; no data! |
|
``` |
|
|
|
Ensure you are using Transformers version 4.34.0 or newer.
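
Both errors typically mean the installed `transformers` predates Mistral support. One way to confirm the environment programmatically (a sketch; `packaging` ships as a `transformers` dependency):

```python
import transformers
from packaging import version

# Mistral support landed in transformers 4.34.0; older versions raise
# KeyError: 'mistral' when resolving the model type from the config.
if version.parse(transformers.__version__) < version.parse("4.34.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Mistral models; "
        "upgrade with: pip install -U 'transformers>=4.34.0'"
    )
print(f"transformers {transformers.__version__} supports the 'mistral' architecture.")
```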
|
|
|
## Notice |
|
|
|
Tamil-Mistral-7B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
|
|
|
## The Mistral AI Team |
|
|
|
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |