---
language:
- ta
license: apache-2.0
tags:
- pretrained
datasets:
- Hemanth-thunder/tamil-madlad-400
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
repetition_penalty: 1.15
---
# Model Card for Tamil-Mistral-7B-v0.1
The Tamil-Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model built on top of the 7-billion-parameter Mistral base model. It extends the base model's tokenization capability by adding roughly 20k Tamil tokens.
Additionally, it was pretrained on 1.19 million Tamil documents sourced from the Tamil portion of [MADLAD-400 (Multilingual Audited Dataset: Low-resource And Document-level)](https://arxiv.org/abs/2309.04662).

Pretraining time: 145 hours on an NVIDIA RTX A6000 (48 GB)
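The effect of the extended vocabulary can be checked by tokenizing the same Tamil text with the base and the extended tokenizers. Below is a minimal sketch, assuming you also have access to the `mistralai/Mistral-7B-v0.1` tokenizer; exact token counts will vary with the input:

```python
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tamil = AutoTokenizer.from_pretrained("Hemanth-thunder/Tamil-Mistral-7B-v0.1")

text = "ஐபிஎல் தொடரில் மும்பை இந்தியன்ஸ் அணி"
print(len(base.tokenize(text)))   # base tokenizer falls back to many byte-level pieces
print(len(tamil.tokenize(text)))  # added Tamil tokens cover the same text with fewer pieces
print(len(tamil))                 # total vocabulary size after the ~20k-token extension
```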
## Mistral model details
For full details of the base model, please read the Mistral [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
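These choices are visible in the model configuration. A minimal sketch, assuming the extended model keeps the base Mistral-7B settings (32 query heads, 8 key/value heads, a 4096-token sliding window):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Hemanth-thunder/Tamil-Mistral-7B-v0.1")
# Grouped-Query Attention: fewer key/value heads than query heads
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-Window Attention: per-layer attention span in tokens
print(config.sliding_window)
# Vocabulary size reflects the added Tamil tokens
print(config.vocab_size)
```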
[Kaggle Demo](https://www.kaggle.com/code/hemanthkumar21/tamil-mistral-7b-v0-1-demo/)
#### Running the model on a 16 GB GPU
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

# Load in float16 so the 7B model fits on a 16 GB GPU
model = AutoModelForCausalLM.from_pretrained(
    "Hemanth-thunder/Tamil-Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Hemanth-thunder/Tamil-Mistral-7B-v0.1", add_prefix_space=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

streamer = TextStreamer(tokenizer)  # prints tokens to stdout as they are generated
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer,
    do_sample=True, repetition_penalty=1.15, top_p=0.95, streamer=streamer,
)
pipe("ஐபிஎல் தொடரில் மும்பை இந்தியன்ஸ் அணி ", max_length=50)
```
```generated_text
ஐபிஎல் தொடரில் மும்பை இந்தியன்ஸ் அணி -3வது இடத்திற்கு முன்னேறி இருக்கிறது, இதனால் பிளே ஆஃப் வாய்ப்பை உறுதி செய்ய வேண்டும்.
இன்னும் 11 புள்ளிகள் மட்டுமே மீதமுள்ளது.சென்னை சூப்பர் கிங்சுக்கு 12 புள்ளிகளில் உள்ளது.
அதன் கடைசி லீக் போட்டி ஜூன் 23-ம் தேதி சென்னையில் நடைபெறுகிறது.
```
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are using a stable version of Transformers, 4.34.0 or newer.
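A quick way to check the installed version (a minimal sketch):

```python
import transformers

print(transformers.__version__)  # should be 4.34.0 or newer
# if older, upgrade with: pip install -U "transformers>=4.34.0"
```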
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
# How to Cite
```bibtex
@misc{Tamil-Mistral-7B-v0.1,
  url={https://huggingface.co/Hemanth-thunder/Tamil-Mistral-7B-v0.1},
  title={Tamil-Mistral-7B-v0.1},
  author={Hemanth Kumar}
}
```