Adding Evaluation Results · #77 opened over 1 year ago by leaderboard-pr-bot
Update README.md · #76 opened over 1 year ago by luv2261
AttributeError when using a specific GPU · #75 opened over 1 year ago by Rohith1016
Tokenizer adds space between sentence start and instruction start · 1 · #74 opened over 1 year ago by ldavid
How to pass system prompts when using Mistral? · 6 · #73 opened over 1 year ago by luissimoes
Mistral with LangChain: what is the Model_Type? · #72 opened over 1 year ago by luissimoes
Ready to use Mistral-7B-Instruct-v0.1 with LangChain via the Hugging Face Inference API · 1 · #70 opened over 1 year ago by unixguru2k
Error while doing inference · 3 · 1 · #69 opened over 1 year ago by SivaPrasad02
Often "\_" generated for "_" in code generation.
#68 opened over 1 year ago
by
heejune
Template for chat · 2 · #67 opened over 1 year ago by Coinman
Mistral doesn't have a `pad_token_id`? 🤔 · 2 · #66 opened over 1 year ago by ingo-m
Multilingual Support · 1 · #65 opened over 1 year ago by abrehmaaan
How do I get the confidence score of an answer? I want to use this model for a question answering task. · 3 · #64 opened over 1 year ago by prajwalJumde
Align tokenizer_config chat template with docs.mistral.ai chat template · 1 · #63 opened over 1 year ago by eliseobao
Datasets used · 12 · 2 · #62 opened over 1 year ago by dlowl
Doesn't work when I deploy it in Spaces · #61 opened over 1 year ago by asg145
[Request] Train a Medusa version of the model using the original training data · #58 opened over 1 year ago by narai
`max_position_embeddings=32768` with "attention span of 131K tokens" · 1 · #57 opened over 1 year ago by Nadav-Timor
Mistral in French · #56 opened over 1 year ago by YorelNation
Is there a way to access the decoder portion of this model? · #54 opened over 1 year ago by doofango
apply_chat_template result of Mistral is not strictly aligned with the template on its website · 1 · #53 opened over 1 year ago by Annorita
Update config.json · #52 opened over 1 year ago by lukelv
[AUTOMATED] Model Memory Requirements · #50 opened over 1 year ago by model-sizer-bot
Prompt template for question answering · 19 · #49 opened over 1 year ago by gxxxz
Ideal tokenizer with Sentence Transformer? · #48 opened over 1 year ago by mahimairaja
Which padding side to choose while fine-tuning? · 12 · 4 · #47 opened over 1 year ago by parikshit1619
"Killed" message · 1 · #46 opened over 1 year ago by TwoCats17
Install latest bitsandbytes · 13 · #44 opened over 1 year ago by codegood
System Prompt · 3 · #41 opened over 1 year ago by sakshat98
Mistral rules · 1 · 1 · #40 opened over 1 year ago by iNeverLearnedHowToRead
LangChain with Mistral · 1 · #38 opened over 1 year ago by 19Peppe95
Update README.md · #36 opened over 1 year ago by Ahelaraj
ValueError: Please specify `target_modules` in `peft_config` · 3 · #34 opened over 1 year ago by Tapendra
Error while running the model · 1 · #33 opened over 1 year ago by adityaasish
Possibilities of injecting a custom system prompt into Mistral? · 5 · #32 opened over 1 year ago by TikaToka
Deployment on a SageMaker endpoint with the Text Generation Inference container does not work · 1 · #30 opened over 1 year ago by AnOtterDeveloper
Ready-to-use Mistral-7B-Instruct-v0.1-GGUF model as an OpenAI-API-compatible endpoint · 3 · 13 · #29 opened over 1 year ago by limcheekin
Any idea on how to do few-shot prompting with the tokenizer provided in the transformers library? · #28 opened over 1 year ago by ajinkyaathlye
Problem with using the Mistral AI model API from Hugging Face · 1 · 6 · #26 opened over 1 year ago by Hawks101
Just a quick test, but how good is it? · 6 · #25 opened over 1 year ago by Stelarion
Does it work with local Open Interpreter, and how many GB of RAM are required? · 1 · #24 opened over 1 year ago by aiworld44
Sharing a script to run a local streaming chat interface · 2 · #20 opened over 1 year ago by tarruda
Is the Instruct version also licensed for commercial use? · 9 · #18 opened over 1 year ago by jianguozhang001
Highest score yet out of a plain vanilla 7B model! · 1 · #17 opened over 1 year ago by silvacarl
What is the context size of this model? · 6 · #15 opened over 1 year ago by saikatkumardey
How to fine-tune? · 14 · #12 opened over 1 year ago by NickyNicky
Missing parameters · 2 · 2 · #10 opened over 1 year ago by NS-Y
How is this model different from Llama 2-7B? · 7 · #8 opened over 1 year ago by dheerajpai
How do I run this model? · 18 · #7 opened over 1 year ago by dheerajpai
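
Several of the threads above ask the same basic usage question (#7 "How do I run this model?", #67 "Template for chat", #53 on apply_chat_template, #66 on the missing pad token). As a reference point, not taken from any individual discussion, here is a minimal sketch of the standard transformers flow for `mistralai/Mistral-7B-Instruct-v0.1`; it assumes a GPU with enough memory for fp16 weights, and the prompt text is an illustrative placeholder.

```python
# Minimal sketch: load Mistral-7B-Instruct-v0.1 and generate a reply
# using the tokenizer's built-in chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# apply_chat_template wraps the conversation in the [INST] ... [/INST]
# format the Instruct model expects.
messages = [{"role": "user", "content": "Explain self-attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# The model ships without a pad token (see #66), so reuse EOS for generation.
outputs = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```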