How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = "fiveflow/KoLlama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",       # requires the `accelerate` package
    # load_in_4bit=True,     # optional 4-bit loading (requires `bitsandbytes`)
    low_cpu_mem_usage=True,
)

pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
```
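A generation call with the pipeline above might look like the following sketch. The prompt text, chat-template usage, and sampling parameters (`max_new_tokens`, `temperature`) are illustrative assumptions, not settings documented for this model:

```python
# Minimal usage sketch; assumes the tokenizer ships a Llama-3 chat template.
messages = [{"role": "user", "content": "Hello! Please introduce yourself briefly."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Sampling parameters here are placeholders; tune them for your use case.
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```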