---
base_model: ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
language:
  - tr
license: llama3
pipeline_tag: text-generation
tags:
  - Turkish
  - turkish
  - Llama
  - Llama3
  - llama-cpp
  - matrixportal
---

# matrixportal/Turkish-Llama-8b-Instruct-v0.1-GGUF

This model was converted to GGUF format from ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1 using llama.cpp via ggml.ai's all-gguf-same-where space. Refer to the original model card for more details on the model.
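
For a quick way to try these files locally, here is a minimal sketch using the llama-cpp-python bindings. The filename glob, context size, and prompt are assumptions for illustration, not values taken from this repository; pick the exact `.gguf` name from the download list below.

```python
# Minimal sketch (assumptions noted in comments): load one quant from this
# repo with llama-cpp-python and run a chat completion.
#   pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportal/Turkish-Llama-8b-Instruct-v0.1-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: glob for the recommended quant; check the repo's file list
    n_ctx=4096,               # context window; tune for your hardware
)

# The base model is a Llama-3 instruct model, so the chat API is a natural fit.
out = llm.create_chat_completion(
    messages=[
        # Turkish: "Hello! Could you briefly introduce yourself?"
        {"role": "user", "content": "Merhaba! Kendini kısaca tanıtır mısın?"}
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```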

## ✅ Quantized Models Download List

✨ Recommended for CPU: Q4_K_M | ⚡ Recommended for ARM CPU: Q4_0 | 🏆 Best Quality: Q8_0

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---|:---|:---|
| Download | Q2_K | Basic quantization |
| Download | Q3_K_S | Small size |
| Download | Q3_K_M | Balanced quality |
| Download | Q3_K_L | Better quality |
| Download | Q4_0 | Fast on ARM |
| Download | Q4_K_S | Fast, recommended |
| Download | Q4_K_M | ⭐ Best balance |
| Download | Q5_0 | Good quality |
| Download | Q5_K_S | Balanced |
| Download | Q5_K_M | High quality |
| Download | Q6_K | 🏆 Very good quality |
| Download | Q8_0 | ⚡ Fast, best quality |
| Download | F16 | Maximum accuracy |

💡 Tip: Use F16 for maximum precision when quality is critical
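
If you would rather fetch a single file and load it from a local path (for example the F16 file when maximum precision matters, or the recommended Q4_K_M), a hedged sketch with huggingface_hub is shown below; the exact `.gguf` filename is an assumption and should be copied from the repository's file listing.

```python
# Sketch (filename is hypothetical): download one quant and load it locally.
#   pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="matrixportal/Turkish-Llama-8b-Instruct-v0.1-GGUF",
    filename="turkish-llama-8b-instruct-v0.1-q4_k_m.gguf",  # assumption: replace with the real file name
)

llm = Llama(model_path=model_path, n_ctx=4096)
result = llm(
    "Soru: Türkiye'nin başkenti neresidir?\nCevap:",  # Turkish: "Q: What is the capital of Türkiye? A:"
    max_tokens=64,
)
print(result["choices"][0]["text"])
```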