# GGUF Quantised Models for Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged

This repository contains quantised GGUF-format model files for lewiswatson/Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.
## Original Model

The original fine-tuned model used to generate these quantisations can be found here: lewiswatson/Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged
## Provided Files (GGUF)

| File | Size |
|---|---|
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.IQ4_XS.gguf | 3.96 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q2_K.gguf | 2.81 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q3_K_L.gguf | 3.81 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q3_K_M.gguf | 3.55 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q3_K_S.gguf | 3.25 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf | 4.36 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q4_K_S.gguf | 4.15 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q5_K_M.gguf | 5.07 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q5_K_S.gguf | 4.95 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q6_K.gguf | 5.82 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.Q8_0.gguf | 7.54 GB |
| Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged.f16.gguf | 14.19 GB |
This repository was automatically created using a script on 2025-04-14.
## Model Tree for lewiswatson/Qwen2.5-7B-Instruct_Johnny_Silverhand_Merged-GGUF

- Base model: Qwen/Qwen2.5-7B
- Fine-tuned: Qwen/Qwen2.5-7B-Instruct