EXL2 Quantizations of Qwen2.5-72B-Instruct-abliterated
Quantized with exllamav2 release 0.2.6.
Original model: https://huggingface.co/zetasepic/Qwen2.5-72B-Instruct-abliterated
Weights quantized to 6.5 bits per weight (bpw), with the lm_head layer at 8.0 bpw.
"quantization_config": {
"quant_method": "exl2",
"version": "0.2.6",
"bits": 6.5,
"head_bits": 8,
"calibration": {
"rows": 115,
"length": 2048,
"dataset": "(default)"
}
Model tree for Zenabius/Qwen2.5-72B-Instruct-abliterated-exl2-6.5bpw
- Base model: Qwen/Qwen2.5-72B
- Finetuned: Qwen/Qwen2.5-72B-Instruct
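EXL2 quants load through the exllamav2 Python API rather than plain `transformers`. A minimal loading sketch is below, assuming exllamav2 0.2.x is installed and the quantized weights have been downloaded to a local directory (the path and prompt are placeholders); a 6.5 bpw 72B model still needs on the order of 60+ GB of VRAM, so `load_autosplit` is used to spread it across available GPUs.

```python
# Sketch: load an EXL2-quantized model with exllamav2 (paths are hypothetical).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/models/Qwen2.5-72B-Instruct-abliterated-exl2-6.5bpw"

config = ExLlamaV2Config(model_dir)          # reads config.json from the quant dir
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)     # lazy cache for autosplit loading
model.load_autosplit(cache)                  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="Hello!", max_new_tokens=128)
print(output)
```

Check the exllamav2 examples for the generator class names in your installed version, as the generator API has changed across releases.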