GoldenLlama-3.1-8B

GoldenLlama-3.1-8B is a merge of the following models using mergekit:

- Orenguteng/Llama-3.1-8B-Lexi-Uncensored
- NousResearch/Hermes-3-Llama-3.1-8B

The merge uses the passthrough method, stacking the first 25 layers of Lexi-Uncensored with the final 7 layers of Hermes-3, as shown in the configuration below.

🧩 Configuration


slices:
  - sources:
    - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
      layer_range: [0, 25]
  - sources:
    - model: NousResearch/Hermes-3-Llama-3.1-8B
      layer_range: [25, 32]
merge_method: passthrough
dtype: bfloat16
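
💻 Usage

The merged model loads like any other Llama-3.1 checkpoint. Below is a minimal usage sketch with the transformers library; the repository id bunnycore/GoldenLlama-3.1-8B comes from this card, while the prompt and generation settings are only illustrative (device_map="auto" assumes accelerate is installed).

# Minimal usage sketch; prompt and generation settings are illustrative only
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/GoldenLlama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

prompt = "Explain what a passthrough layer merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))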
