---
base_model:
- Sao10K/L3.1-70B-Hanami-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- crestf411/L3.1-nemotron-sunfall-v0.7.0
- tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3
- SicariusSicariiStuff/Negative_LLAMA_70B
- nbeerbower/llama3.1-kartoffeldes-70B
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
tags:
- mergekit
- merge
---
### exl3 quant

Check the repository revisions for the available quant sizes.

---
# Genetic Lemonade Unleashed

Inspired to learn merging by the Nevoria series from [SteelSkull](https://huggingface.co/Steelskull).
This model is the result of a few dozen different attempts at learning how to merge.
## Model Comparison
Designed for RP and creative writing, all three models focus on striking a balance between writing style, creativity, and intelligence. The basic differences between the models are below.
| Version | Strength | Weakness |
|---------|----------------|----|
| **Unleashed** | Well balanced | Somewhat censored |
| Final | Fully uncensored | Least intelligent |
| Sunset | Well balanced, most intelligent | GPTisms / weakest writing style |
## SillyTavern Settings
[Llam@ception](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception/tree/main/Llam%40ception) is recommended for sane defaults if you're unsure; import the presets into SillyTavern and they're plug and play.
### Sampler Settings
- Temp: 0.9-1.0
- MinP: 0.03-0.05
- Dry: 0.8, 1.75, 4

Temperature last, neutralize other samplers. This model natively strikes a balance of creativity and intelligence.
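
As a rough illustration of these defaults outside SillyTavern, the sketch below applies the recommended temperature and MinP through the Hugging Face `transformers` generation API. The repo id and prompt are placeholders, and DRY is omitted since it is a backend-specific sampler (llama.cpp-based backends and similar) rather than part of `transformers`.

```python
# Minimal sketch: applying the recommended samplers with transformers.
# Assumes a recent transformers version with min_p support; the repo id
# and prompt are placeholders, and DRY is omitted (backend-specific).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zerofata/L3.3-GeneticLemonade-Unleashed-70B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.95,  # recommended range 0.9-1.0
    min_p=0.04,        # recommended range 0.03-0.05
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```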
### Instruct
Use Llama-3-Instruct-Names, but you will need to uncheck "System same as user".
## Quants
### GGUF
- [Static quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-70B-GGUF)
- [Imatrix quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-70B-i1-GGUF)
### EXL2
- [4.5bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B-4.5bpw-h6-exl2)
- [6bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B-6bpw-h8-exl2)
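
As an example, one of the static GGUF quants linked above can be run locally with llama-cpp-python along the lines of the sketch below; the file name, context size, and sampler values are assumptions, so adjust them to the quant and hardware you actually use.

```python
# Minimal sketch: running a downloaded GGUF quant with llama-cpp-python.
# The file name below is a placeholder; use whichever quant size you
# pulled from the mradermacher repos linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="./L3.3-GeneticLemonade-Unleashed-70B.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,        # context length, adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if possible
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=0.95,
    min_p=0.04,        # min_p requires a reasonably recent llama-cpp-python
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```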
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method.
### merge_v6_base_E
```yaml
models:
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- model: nbeerbower/llama3.1-kartoffeldes-70B
- model: tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3
- model: SicariusSicariiStuff/Negative_LLAMA_70B
select_topk: 0.15
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
out_dtype: bfloat16
dtype: float32
tokenizer:
source: base
```
### Genetic Lemonade Unleashed
```yaml
models:
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- model: crestf411/L3.1-nemotron-sunfall-v0.7.0
- model: Sao10K/L3.1-70B-Hanami-x1
merge_method: sce
base_model: ./merge_v6_base_E
select_topk: 0.15
out_dtype: bfloat16
dtype: float32
tokenizer:
source: union
```
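
The two configs above are standard mergekit configs; a rough sketch of reproducing the two-stage merge by driving the `mergekit-yaml` CLI from Python is shown below. The config file names and output paths are assumptions, and the first stage has to finish before the second because the final config points its `base_model` at `./merge_v6_base_E`.

```python
# Minimal sketch: running the two SCE merge stages with mergekit's
# mergekit-yaml CLI. Config file names and output paths are placeholders;
# the base-model stage must complete first because the final config uses
# ./merge_v6_base_E as its base_model.
import subprocess

# Stage 1: build the intermediate base model (merge_v6_base_E config).
subprocess.run(
    ["mergekit-yaml", "merge_v6_base_E.yaml", "./merge_v6_base_E"],
    check=True,
)

# Stage 2: merge the creative models on top of the intermediate base.
subprocess.run(
    ["mergekit-yaml", "genetic_lemonade_unleashed.yaml", "./L3.3-GeneticLemonade-Unleashed-70B"],
    check=True,
)
```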