---
base_model:
- Sao10K/L3.1-70B-Hanami-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- crestf411/L3.1-nemotron-sunfall-v0.7.0
- tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3
- SicariusSicariiStuff/Negative_LLAMA_70B
- nbeerbower/llama3.1-kartoffeldes-70B
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
tags:
- mergekit
- merge
---
### exl3 quant
---
### check revisions for quants
---

# Genetic Lemonade Unleashed

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d2a2a107fbee/HFCaVzRpiE05Y46p41qRy.png)

The Nevoria series from [SteelSkull](https://huggingface.co/Steelskull) inspired me to learn how to merge.

This model is the result of a few dozen merge attempts made while learning.

## Model Comparison

Designed for RP and creative writing, all three models focus on striking a balance between writing style, creativity, and intelligence. The basic differences between them are summarized below.

| Version | Strength | Weakness |
|---------|----------|----------|
| **Unleashed** | Well balanced | Somewhat censored |
| Final | Fully uncensored | Least intelligent |
| Sunset | Well balanced, most intelligent | GPTisms / weakest writing style |

## SillyTavern Settings

[Llam@ception](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception/tree/main/Llam%40ception) is recommended for sane defaults if you're unsure; import the presets into SillyTavern and they're plug and play.

### Sampler Settings
- Temp: 0.9-1.0
- MinP: 0.03-0.05
- DRY: 0.8, 1.75, 4

Apply temperature last and neutralize the other samplers. This model natively strikes a balance between creativity and intelligence.
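
If you're running the model outside SillyTavern, the same values can be applied directly. Below is a minimal sketch using `transformers` (the repo id is inferred from the quant links below; temperature and min-p are native `transformers` samplers, while DRY only exists in frontends/backends such as SillyTavern or koboldcpp):

```python
# Minimal sketch, not the author's setup: recommended samplers via transformers.
# Assumes transformers >= 4.39 (for min_p) and enough memory for a 70B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zerofata/L3.3-GeneticLemonade-Unleashed-70B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the tavern as I push open the door."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.95,    # recommended 0.9-1.0
    min_p=0.03,          # recommended 0.03-0.05
    top_p=1.0, top_k=0,  # neutralize the other samplers
    max_new_tokens=512,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that `transformers` applies its samplers in a fixed internal order, so "temperature last" is only controllable in backends that expose sampler ordering.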

### Instruct

Use Llama-3-Instruct-Names, but you will need to uncheck "System same as user".
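
For reference, unchecking "System same as user" makes the frontend send the system prompt under its own `system` header instead of folding it into a user turn. A rough sketch of the Llama 3 prompt shape this produces (my reading of the Names-style preset is that character names go in the role headers; the names below are made up):

```python
# Sketch of the Llama 3 instruct layout a Names-style preset targets.
# With "System same as user" unchecked, the system prompt keeps the
# "system" role header rather than being sent as a user message.
def turn(role: str, content: str) -> str:
    return f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"

prompt = (
    "<|begin_of_text|>"
    + turn("system", "You are Bob, a grizzled innkeeper.")
    + turn("Alice", "Hi! *waves*")                    # character name as role
    + "<|start_header_id|>Bob<|end_header_id|>\n\n"   # open header for the reply
)
print(prompt)
```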

## Quants

### GGUF
- [Static quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-70B-GGUF)
- [Imatrix quants by mradermacher](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-70B-i1-GGUF)

### EXL2
- [4.5bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B-4.5bpw-h6-exl2)
- [6bpw](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B-6bpw-h8-exl2)
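
Per the "check revisions" note at the top, some quants (e.g. EXL3) are published as revisions (branches) of a repo rather than as separate repos. A small sketch of pulling a specific revision with `huggingface_hub`; the revision below is a placeholder, so check the repo's "Files and versions" tab for the real branch names:

```python
# Sketch: download a specific quant revision (branch) of a repo.
# repo_id is real (linked above); the revision name is a placeholder.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="zerofata/L3.3-GeneticLemonade-Unleashed-70B-4.5bpw-h6-exl2",
    revision="main",  # swap in the quant branch you actually want
    local_dir="./GeneticLemonade-Unleashed-exl2",
)
print(path)
```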

## Merge Details
### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method.

### merge_v6_base_E
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  - model: nbeerbower/llama3.1-kartoffeldes-70B
  - model: tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
select_topk: 0.15
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
out_dtype: bfloat16
dtype: float32
tokenizer:
  source: base
```

### Genetic Lemonade Unleashed

```yaml
models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  - model: crestf411/L3.1-nemotron-sunfall-v0.7.0
  - model: Sao10K/L3.1-70B-Hanami-x1
merge_method: sce
base_model: ./merge_v6_base_E
select_topk: 0.15
out_dtype: bfloat16
dtype: float32
tokenizer:
  source: union
```
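
To reproduce the merge, run the first config to produce `./merge_v6_base_E`, then run the second, which consumes it as its `base_model`. A sketch using mergekit's Python API, mirroring mergekit's documented example (file names are assumed; `mergekit-yaml <config> <out_dir>` on the CLI does the same):

```python
# Sketch: execute one of the YAML configs above with mergekit.
# File/output names are assumed; options vary between mergekit versions.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_v6_base_E.yaml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./merge_v6_base_E",  # output dir; the second config's base_model
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```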