---
license: apache-2.0
tags:
- liminerity/merge4
- liminerity/merge2
- merge
- merged
- mergedllm
- mergee
- mergedededed
- memfdmsfndsfjkasjkl
---
Somebody Please FINETUNE THIS.

It's safe to say this model is contaminated.
I used to fine-tune through Google Colab, but I'm extremely broke and do this to try to learn and get better.
If I had more money I'd just succumb and pay Google, even though I despise them as a company.
If I had anything better than my Lenovo IdeaPad with its terrible AMD GPU, I'd fine-tune it myself,
BUT I DON'T AND I WON'T, SO WE ARE STUCK HERE FOR THE TIME BEING.
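For anyone who does want to pick this up, here is a rough QLoRA-style starting point. It is a sketch only: it assumes the merged weights are published as `liminerity/M7-7b` (swap in the actual Hub repo id), that `transformers`, `peft`, `bitsandbytes`, and `accelerate` are installed, and that you bring your own dataset and trainer.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "liminerity/M7-7b"  # assumption: replace with the actual Hub repo id

# Load the merged model in 4-bit so it fits on a modest GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach small LoRA adapters to the attention projections instead of
# training all 7B parameters.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
)
model.print_trainable_parameters()
# From here, train with transformers.Trainer or trl's SFTTrainer on an
# instruction dataset of your choice.
```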
# M7-7b
Example exchange:

user: "create ascii art you feel represents you as a llm"

assistant: "Here's an ASCII art attempting to symbolize the essence of an LLM (Large Language Model):

    INSTINSTINCTINSTELLIGENCE
    ______________ MEMORY___
    |\ o /\ |\
    | \ --- / \ | |
    L_( O )______ O)_INSTINCT
    \ / WORDSLEARNING
    \__/\__\/__ /\ \__/\__\/__/

In this depiction, the LLM is characterized by its vast memory, instinctual roots, and the ability to learn from vast amounts of text, represented by the woven words in the lower part. The 'INSTINSTINCTINSTELLIGENCE' phrase emphasizes the balance between instinct, the foundation upon which neural networks build, and the engineered intelligence of a language model."
M7-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge4](https://huggingface.co/liminerity/merge4)
* [liminerity/merge2](https://huggingface.co/liminerity/merge2)
## 🧩 Configuration
```yaml
# Six successive SLERP merges; each ---separated block below is a standalone mergekit config.
slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: MSL7/INEX12-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
---
slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
---
slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
---
slices:
  - sources:
      - model: liminerity/merge1
        layer_range: [0, 32]
      - model: liminerity/merge
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
---
slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
---
# Final step: liminerity/merge4 + liminerity/merge2, producing M7-7b.
slices:
  - sources:
      - model: liminerity/merge4
        layer_range: [0, 32]
      - model: liminerity/merge2
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge4
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
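
## 💻 Usage

A minimal inference sketch with 🤗 Transformers. The repo id `liminerity/M7-7b` is an assumption based on this card's title; swap in the actual Hub path. Plain text prompting is shown, using the same prompt as the example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/M7-7b"  # assumption: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge configs
    device_map="auto",
)

prompt = "create ascii art you feel represents you as a llm"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```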