---
base_model:
- bunnycore/Llama-3.2-3b-RP-Toxic-Fuse
- bunnycore/Llama-3.2-3B-ToxicKod
- bunnycore/Llama-3.2-3b-RP-Toxic-R1
- unsloth/Llama-3.2-3B-Instruct
- bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora
- bunnycore/Llama-3.2-3B-KodCode-R1
library_name: transformers
tags:
- mergekit
- merge
---

I expect this model to be uncensored, capable of role-playing, and able to do a little reasoning.

## Thinking Mode:

To enable thinking mode, use the following system prompt:

```
Always think and do evidence-based reasoning, and always reason before any answer. Always self-critique and self-question, and always wrap your thinking in <think> </think>
answer here
```

### Models Merged

The following models were included in the merge:

* [bunnycore/Llama-3.2-3b-RP-Toxic-Fuse](https://huggingface.co/bunnycore/Llama-3.2-3b-RP-Toxic-Fuse)
* [bunnycore/Llama-3.2-3B-ToxicKod](https://huggingface.co/bunnycore/Llama-3.2-3B-ToxicKod)
* [bunnycore/Llama-3.2-3b-RP-Toxic-R1](https://huggingface.co/bunnycore/Llama-3.2-3b-RP-Toxic-R1)
* [bunnycore/Llama-3.2-3b-RP-Toxic-Fuse](https://huggingface.co/bunnycore/Llama-3.2-3b-RP-Toxic-Fuse) + [bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora](https://huggingface.co/bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora)
* [bunnycore/Llama-3.2-3B-KodCode-R1](https://huggingface.co/bunnycore/Llama-3.2-3B-KodCode-R1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: bunnycore/Llama-3.2-3b-RP-Toxic-Fuse+bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora
    parameters:
      weight: 0.3
  - model: bunnycore/Llama-3.2-3b-RP-Toxic-R1
  - model: bunnycore/Llama-3.2-3B-ToxicKod
  - model: bunnycore/Llama-3.2-3B-KodCode-R1
  - model: bunnycore/Llama-3.2-3b-RP-Toxic-Fuse
base_model: unsloth/Llama-3.2-3B-Instruct+bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: unsloth/Llama-3.2-3B-Instruct
```
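
### Reproducing the Merge

The merge can be re-run by feeding the YAML configuration above to [mergekit](https://github.com/arcee-ai/mergekit). The sketch below uses mergekit's Python entry point; the `config.yaml` and `./merged` paths are placeholders, and the `mergekit-yaml` CLI can be used instead.

```python
# Sketch: re-run the model_stock merge from the YAML config above.
# Assumes mergekit is installed (pip install mergekit) and the config is saved as config.yaml (placeholder path).
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the unsloth/Llama-3.2-3B-Instruct tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```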
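
### Example Usage

For inference, the Thinking Mode text above can be passed as the system message. The following is a minimal sketch using the `transformers` text-generation pipeline; the local model path and the example question are placeholders.

```python
# Sketch: chat with the merged model using the Thinking Mode system prompt.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./merged",  # placeholder: local merge output or the Hub repo id of this model
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "Always think and do evidence-based reasoning, and always reason before any answer. "
            "Always self-critique and self-question, and always wrap your thinking in <think> </think>"
        ),
    },
    {"role": "user", "content": "Explain why the sky looks red at sunset."},  # placeholder question
]

result = generator(messages, max_new_tokens=512, do_sample=True, temperature=0.7)
# The pipeline returns the full chat, with the assistant reply appended as the last message.
print(result[0]["generated_text"][-1]["content"])
```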