---
license: apache-2.0
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
datasets:
- iamketan25/roleplay-instructions-dataset
- N-Bot-Int/Iris-Uncensored-R1
- N-Bot-Int/Moshpit-Combined-R2-Uncensored
- N-Bot-Int/Mushed-Dataset-Uncensored
- N-Bot-Int/Muncher-R1-Uncensored
- N-Bot-Int/Millia-R1_DPO
language:
- en
base_model:
- N-Bot-Int/MiniMaid-L1
pipeline_tag: text-generation
library_name: peft
metrics:
- character
- bleu
- rouge
---
# THIS IS THE FINAL MiniMaid-L Series, because we've hit the ceiling for a 1B model! Thank you so much for your support!
  - If you loved our Models, then please consider donating and supporting us through Ko-fi!
  - [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)
  
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/TtK1uzcfc0FL0JFHWYtod.png)
# MiniMaid-L3
- Introducing the MiniMaid-L3 model! A brand-new finetune of our MiniMaid-L2 architecture, allowing for even more coherent and
  immersive roleplay through the use of knowledge distillation!

- MiniMaid-L3 is a small update to L2 that uses knowledge distillation to combine our L2 architecture with MythoMax, a popular
  roleplaying model that itself uses a model-combination technique to create MythoMax-7B. MiniMaid-L3, in turn, distills
  MythoMax's knowledge into MiniMaid-L2, producing a more capable model that outcompetes its predecessor in roleplaying
  scenarios and even beats MiniMaid-L2's BLEU score! (A rough sketch of the distillation idea follows below.)
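
To make the distillation idea concrete, here is a minimal, generic knowledge-distillation loss sketch in PyTorch. It is purely illustrative: the actual NKDProtoc pipeline is proprietary, and the teacher/student pairing (a MythoMax-style teacher distilled into the MiniMaid-L2 student) and every hyperparameter shown are assumptions.

```python
# Generic knowledge-distillation loss sketch (illustrative only; the actual
# NKDProtoc pipeline used for MiniMaid-L3 is proprietary and may differ).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft teacher-matching term with the usual next-token CE term."""
    # Soft targets: push the student's token distribution toward the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy on the roleplay training labels.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * soft + (1.0 - alpha) * hard
```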


# MiniMaid-L1 Base-Model Card Procedure:
- **MiniMaid-L1** achieves good performance through a process of DPO and combined heavy finetuning. To prevent overfitting,
  we used high LR decay and introduced randomization techniques to keep the model from simply memorizing its data.
  However, since training on Google Colab is difficult, the model might underperform or underfit on specific tasks,
  or overfit on knowledge it managed to latch onto! Please be assured that we did our best, and it will improve as we
  move onwards (a rough DPO sketch follows just below).
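
For context on the DPO step, here is a minimal sketch of a preference-optimization pass using TRL's `DPOTrainer` with a LoRA adapter. This is not the exact MiniMaid recipe: the dataset column layout, LoRA rank, learning rate, and other settings below are assumptions for illustration only.

```python
# Illustrative DPO sketch (TRL + PEFT/LoRA); NOT the exact MiniMaid training recipe.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "N-Bot-Int/MiniMaid-L1"  # base model listed in this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Assumes the preference dataset exposes "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("N-Bot-Int/Millia-R1_DPO", split="train")

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values
args = DPOConfig(
    output_dir="minimaid-dpo",
    per_device_train_batch_size=1,
    learning_rate=5e-6,              # assumed; the card only mentions "high LR decay"
    lr_scheduler_type="cosine",
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # use `tokenizer=` on older TRL releases
    peft_config=peft_config,
)
trainer.train()
```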

- MiniMaid-L3 is another instance of our smallest model yet! If you find any issue, please don't hesitate to email us at
  [[email protected]](mailto:[email protected])
  about any overfitting, or with suggestions for the future model **V4**.
  Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page,
  and if you extend its **dataset**, please handle it with care and ethical consideration.


  
- MiniMaid-L3 is
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Parent model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-1bit
  - **Dataset combined using:** NKDProtoc (proprietary software)

- MiniMaid-L3 Official Metric Score
    ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/YMsycgud2ofbj4WLR-C4V.png)
    - Metrics made by **ItsMeDevRoland**, comparing:
       - **MiniMaid-L2 GGUF**
       - **MiniMaid-L3 GGUF**
      All ranked with the same prompt, same temperature, and same hardware (Google Colab),
      to properly showcase the differences and strengths of the models.

    - **Visit below to see details!**

---
# 🧵 MiniMaid-L3: Slower Steps, Deeper Stories — The Immersive Upgrade
> "She’s more grounded, more convincing — and when it comes to roleplay, she’s in a league of her own."
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/nae9F2HNg1fRDVegkkA9j.png)
---

# MiniMaid-L3 doesn’t just iterate — she elevates. Built on L2’s disciplined architecture, L3 doubles down on character immersion and emotional coherence, refining every line she delivers.
- 💬 Roleplay Evaluation (v2)
- 🧠 Character Consistency: 0.54 → 0.55 (+)
- 🌊 Immersion: 0.59 → 0.66 (↑)
- 🎭 Overall RP Score: 0.72 → 0.75
> L3’s immersive depth marks a new high in believability and emotional traction — she's not just playing a part, she becomes it.

# 📊 Slower, But Smarter
- 🕒 Inference Time: 39.1s (↑ from 34.5s)
- ⚡ Tokens/sec: 6.61 (slight dip)
- 📏 BLEU/ROUGE-L: Mixed — slight BLEU gain, ROUGE-L softened
> Sure, she takes her time, but it’s worth it: L3 trades a few extra seconds for measured, thoughtful outputs that stick the landing.
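
For anyone who wants to reproduce a rough version of the BLEU/ROUGE-L comparison, here is a minimal sketch using the Hugging Face `evaluate` library. The prediction and reference strings are placeholders, not the prompts or references from ItsMeDevRoland's benchmark.

```python
# Rough BLEU / ROUGE-L sketch with the `evaluate` library; the strings below are
# placeholders, not the official MiniMaid benchmark data.
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

prediction = "The maid bows and welcomes you into the candlelit hall."
reference = "The maid curtsies and welcomes you into the candlelit hall."

bleu_score = bleu.compute(predictions=[prediction], references=[[reference]])["bleu"]
rouge_l = rouge.compute(predictions=[prediction], references=[reference])["rougeL"]
print(f"BLEU: {bleu_score:.3f}  ROUGE-L: {rouge_l:.3f}")
```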

# 🎯 Refined Roleplay, Recalibrated Goals
  - MiniMaid-L3 isn’t trying to be the fastest. She’s here to be real — holding character, deepening immersion, and generating stories that linger.
  - 🛠️ Designed For:
    - Narrative-focused deployments
    - Long-form interaction and memory retention
    - Low-size, high-fidelity simulation
---
> “MiniMaid-L3 sacrifices a bit of speed to speak with soul. She’s no longer just reacting — she’s inhabiting. It’s not about talking faster — it’s about meaning more.”
# MiniMaid-L3 is the slow burn that brings the fire.
---

- # Notice
  - **For a good experience, please use:**
    - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (see the loading example below)
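
A minimal loading-and-generation sketch with `transformers` + `peft`, using the settings recommended above. The base and adapter repo ids below follow this card's metadata and are assumptions (use the actual repository ids), and `min_p` sampling requires a reasonably recent `transformers` release.

```python
# Illustrative loading/generation sketch; the adapter id is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "N-Bot-Int/MiniMaid-L1"      # base model listed in this card
adapter_id = "N-Bot-Int/MiniMaid-L3"   # assumed LoRA adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "You are a wandering bard. Greet the party."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Settings recommended above: temperature 1.5, min_p 0.1, max_new_tokens 128.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```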


- # Detail card:
  - Parameters:
    - 1 Billion Parameters
    - (Please check your GPU vendor's specifications to confirm your hardware can run a 1B model)

  - Finetuning tool:
    - Unsloth AI (see the setup sketch below)
      - This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
        [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
    - Fine-tuned using:
      - Google Colab
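
For reference, a rough sketch of the kind of Unsloth + LoRA setup this card describes (load in 4-bit, attach a LoRA adapter, then train with TRL). The sequence length, LoRA rank, and target modules below are assumptions, not the published MiniMaid configuration.

```python
# Rough Unsloth + LoRA setup sketch; settings are assumptions, not the exact recipe.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="N-Bot-Int/MiniMaid-L1",  # base model listed in this card
    max_seq_length=2048,                 # assumed context length
    load_in_4bit=True,                   # fits on a free Colab GPU
)

# Attach a LoRA adapter; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here, training typically proceeds with TRL (e.g. SFTTrainer or DPOTrainer)
# on the roleplay datasets listed in the metadata above.
```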