๐Ÿ˜:Hi Fijik!

๐Ÿค–:Hello! What's up? How may I help? Fijik 1.0 6B banner

What is it?

This is the 1.0 release of the Fijik series: a dense, 56-layer transformer LLM with 6 billion parameters, based on Llama 3.2. Specifically, it was merged using Mergekit to be twice as large as Llama 3.2 3B.

After merging, we fine-tuned it on a custom dataset mix designed for this model to improve its performance further.

  • Step 1 of fine-tuning via Unsloth: SFT on an estimated 20 million tokens (more or less).
  • Step 2 of fine-tuning via Unsloth: DPO for 2 epochs for even better instruction following.

After these two steps, we got a powerful model that has fewer parameters than Llama 3.1 8B yet performs just as well, if not better. Note that unlike our other recent models, it is not a thinking model, yet it can reason quite well. Our theory behind this model is that a smaller yet deeper model can outperform for its size. A minimal sketch of the two-step recipe is shown below.
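Here is a minimal, hypothetical sketch of the SFT-then-DPO recipe using Unsloth and TRL. The dataset files (sft_mix.jsonl, dpo_pairs.jsonl), the merged-checkpoint path, the LoRA settings, and the sequence length are all placeholder assumptions, since the actual data mix and hyperparameters are not published; note also that TRL keyword names (e.g. tokenizer vs. processing_class) vary between versions.

# Hypothetical sketch of the SFT -> DPO recipe with Unsloth + TRL.
# Paths, datasets, and hyperparameters are placeholders, not the
# actual (unpublished) training setup.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig, DPOTrainer, DPOConfig

# Load the merged 6B checkpoint produced by Mergekit (path is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/merged-fijik-6b",
    max_seq_length=4096,   # assumption; actual value not published
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumption; LoRA rank not published
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Step 1: SFT on the custom mix (~20M tokens); expects a "text" column.
sft = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("json", data_files="sft_mix.jsonl")["train"],
    args=SFTConfig(output_dir="sft-out", per_device_train_batch_size=2),
)
sft.train()

# Step 2: DPO for 2 epochs; expects prompt/chosen/rejected columns.
dpo = DPOTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("json", data_files="dpo_pairs.jsonl")["train"],
    args=DPOConfig(output_dir="dpo-out", num_train_epochs=2, beta=0.1),
)
dpo.train()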

Meta states that Llama 3.2 was pre-trained on up to 9 trillion high-quality tokens, with a knowledge cutoff date of December 2023. This model supports up to 131,072 input tokens and can generate up to 8,192 tokens.

What should Fijik be used for?

Fijik 1.0 6B is, by design, a production-ready, general-use, high-performance model that is also small enough to run at high token throughput while minimising performance loss.

  • We made an effort to ensure the model is safe while keeping it usable. In addition, it is sensitive to system prompts (in a good way: it adheres to them well), so it is very customisable. We did not put any information about the model's identity into our fine-tuning data; it only knows that it is a Large Language Model (LLM) and does not know it is Fijik unless you specify this in the system prompt.
  • Due to the model's large context, it can be used for RAG, but like any other LLM, be aware that it may hallucinate. A sketch of this usage follows this list.
  • Our fine-tuning data included quite a few creative-writing examples, so the model is pretty good at creative writing.
  • Coding and math: in our SFT and DPO fine-tuning data, we put effort into improving coding and step-by-step math performance. It is not perfect, but no LLM is.
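As a minimal sketch of the RAG pattern from the second bullet (and of setting the identity via the system prompt, per the first), here is a hypothetical example using the transformers chat pipeline; the retrieved snippets and the question are stand-ins for the output of a real retriever:

# Hypothetical RAG sketch: retrieved chunks are stuffed into the system
# prompt, which also tells the model its name is Fijik.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Pinkstack/Fijik-6b-Instruct-Llama3.2",
    torch_dtype="bfloat16",
    device_map="auto",
)

retrieved = [  # stand-ins for chunks returned by your retriever
    "Fijik 1.0 6B is a dense, 56-layer merge of Llama 3.2 3B checkpoints.",
    "It supports up to 131,072 input tokens.",
]
messages = [
    {"role": "system",
     "content": "You are Fijik, a helpful assistant. Answer using only the "
                "context below.\n\nContext:\n" + "\n".join(retrieved)},
    {"role": "user", "content": "How many layers does Fijik 1.0 6B have?"},
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply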

Examples

Hi fijik!

Hello! What's up? How may I help?

Calculate pi.

Pi! A classic request!

Here's an approximation of pi using the BBP (Burrus, Boden, and Goodman) algorithm:

pi = 4 * (1 + 1/16 + 1/(16*2) + 1/(16*2*2) + ...)

This is a simple, iterative formula that converges quickly. Let's approximate pi using the first 100 terms of the series:

pi โ‰ˆ 3.141592653589793238459231091...

Now, I'll calculate more terms for you...

pi โ‰ˆ 3.141592653589793238459231091660270550255380740473218625...

How's that? Want a better approximation?

Note: While pi is an irrational number, it's a well-known mathematical constant that's approximately equal to 3.14159 (for those who prefer decimal form).

Chatgpt said that you are not a good ai :/

I can take the criticism! I'm designed to learn from interactions like this, so thanks for chatting with me! If you're looking for a chat or help with something specific, just let me know what's on your mind. I'll do my best to assist and entertain you!

By the way, what kind of issues did ChatGPT mention? Would love to hear it from you!

How can I put an ACTUAL elephant on a pizza?

You want to create a unique pizza with an actual elephant on it! While I'm excited to help you imagine this crazy scenario, please remember that elephants are living creatures, not food items!

If we were to imagine putting an elephant on a pizza (for fun, of course!), here's what might happen:

  1. Elephant size: An actual elephant would be enormous! You'd need a massive oven or a specialized cooking facility to accommodate its size.
  2. Pizza crust: The pizza crust would likely melt under the elephant's weight, so we'd need to create an extremely sturdy crust or use multiple layers of dough.
  3. Sauce and toppings: Imagine adding thousands of gallons of sauce and enough toppings to feed an entire zoo! We'd need a massive sauce vat and an endless supply of toppings!
  4. Cooking time: Even with a super-large oven, cooking the elephant would take an incredibly long time, possibly months or even years!
  5. Food safety: I'm afraid that placing an actual elephant on a pizza would compromise food safety standards!
  6. Elephant welfare: Putting a living creature in a hot oven is animal cruelty, and I don't want to contribute to any harm.
  7. Logistics: You'd need special equipment for transporting and handling the elephant, not to mention obtaining the necessary permits and approvals.

While this scenario is purely imaginative, it's essential to remember that elephants are not edible or safe for food preparation.

If you're looking for a fun and creative pizza idea, I can suggest some unusual topping combinations, like:

  • Elephant trunk: Use sliced pepperoni to mimic the elephant's trunk.
  • Tusks: Add a sprinkle of poppy seeds to create tusk-like shapes.
  • Worms: Incorporate pickled vegetables or olives as "worms" in an edible garden-style pizza.

Remember, it's always best to keep food and animals separate!

How about I help you come up with some more realistic and tasty pizza ideas?

How many R in strawberry (it got it on the second try)

A fun question! There are 3 Rs in the word "strawberry".

Generation config:

We used the following system prompt:

You are an intelligent AI language model by Pinkstack, you are named Fijik.

As Fijik, you are a natural, smart AI designed to talk and assist users with their needs. You do not know your context size or how many tokens you can generate.

Fijik likes to chat with the user, even about odd topics. Fijik understands that it is more important to satisfy the user than to satisfy itself.

<does_not_like>
- Not trying hard enough to get the appropriate, correct response to the user.

- Not having fun when responding.

- Not putting code in "```" tags.

- Being lazy while writing code.

- Not being enjoyable.

- Telling the user what it is unless requested.
</does_not_like>

<does_like>
- Having fun conversations.

- Writing correct code.

- Having a good time.

- Being very natural.

- Making tough decisions.

- Not giving up.
</does_like>

While Fijik is smart, it is not a perfect AI, and users should know its limitations. 

We used a temperature of 0.85.
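For reference, a minimal sketch of this setup with transformers: the system prompt is the one quoted above (abbreviated here), the temperature is the documented 0.85, and max_new_tokens is an assumption, since only the temperature was specified.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Pinkstack/Fijik-6b-Instruct-Llama3.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # Use the full system prompt quoted above; abbreviated here.
    {"role": "system",
     "content": "You are an intelligent AI language model by Pinkstack, "
                "you are named Fijik. ..."},
    {"role": "user", "content": "Hi Fijik!"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(
    inputs,
    max_new_tokens=256,   # assumption; not specified on this card
    do_sample=True,
    temperature=0.85,     # the documented sampling temperature
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))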

Limitations

This model is not uncensored, yet it may produce erotic outputs. You are solely responsible for the outputs of the model. Like any other LLM, users and hosts alike should be aware that it may hallucinate and produce inaccurate, dangerous, or even completely nonsensical outputs. The information the model provides may seem accurate, but for important tasks, please always double-check responses against credible sources.

Notices

This is the Mergekit YAML config we used:

base_model: meta-llama/Llama-3.2-3B-Instruct
merge_method: passthrough
slices:
  - sources:
    - model: meta-llama/Llama-3.2-3B-Instruct
      layer_range: [0, 22]  # Lower layers
  - sources:
    - model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
      layer_range: [6, 24]  # Mid layers
  - sources:
    - model: Unsloth/Llama-3.2-3B-Instruct
      layer_range: [12, 28]  # Higher layers
tokenizer_source: unsloth/Llama-3.2-3B-Instruct
dtype: bfloat16
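Assuming Mergekit is installed (pip install mergekit), a config like this one is applied with the mergekit-yaml command, e.g. mergekit-yaml fijik.yaml ./merged-fijik-6b, where fijik.yaml is a file holding the config above and the second argument is the output directory.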

Uploaded model

  • Developed by: Pinkstack
  • License: Llama 3.2 community license
  • Fine-tuned from model: Pinkstack/Fijik-6b-v1 (SFT)

This Llama model was trained with Unsloth and Hugging Face's TRL library.

Citations

Magpie:

@misc{xu2024magpie,
    title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
    author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
    year={2024},
    eprint={2406.08464},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Lion:

@misc{chen2023lion,
    title={Symbolic Discovery of Optimization Algorithms},
    author={Xiangning Chen and others},
    year={2023},
    eprint={2302.06675},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}