
Octantis-QwenR1-1.5B

Octantis-QwenR1-1.5B is a small reasoning model that enhances the reasoning capabilities of edge large language models (LLMs) using reinforcement learning (RL). Fine-tuned from Pisces-QwenR1-1.5B, it delivers targeted improvements in logical reasoning, computation, and lightweight coding, making it well suited for deployment on resource-constrained devices.

Key Improvements

  1. Advanced Reasoning via RL:
    Supports symbolic reasoning, logical deduction, and structured problem-solving with high efficiency, optimized specifically for real-time use on edge systems.

  2. Compact Coding Assistant:
    Enhanced understanding of multiple programming paradigms and syntax across Python, JavaScript, C++, and more. Supports in-situ code generation and debugging for embedded coding scenarios.

  3. Error Detection & Correction:
    Identifies logic errors and malformed data structures (e.g., JSON, XML) and supplies corrections quickly, with lightweight inference and minimal latency.

  4. Instruction Following & Precision:
    Tuned to follow multi-step instructions with improved contextual memory, offering consistent and precise responses across a variety of prompt types.

  5. Extended Context Compatibility:
    Maintains support for 128K-token inputs and 8K-token outputs while remaining lean enough for real-time edge use with low power consumption (see the token-budget sketch after this list).
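
The 128K figure above is the advertised window, not something verified here. As a pre-flight check, here is a minimal sketch that counts prompt tokens before committing to generation, assuming "128K" means 131,072 tokens:

from transformers import AutoTokenizer

# Assumed from the advertised "128K token inputs" figure above.
MAX_INPUT_TOKENS = 131_072

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Octantis-QwenR1-1.5B")

def fits_context(prompt: str) -> bool:
    # True if the tokenized prompt fits within the assumed input window.
    return len(tokenizer(prompt).input_ids) <= MAX_INPUT_TOKENS

print(fits_context("What is a generator function in Python?"))  # True for short prompts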

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Octantis-QwenR1-1.5B"

# Load the weights with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is a generator function in Python? Explain with an example."
messages = [
    {"role": "system", "content": "You are a helpful and concise AI assistant skilled in programming and reasoning."},
    {"role": "user", "content": prompt}
]

# Render the chat into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
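
Building on the quickstart, the error detection and correction capability described under Key Improvements can be exercised the same way. A minimal sketch, reusing the model and tokenizer objects loaded above; the malformed JSON snippet and the system prompt are illustrative choices, not part of the model card:

# Reuses model and tokenizer from the quickstart above.
broken_json = '{"name": "sensor-1", "reading": 42,, "unit" "C"}'  # deliberately malformed

repair_messages = [
    {"role": "system", "content": "You fix malformed JSON. Reply with the corrected JSON only."},
    {"role": "user", "content": f"Fix this JSON:\n{broken_json}"}
]
repair_text = tokenizer.apply_chat_template(
    repair_messages,
    tokenize=False,
    add_generation_prompt=True
)
repair_inputs = tokenizer([repair_text], return_tensors="pt").to(model.device)
repair_ids = model.generate(**repair_inputs, max_new_tokens=128)

# Keep only the newly generated tokens, as in the quickstart.
repair_ids = [out[len(inp):] for inp, out in zip(repair_inputs.input_ids, repair_ids)]
print(tokenizer.batch_decode(repair_ids, skip_special_tokens=True)[0])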

Intended Use

  1. Edge LLM Applications:
    Built for embedded AI agents, mobile inference, and low-latency chatbots on constrained hardware.

  2. Compact Reasoning Tasks:
    Effective for real-time logical reasoning, rule-based deduction, and lightweight cognitive tasks.

  3. Educational & Programming Tools:
    Helpful for teaching basic programming and debugging in interactive, constrained environments (e.g., IoT, robotics kits).

  4. Lightweight Conversational Agents:
    Enables responsive, intelligent interactions in edge-deployed customer service bots, support kiosks, and automation systems.

  5. Multilingual Mini-NLP Tasks:
    Supports basic multilingual tasks such as translation, summarization, and information retrieval across multiple languages.

  6. Structured Format Generation:
    Can generate JSON, Markdown, tables, and other structured outputs in lightweight settings for embedded data workflows (see the sketch after this list).
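
For the structured-format use case above, here is a minimal sketch that asks for JSON and validates the reply with the standard library before handing it to a downstream workflow. It reuses the model and tokenizer from the quickstart; the schema, system prompt, and choice of greedy decoding (do_sample=False) are assumptions for illustration:

import json

# Reuses model and tokenizer from the quickstart.
schema_messages = [
    {"role": "system", "content": "Answer with a single JSON object and nothing else."},
    {"role": "user", "content": 'List three HTTP methods as {"methods": [...]}.'}
]
text = tokenizer.apply_chat_template(schema_messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
out_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
out_ids = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
reply = tokenizer.batch_decode(out_ids, skip_special_tokens=True)[0]

try:
    payload = json.loads(reply)
except json.JSONDecodeError:
    payload = None  # a reasoning model may wrap the JSON in extra text; strip or retry in a real pipeline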

Limitations

  1. Hardware Requirements (Minimal but Non-Zero):
    While designed for edge use, optimal performance still benefits from mid-range NPUs, GPUs, or specialized accelerators.

  2. Knowledge Cutoff & Real-Time Awareness:
    No ability to fetch live data or respond to real-time information beyond its training snapshot.

  3. Limited Creative Output:
    Less effective for creative writing, abstract thinking, or tasks requiring deep imagination.

  4. Prompt Sensitivity:
    Outputs can vary based on prompt clarity; structured prompts yield better, more predictable results.

  5. Inherited Biases:
    May reflect biases from pretraining data. Use caution in sensitive or high-stakes domains.

Model size: 1.78B parameters (F32, Safetensors)