---
library_name: transformers
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
language:
- en
tags:
- text-generation-inference
- RL
- Math
- Code
- Reasoning
---
# Fomalhaut-QwenR-1.5B
Fomalhaut-QwenR-1.5B is a language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B using distributed reinforcement learning (RL). This version enhances mathematical reasoning, coding ability, and error correction, delivering efficient general-purpose reasoning and intelligent assistance in a lightweight 1.5B-parameter architecture.
## Key Improvements

1. **Mathematical Reasoning Enhancements**:
   Equipped with advanced capabilities in mathematical logic, symbolic computation, step-by-step problem solving, and numerical accuracy across topics ranging from basic arithmetic to higher-order mathematics.
2. **Coding and Debugging Proficiency**:
   Improved performance in code generation, understanding documentation, and identifying and correcting bugs in multiple programming languages, especially Python, JavaScript, and C++. It supports functional, object-oriented, and scripting paradigms.
3. **Intelligent Error Correction**:
   Capable of identifying inconsistencies or errors in logical reasoning, structured formats (JSON, XML), and code outputs, with suggestions and auto-corrections.
4. **Enhanced Instruction Following**:
   Fine-tuned to follow complex, nested instructions with increased precision and coherence over extended prompts and interactions.
5. **Long-Context Support**:
   Supports up to 128K tokens of input context and can generate up to 8K tokens in one output, making it well suited for extended problem solving, document generation, and analysis.
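When working near the context limit, older conversation turns usually need to be trimmed before the prompt is tokenized. The sketch below illustrates one way to do that; `fit_messages` and the word-count stand-in for real tokenization are illustrative names, not part of the model's API.

```python
# Hypothetical sketch: trimming chat history to fit a context budget.
# `count_tokens` is a stand-in for tokenizer-based counting.

def fit_messages(messages, budget, count_tokens):
    """Drop the oldest non-system turns until the prompt fits `budget` tokens."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m["content"]) for m in trimmed) > budget:
        # Preserve the system message; drop the oldest user/assistant turn.
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]
                break
        else:
            break  # only system messages left; nothing more to drop
    return trimmed

# Toy token counter for the example: one token per word.
count = lambda text: len(text.split())

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "first question about graphs"},
    {"role": "user", "content": "second question about sorting"},
]
fitted = fit_messages(history, budget=10, count_tokens=count)
print([m["content"] for m in fitted])
```

In practice, `count_tokens` would call the model's tokenizer (e.g., `len(tokenizer.encode(text))`) rather than counting words.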
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Fomalhaut-QwenR-1.5B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between breadth-first search and depth-first search with Python code examples."
messages = [
    {"role": "system", "content": "You are a knowledgeable assistant skilled in reasoning, coding, and explanation."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
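DeepSeek-R1 distills typically emit their chain-of-thought inside a `<think>...</think>` block before the final answer, so downstream code often wants to separate the two. The helper below is a minimal sketch of that post-processing; `split_reasoning` is an illustrative name, not part of the transformers API.

```python
import re

def split_reasoning(text):
    """Separate an R1-style '<think>...</think>' block from the final answer.

    Returns (reasoning, answer); reasoning is '' if no think block is present.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Toy example of an R1-style reply (not actual model output).
raw = "<think>BFS uses a queue; DFS uses a stack.</think>BFS explores level by level."
thought, answer = split_reasoning(raw)
print(answer)  # → BFS explores level by level.
```

Keeping the reasoning separate makes it easy to log or hide the chain-of-thought while showing users only the final answer.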
## Intended Use

1. **Mathematics and Computation**:
   Effective for solving math problems, verifying formulas, symbolic logic, algebraic reasoning, and analytical computations.
2. **Programming Assistance**:
   Ideal for generating, explaining, and debugging code. Suitable for both learning and software-development use cases.
3. **Educational and Informational Support**:
   Provides accurate, well-explained answers to conceptual and applied questions in STEM and the humanities.
4. **Conversational AI and Reasoning Agents**:
   Designed for intelligent chatbots capable of nuanced reasoning, error correction, and structured dialogue.
5. **Multilingual & Global Applications**:
   Useful for translation, multilingual support bots, and cross-lingual content generation.
6. **Long-Form & Structured Content Generation**:
   Can create long documents, reports, and structured outputs like JSON, Markdown, and tabular formats.
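When requesting structured outputs such as JSON, it is good practice to validate the reply before using it, since models sometimes wrap JSON in prose or a fenced block. The sketch below shows one hedged approach; `parse_json_output` is an illustrative helper, not part of any library.

```python
import json
import re

def parse_json_output(text):
    """Best-effort parse of a model reply expected to contain JSON.

    Accepts raw JSON or JSON inside a ``` fenced block; returns the
    parsed object, or None if nothing parseable is found.
    """
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, flags=re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None

# Toy example of a model reply (not actual model output).
reply = 'Here is the record:\n```json\n{"name": "BFS", "visits": 7}\n```'
print(parse_json_output(reply))  # → {'name': 'BFS', 'visits': 7}
```

If parsing fails, a common pattern is to feed the invalid output back to the model with the parser error and ask for a corrected version, which plays to the model's error-correction strengths described above.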
## Limitations

1. **Hardware Requirements**:
   While lighter than 14B models, it still benefits from modern GPUs/TPUs for inference due to long-context handling.
2. **Real-Time Limitations**:
   No real-time awareness; knowledge is limited to the training data.
3. **Bias and Hallucination**:
   While reduced, some bias and hallucinations from the training data may persist.
4. **Creative Consistency**:
   Outputs may vary for creative or ambiguous queries (e.g., fiction, storytelling).
5. **Prompt Sensitivity**:
   Results may vary significantly depending on the structure and clarity of the input prompt.