Castula-U2-QwenRe-1.5B
Castula-U2-QwenRe-1.5B is a compact, multilingual reasoning model fine-tuned from Qwen-1.5B, excelling in mathematical problem solving, logical reasoning, code generation, and general-purpose tasks. Its step-by-step reasoning and bilingual fluency make it ideal for educational systems, coding assistants, and lightweight reasoning applications.
Key Features
Advanced Step-by-Step Reasoning
Fine-tuned to produce intermediate steps for math, logic, and code problems, offering transparency and interpretability crucial for education, coding help, and diagnostics.
Multilingual Proficiency (English + Chinese)
Understands and solves problems in both English and Simplified Chinese, making it accessible in diverse learning and working environments.
Compact Yet Versatile (1.5B Parameters)
Small enough for low-resource environments, yet capable of math, logical puzzles, basic coding tasks, and general comprehension, balancing performance and efficiency.
Structured Computation & Problem Solving
Mirrors human-like multi-step problem-solving, making solutions easy to follow, debug, or verify.
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Castula-U2-QwenRe-1.5B"

# Load the model and tokenizer; device_map="auto" places weights on the
# available GPU(s) or CPU, and torch_dtype="auto" keeps the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve: A train travels 180 km in 3 hours. What is its average speed?"
messages = [
    {"role": "system", "content": "You are a helpful tutor skilled in solving math, logic, and code problems with step-by-step explanations."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template, then tokenize.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
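The same pipeline also works for the Simplified Chinese proficiency noted under Key Features. The snippet below is a minimal sketch that reuses the model and tokenizer loaded above; the Chinese prompt and system message are illustrative examples, not prescribed by this card.

# Bilingual sketch: reuses `model` and `tokenizer` from the Quickstart above.
# System message: "You are a tutor skilled in math, logic, and code; explain step by step."
# User message: "Solve the equation 2x + 5 = 17 and show each reasoning step."
zh_messages = [
    {"role": "system", "content": "你是一位擅长数学、逻辑和编程的辅导老师,请逐步讲解解题过程。"},
    {"role": "user", "content": "解方程:2x + 5 = 17,并写出每一步推理。"}
]
zh_text = tokenizer.apply_chat_template(zh_messages, tokenize=False, add_generation_prompt=True)
zh_inputs = tokenizer([zh_text], return_tensors="pt").to(model.device)
zh_ids = model.generate(**zh_inputs, max_new_tokens=512)
zh_ids = [out[len(inp):] for inp, out in zip(zh_inputs.input_ids, zh_ids)]
print(tokenizer.batch_decode(zh_ids, skip_special_tokens=True)[0])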
Intended Use
- Math & Logic Tutoring: Solves problems with step-by-step explanations, making it well suited for students and educators.
- Code Assistant: Helps with beginner-to-intermediate code generation and understanding.
- Bilingual Apps: Educational tools in English and Chinese for a global audience.
- Lightweight Reasoning Systems: Deployable in mobile apps, browser extensions, and edge devices; see the quantized-loading sketch below.
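For the memory-constrained deployments above, one common option is quantized loading. The sketch below uses 4-bit quantization via Transformers' BitsAndBytesConfig; this configuration is an illustrative assumption (the card ships no official quantized setup) and requires a CUDA GPU with bitsandbytes installed.

# Sketch: 4-bit quantized loading for memory-constrained deployment.
# Assumes `bitsandbytes` is installed and a CUDA device is available;
# these settings are illustrative, not part of the official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Castula-U2-QwenRe-1.5B"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)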
Limitations
Domain Specialization:
Best in math, logic, and code. Performance may degrade in highly creative or abstract language tasks.
Compact Scale:
While efficient, it may underperform larger models in deeply complex reasoning or long-context tasks.
Inherited Bias:
May reflect biases from the base model (Qwen-1.5B); outputs should be verified for sensitive or critical uses.
Prompt Sensitivity:
Structured, clearly stated inputs produce significantly better outputs.
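As a hypothetical illustration of that sensitivity, compare an underspecified prompt with a structured one; neither phrasing is prescribed by this card.

# Hypothetical prompts illustrating the effect of structure.
vague_prompt = "train 180 km 3 hours speed?"
structured_prompt = (
    "Solve step by step: A train travels 180 km in 3 hours. "
    "What is its average speed in km/h? Show each step, then state the final answer."
)

The structured version states the task, the given quantities, and the desired output format, which tends to elicit the model's step-by-step reasoning more reliably.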