## Installation

```bash
# If you are using Google Colab, you can skip this step.

# Universal (GPU or CPU)
pip install torch transformers accelerate safetensors --upgrade

# CPU-only build of PyTorch
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install transformers accelerate safetensors --upgrade
```
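To confirm the environment is set up correctly, a quick check like the following can help (a minimal sketch; the versions printed depend on your install):

```python
# Optional: sanity-check the installation.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
# True on a GPU runtime (e.g. a Colab T4); False on a CPU-only install.
print("CUDA available:", torch.cuda.is_available())
```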
## Script for running this model

You can run this model locally on a GPU or CPU, or for free via Google Colab: open https://colab.research.google.com and connect to a T4 GPU runtime.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from datetime import datetime
# Model
model_name = "hadadrjt/Qwen3-0.6B"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # Use the dtype stored in the checkpoint.
    device_map="auto"    # Place the model on GPU if available, otherwise CPU.
)
# Get current date
current_date = datetime.now().strftime("%A, %B %d, %Y, %I:%M %p")  # %Z is omitted: datetime.now() returns a naive datetime with no timezone name.
# User input ("input" is avoided as a variable name so the Python builtin is not shadowed)
user_input = "Who are you?"  # Insert your message here.
# Build chat-style prompt
messages = [
    {
        "role": "system",
        "content": (
            "You are Qwen developed by Alibaba Cloud.\n"
            "Your version is Qwen 3 with 0.6 billion parameters.\n"
            "You have been fine-tuned by Hadad Darajat to suit specific tasks.\n"
            f"Current date: {current_date}"
        )
    },
    {"role": "user", "content": user_input}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Set to False to disable Qwen3's reasoning ("thinking") mode.
)
# Tokenize input
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Streaming setup
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Generate response with streaming
with torch.inference_mode():
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=32768,
        do_sample=True,  # Enable sampling so temperature/top_p/top_k take effect.
        temperature=0.6,
        top_p=0.95,
        min_p=0.03,
        top_k=64,
        repetition_penalty=1.0,
        streamer=streamer
    )
```
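Because `enable_thinking=True` makes the model emit its reasoning inside a `<think>...</think>` block, you may want to separate that reasoning from the final answer after generation. Below is a minimal sketch following the parsing approach from the upstream Qwen3 model card; it assumes the standard Qwen3 tokenizer, where `</think>` has token id 151668:

```python
# Split the reasoning ("thinking") block from the final answer.
# Assumes the standard Qwen3 tokenizer, where </think> is token id 151668.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
try:
    # Locate the last </think> token; everything before it is reasoning.
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0  # No thinking block was emitted.

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip()
answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip()
print("Thinking:", thinking_content)
print("Answer:", answer)
```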
## Changelogs

**2025-05-08**
- Initial fine-tuned version released.
- Datasets used:
  - https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini
  - https://huggingface.co/datasets/mlabonne/FineTome-100k

**2025-05-12**
- Second fine-tuned version released.
- Dataset used:
  - https://huggingface.co/datasets/sequelbox/Titanium2.1-DeepSeek-R1