Theta-35-Mini
A lightweight, high-efficiency reasoning model distilled from Theta-35.

Theta-35-Mini is a compact 3B-parameter language model developed by SVECTOR, built on the Qwen architecture and trained with Group Relative Policy Optimization (GRPO). It is the smaller sibling of our flagship 33B-parameter Theta-35 model and is designed for resource-constrained environments.
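The GRPO training details are not covered in this card, but the core idea of the method is that each sampled completion is scored relative to the other completions drawn for the same prompt, rather than against a learned value baseline. The sketch below is purely illustrative; the reward values are made up and nothing here reflects the actual Theta-35-Mini training pipeline:

# Illustrative sketch of GRPO's group-relative advantage: each reward is
# normalized against the mean and std of its own group of completions.
import statistics

def group_relative_advantages(rewards):
    """Normalize each reward against its group's mean and standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]

# Hypothetical rewards for four completions sampled from one prompt
print(group_relative_advantages([0.2, 0.9, 0.5, 0.4]))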
Install dependencies:
pip install transformers torch
Run the model in Python:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("SVECTOR-CORPORATION/Theta-35-Mini")
model = AutoModelForCausalLM.from_pretrained("SVECTOR-CORPORATION/Theta-35-Mini")

# Tokenize the prompt
inputs = tokenizer("Once upon a time", return_tensors="pt")

# Generate up to 100 new tokens; do_sample=True is required for
# temperature to take effect (otherwise decoding is greedy)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)

# Decode and print the generated text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
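On memory-constrained hardware, the model can also be loaded in half precision with automatic device placement. This variant is a sketch rather than an official recommendation; it assumes PyTorch with a suitable accelerator and the accelerate package (pip install accelerate) installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Half-precision weights roughly halve memory use; device_map="auto"
# (provided by accelerate) places layers on the available devices.
tokenizer = AutoTokenizer.from_pretrained("SVECTOR-CORPORATION/Theta-35-Mini")
model = AutoModelForCausalLM.from_pretrained(
    "SVECTOR-CORPORATION/Theta-35-Mini",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Move the tokenized prompt to the same device as the model
inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))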
This model is released under the MIT License.
Visit us at svector.co.in