Fast-OpenMath-Nemotron-14B

By applying SFT and GRPO on difficult math problems, we enhanced the performance of DeepSeek-R1-Distill-Qwen-14B and developed Fast-Math-R1-14B, which achieves approximately 30% faster inference on average while maintaining accuracy.

In addition, we trained and open-sourced Fast-OpenMath-Nemotron-14B, an efficiency-optimized version of NVIDIA's OpenMath-Nemotron-14B, built with the same approach. Compared to the original model, it delivers approximately 30% faster inference on average with minimal loss in accuracy.

Technical details can be found in our GitHub repository.

Note: This model likely inherits the ability to run in TIR (tool-integrated reasoning) mode from the original model. However, all of our experiments were conducted in CoT (chain-of-thought) mode, and performance in TIR mode has not been evaluated.

Evaluation

| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 mean output tokens |
| --- | --- | --- | --- | --- | --- |
| OpenMath-Nemotron-14B | 32000 | 76.2 | 11493 | 64.5 | 13414 |
| | 24000 | 75.4 | 11417 | 63.4 | 13046 |
| | 16000 | 66.0 | 10399 | 54.2 | 11422 |
| | 12000 | 55.0 | 9053 | 40.0 | 9609 |
| | 8000 | 36.0 | 6978 | 27.2 | 7083 |
| Fast-OpenMath-Nemotron-14B | 32000 | 70.7 | 9603 | 61.4 | 11424 |
| | 24000 | 70.6 | 9567 | 60.9 | 11271 |
| | 16000 | 66.6 | 8954 | 55.3 | 10190 |
| | 12000 | 59.4 | 7927 | 45.6 | 8752 |
| | 8000 | 47.6 | 6282 | 33.8 | 6589 |
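
Pass@1 (avg. 64) denotes pass@1 averaged over 64 sampled completions per problem. A minimal sketch of how such a metric can be computed, assuming a simple boolean correctness matrix rather than the actual evaluation harness:

```python
def pass_at_1_avg(results: list[list[bool]]) -> float:
    """results[i][j]: whether sample j of problem i is correct (64 samples each)."""
    per_problem = [sum(samples) / len(samples) for samples in results]
    return 100 * sum(per_problem) / len(per_problem)


# Toy example: 2 problems, 4 samples each (the real evaluation uses 64).
print(pass_at_1_avg([[True, True, False, True], [False, False, True, False]]))  # 50.0
```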

Inference

vLLM

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer


model_path = 'RabotniKuma/Fast-OpenMath-Nemotron-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)


sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    # For even faster inference, stop early at the </think> tag and extract
    # the final boxed answer from the reasoning trace (see the sketch below).
    stop='</think>',
)
messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference '
            'between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True,
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
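
Because generation stops at the `</think>` tag, the final answer can be recovered from the reasoning trace. A minimal post-processing sketch (the `extract_boxed` helper and its regex are illustrative assumptions, not part of the model's API):

```python
import re


def extract_boxed(text: str) -> str | None:
    # Return the content of the last \boxed{...} in the trace
    # (handles simple, non-nested braces only).
    matches = re.findall(r'\\boxed\{([^{}]*)\}', text)
    return matches[-1] if matches else None


# vLLM returns one RequestOutput per prompt; take its first completion.
print(extract_boxed(response[0].outputs[0].text))
```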