Update README.md

README.md (changed):

---
license: cc-by-4.0
base_model:
- nvidia/OpenMath-Nemotron-14B
---

# Fast-OpenMath-Nemotron-14B
By applying SFT and GRPO on difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed [`Fast-Math-R1-14B`](https://huggingface.co/RabotniKuma/Fast-Math-R1-14B), which achieves approximately 30% faster inference on average while maintaining accuracy.

In addition, we trained and open-sourced `Fast-OpenMath-Nemotron-14B`, an efficiency-optimized version of NVIDIA’s [`OpenMath-Nemotron-14B`](https://huggingface.co/nvidia/OpenMath-Nemotron-14B), following the same approach. Compared to OpenMath-Nemotron-14B, this model achieves approximately 30% faster inference on average, with minimal loss in performance.

Technical details can be found in [our GitHub repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).

**Note:**
This model likely inherits the ability to perform inference in TIR mode from the original model. However, all of our experiments were conducted in CoT mode, and its performance in TIR mode has not been evaluated.

# Performance comparison
<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true' style='max-height: 300px;'>
<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_nemotron.png?raw=true' style='max-height: 300px;'>

| Model                      | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 output tokens |
| -------------------------- | ------------ | -------------------------- | ----------------------- | -------------------------- | ----------------------- |
| OpenMath-Nemotron-14B      | 24000        | **73.3**                   | 12277                   | **64.4**                   | 13027                   |
|                            | 16384        | 66.4                       | 8932                    | 53.8                       | 11547                   |
|                            | 12800        | 57.0                       | 7000                    | 42.3                       | 9996                    |
|                            | 8192         | 37.4                       | 4835                    | 28.0                       | 7186                    |
| Fast-OpenMath-Nemotron-14B | 24000        | 71.7                       | 10545                   | 60.4                       | 11053                   |
|                            | 16384        | **68.2**                   | 8270                    | **55.6**                   | 10216                   |
|                            | 12800        | **62.3**                   | 6359                    | **47.7**                   | 9052                    |
|                            | 8192         | **47.6**                   | 4299                    | **33.8**                   | 6674                    |
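
We read "Pass@1 (avg. 64)" as the usual pass@1 estimate obtained by sampling 64 completions per problem and averaging per-problem accuracy over the benchmark. The sketch below is our own illustration of that calculation, not the evaluation script used to produce the table above.

```python
# Minimal sketch (assumption): pass@1 averaged over 64 samples per problem.
# `correct_counts` holds, for each problem, how many of the 64 sampled
# completions produced the correct final answer.
def pass_at_1(correct_counts, num_samples=64):
    per_problem_accuracy = [c / num_samples for c in correct_counts]
    return 100.0 * sum(per_problem_accuracy) / len(per_problem_accuracy)

# Hypothetical example with three problems:
print(pass_at_1([48, 64, 21]))  # ~69.3
```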

# Inference
## vLLM
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer


model_path = 'RabotniKuma/Fast-OpenMath-Nemotron-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)


sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    # For even faster inference, apply early stopping at the </think> tag and
    # extract the final boxed content from the truncated reasoning trace.
    stop='</think>',
)
messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference '
            'between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
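
As suggested by the `stop='</think>'` comment above, the final answer can be recovered from the truncated reasoning trace by extracting the last `\boxed{...}` expression. The snippet below is a minimal sketch of that post-processing step; the `extract_boxed` helper is our own illustration, not part of the model or its official tooling.

```python
def extract_boxed(text):
    """Return the content of the last \\boxed{...} in `text`, or None if absent."""
    start = text.rfind('\\boxed{')
    if start == -1:
        return None
    i = start + len('\\boxed{')
    depth = 1
    content = []
    while i < len(text):
        ch = text[i]
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:
                return ''.join(content)
        content.append(ch)
        i += 1
    return None  # unbalanced braces


# `response` comes from the vLLM example above (one request, one completion).
answer = extract_boxed(response[0].outputs[0].text)
print(answer)  # e.g. '15', if the model boxed the final answer in its reasoning
```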