---
license: mit
datasets:
- xl-zhao/PromptCoT-QwQ-Dataset
language:
- en
base_model:
- Qwen/QwQ-32B
---

# **PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models**

[**ArXiv Paper**](http://arxiv.org/abs/2503.02324) · [**GitHub Repository**](https://github.com/zhaoxlpku/PromptCoT)

---

## **Overview**

**PromptCoT-QwQ-32B** is a distilled mathematical reasoning model trained on the more challenging problem sets generated by the **PromptCoT** pipeline. Built on **QwQ-32B**, it leverages an enhanced training dataset designed specifically to strengthen mathematical reasoning capabilities.

For more details, refer to our **paper on arXiv**: [PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models](http://arxiv.org/abs/2503.02324).
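
To get a feel for the training data, you can inspect the accompanying **PromptCoT-QwQ-Dataset** directly. A minimal sketch with the `datasets` library; it assumes a standard `train` split, and the exact field names are listed on the dataset card:

```python
from datasets import load_dataset

# Problems and reasoning traces synthesized by the PromptCoT pipeline
# (assumes a standard "train" split; field names are on the dataset card).
ds = load_dataset("xl-zhao/PromptCoT-QwQ-Dataset", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # one training example
```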

---

## State-of-the-Art Performance

**PromptCoT-QwQ-32B** outperforms strong 32B reasoning baselines across key mathematical reasoning benchmarks:

| **Model** | **GSM8K** | **MATH-500** | **AIME2024** | **AIME2025** |
| --- | --- | --- | --- | --- |
| **S1-32B** | - | 93.0% | 56.7% | 26.6% |
| **LIMO-32B** | - | 94.8% | 57.1% | 46.6% |
| **QwQ-32B** | - | - | 82.1% | 70.8% |
| **PromptCoT-QwQ-32B** (**ours**) | **96.4% ± 0.2%** | **96.7% ± 0.5%** | **83.8% ± 2.8%** | **75.4% ± 4.7%** |

## **Quick Start: Using the Model**

### **1. Install Dependencies**

```bash
pip install transformers vllm torch accelerate
```

### **2. Load the Model with Hugging Face Transformers**

You can use **PromptCoT-QwQ-32B** to solve mathematical problems with Hugging Face's `generate` API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "xl-zhao/PromptCoT-QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" shards the 32B weights across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?"
)

# ChatML-style prompt format used by QwQ models.
prompt = (
    f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, "
    f"and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    # do_sample=True is required for temperature to take effect;
    # max_new_tokens leaves room for long chains of thought.
    output = model.generate(
        **inputs, max_new_tokens=32768, do_sample=True, temperature=0.6
    )

generated_solution = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_solution)
```
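
Alternatively, you can let the tokenizer assemble the same ChatML-style prompt for you. A sketch, assuming the repository ships the base QwQ chat template:

```python
# Equivalent prompt construction via the tokenizer's chat template
# (assumes the tokenizer ships QwQ's ChatML template).
messages = [
    {
        "role": "user",
        "content": f"{problem_statement}\nPlease reason step by step, "
                   "and put your final answer within \\boxed{}.",
    }
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```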

---

## **Using vLLM for Fast Inference**

For optimized inference, use `vLLM`:

```python
from vllm import LLM, SamplingParams

model_name = "xl-zhao/PromptCoT-QwQ-32B"
# Increase tensor_parallel_size to shard the model across multiple GPUs.
llm = LLM(model=model_name, tensor_parallel_size=1)

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?"
)

prompt = (
    f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, "
    f"and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=32768)
outputs = llm.generate([prompt], sampling_params)

print(outputs[0].outputs[0].text)
```
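
Since the prompt asks for the final answer inside `\boxed{}`, a small helper can recover it from the generated text. `extract_boxed` below is a hypothetical utility, not part of the PromptCoT repository; its regex tolerates one level of nested braces, enough for answers like `\boxed{\frac{1}{2}}`:

```python
import re

def extract_boxed(text):
    """Return the contents of the last \\boxed{...} in `text`, or None."""
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

answer = extract_boxed(outputs[0].outputs[0].text)
print(answer)  # expected "3" for the robe problem (2 blue bolts + 1 white)
```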

---

## **Full Usage & Advanced Options**

For advanced usage, including batch inference and evaluation on mathematical benchmarks, refer to the **full repository on GitHub**:

[GitHub: PromptCoT](https://github.com/zhaoxlpku/PromptCoT)
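
As a quick preview of batch inference: `llm.generate` already accepts a list of prompts, so batching only requires building one prompt per problem. A minimal sketch (the example problems are placeholders):

```python
# Batch inference with vLLM: all prompts are scheduled in a single call.
problems = [
    "If 3x + 5 = 20, what is the value of x?",
    "How many positive divisors does 360 have?",
]
prompts = [
    f"<|im_start|>user\n{p}\nPlease reason step by step, "
    f"and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
    for p in problems
]
outputs = llm.generate(prompts, sampling_params)
for problem, out in zip(problems, outputs):
    print(problem, "->", out.outputs[0].text.strip()[:200])
```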

---

## **Citation**

If you use **PromptCoT**, please consider citing:

```bibtex
@article{zhao2025promptcot,
  author  = {Zhao, Xueliang and Wu, Wei and Guan, Jian and Kong, Lingpeng},
  title   = {PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models},
  year    = {2025},
  journal = {arXiv preprint arXiv:2503.02324},
  url     = {http://arxiv.org/abs/2503.02324}
}
```