Datasets

Modalities: Text
Formats: json
ArXiv: 2503.24290
Libraries: Datasets, pandas
License: mit

Add link to paper, task category

#3
by nielsr (HF Staff) - opened

Files changed (1)
  1. README.md +26 -12
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
 license: mit
+task_categories:
+- question-answering
 configs:
 - config_name: orz_math_72k_collection_extended
   data_files:
   - split: train
     path: orz_math_72k_collection_extended.json
 ---
+
 <div align="center">
 
 # Open Reasoner Zero
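With the added `task_categories` metadata, the card above fully describes one config. A minimal loading sketch with the `datasets` library; the dataset repo id below is an assumption, since this page does not state it:

```python
from datasets import load_dataset

# Config name and JSON path come from the YAML block above;
# the repo id is hypothetical -- substitute the actual dataset id.
ds = load_dataset(
    "Open-Reasoner-Zero/orz_math_72k_collection_extended",  # assumed repo id
    name="orz_math_72k_collection_extended",
    split="train",
)
print(len(ds), ds.column_names)  # row count and field names come from the JSON itself
print(ds[0])                     # first training example
```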
@@ -29,7 +32,7 @@ An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model
 src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white"/></a>
 
 <br>
-<a href="https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf"><b>Paper PDF Link [WIP]</b>👁️</a>
+<a href="https://arxiv.org/abs/2503.24290"><b>Paper arXiv Link</b>👁️</a>
 </div>
 
 <div>
@@ -39,10 +42,11 @@ An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model
 
 ## Overview 🌊
 We introduce **Open-Reasoner-Zero**, the first open-source implementation of large-scale reasoning-oriented RL training focused on scalability, simplicity, and accessibility.
+Using the same base model as DeepSeek-R1-Zero-Qwen-32B, our implementation achieves superior performance on AIME2024, MATH500, and the GPQA Diamond benchmark, while demonstrating remarkable efficiency: it requires only a tenth of the training steps of the DeepSeek-R1-Zero pipeline.
 
 To enable broader participation in this pivotal moment we are witnessing, and to accelerate research towards artificial general intelligence (AGI),
 we release our source code, parameter settings, training data, and model weights.
-Please refer to our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) for more insights across various model sizes.
+Please refer to our [paper](https://huggingface.co/papers/2503.24290) for more insights across various model sizes.
 
 **Let the Reasoner-Zero tide rise!**
 
@@ -61,7 +65,7 @@ Please refer to our [paper](https://github.com/Open-Reasoner-
 <strong>[2025/03/31]</strong>
 We announce a major milestone for `Open-Reasoner-Zero`:
 
-- 🌊 [Updated Paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) with new results.
+- 🌊 [Updated Paper](https://arxiv.org/abs/2503.24290) with new results.
 - 🔭 [Easy-to-use Training Scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/playground):
   - [ORZ-1.5B training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_1p5b_ppo.py) and [ORZ-0.5B training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_0p5b_ppo.py) (main results in Figure 2).
   - [Minimal resource training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_0p5b_ppo_1gpu.py): ORZ-0.5B can be run on a single A800/H800 GPU!
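The minimal-resource script above is invoked with the same `python -m` pattern the README uses for its other playground scripts. A sketch; the module path comes from the link above, and the `DEBUG_MODE` flag is copied from the README's own examples, so treat it as illustrative rather than verified:

```bash
# Single-GPU run of the 0.5B PPO script (command pattern taken from the
# README's own examples, e.g. `DEBUG_MODE=True python -m playground.orz_7b_ppo`)
DEBUG_MODE=True python -m playground.orz_0p5b_ppo_1gpu
```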
@@ -80,7 +84,7 @@ We announce a major milestone for `Open-Reasoner-Zero`:
 We release `Open-Reasoner-Zero`.
 
 As part of this release, we open-source:
-- 🌊 [Paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) on our comprehensive analysis and insights into Reasoner-Zero training
+- 🌊 [Paper (WIP)](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) on our comprehensive analysis and insights into Reasoner-Zero training
 - 🤗 HF Model [`Open-Reasoner-Zero-7B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-7B) and [`Open-Reasoner-Zero-32B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-32B)
 - 🎁 [`Our curated 57k training data`](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data)
 - 📄 [Training Scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/playground) to enjoy your own Reasoner-Zero journey!
@@ -99,7 +103,7 @@ We release all of curated high-quality training data in the [`data`](https://git
 * [extended 72k](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_72k_collection_extended.json), mainly cleaned from OpenR1-Math-220k.
 * [hard 13k](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_13k_collection_hard.json), mined from the first stage of ORZ-32B training.
 
-The details of how we collect the data are described in our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf).
+The details of how we collect the data are described in our [paper](https://arxiv.org/abs/2503.24290).
 
 ### Installation & Training Scripts
 We release our [Dockerfile](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/docker/Dockerfile) in the [docker](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/docker) folder to facilitate the reproducibility of our training.
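Because the collections above ship as plain JSON files, they can also be inspected with pandas (listed among this dataset's libraries). A quick sketch, assuming the file has been downloaded locally; the record schema is not documented on this page, so the code reads the field names from the file itself:

```python
import pandas as pd

# Load the extended 72k collection; pandas infers records from the JSON array.
df = pd.read_json("orz_math_72k_collection_extended.json")
print(df.shape)             # (rows, columns)
print(df.columns.tolist())  # actual field names come from the file
print(df.head(3))
```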
@@ -191,6 +195,14 @@ DEBUG_MODE=True python -m playground.orz_14m_ppo_mini
 DEBUG_MODE=True python -m playground.orz_7b_ppo
 ```
 
+### How to Use the Model
+#### Policy Model
+Policy models can be used in the same way as any chat model in transformers and vLLM, since the chat template (Jinja) is bundled with the tokenizer.
+
+#### Critic Model
+Critic models can be loaded in the same way as in the [training code](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/orz/ppo/actors.py#L738).
+
+
 ## Acknowledgements 💖
 
 - This work was supported by computing resources and valuable feedback provided by [StepFun](https://www.stepfun.com/) and Tsinghua University.
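The added "How to Use the Model" section says policy models behave like any chat model in transformers because the chat template ships inside the tokenizer. A minimal sketch of that usage; the repo id comes from the release notes above, while the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Reasoner-Zero/Open-Reasoner-Zero-7B"  # repo id from the release list above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The README states the chat template is bundled with the tokenizer,
# so apply_chat_template works as for any chat model.
messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)  # token budget is illustrative
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same repo id should work with vLLM's standard entry points; the critic checkpoints, as the section notes, are loaded through the project's own training code rather than a stock transformers class.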
@@ -214,11 +226,13 @@ We have several wechat groups to help discussions and sharing, you can scan the
 ## Citation
 
 ```bibtex
-@misc{OpenReasonerZero2025,
-      title={Open-Reasoner-Zero: An Open Source Approach to Scaling Reinforcement Learning on the Base Model},
-      author={Jingcheng Hu and Yinmin Zhang and Qi Han and Daxin Jiang and Xiangyu Zhang, Heung-Yeung Shum},
-      year={2025},
-      howpublished={\url{https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero}},
+@misc{hu2025openreasonerzeroopensourceapproach,
+      title={Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model},
+      author={Jingcheng Hu and Yinmin Zhang and Qi Han and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
+      year={2025},
+      eprint={2503.24290},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG},
+      url={https://arxiv.org/abs/2503.24290},
 }
-```
-
+```