tclf90 committed · Commit 5b9a8f6 · verified · 1 Parent(s): 5f201cc

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
---
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- DeepSeek-R1-0528
- GPTQ
- Int4-Int8Mix
- 量化修复
- vLLM
base_model:
- deepseek-ai/DeepSeek-R1-0528
base_model_relation: quantized
---

# DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact
Base model: [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)

This repository contains a mixed-precision (Int4 + selective Int8) GPTQ version of DeepSeek-R1-0528 for vLLM. We began with a standard 4-bit (AWQ/GPTQ) conversion that follows vLLM's default quantization layout, but early tests showed that a fully-Int4 model could not meet the compute demands of this checkpoint and may produce unstable outputs.

Guided by this preliminary analysis, we introduced targeted, per-layer Int8 refinement: only the layers most sensitive to quantization are stored in Int8 (the Compact version has more Int8 layers), while the rest remain Int4. This keeps the file-size increase minimal compared with the pure 4-bit baseline while restoring response quality.
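As a purely illustrative sketch (the pattern list and layer names below are hypothetical placeholders, not the actual sensitivity recipe used for this checkpoint), a per-layer precision map of this kind can be expressed as a simple pattern match over module names:

```python
import re

# Hypothetical patterns marking quantization-sensitive layers; these are
# illustrative placeholders, not the recipe actually used for this model.
SENSITIVE_PATTERNS = [r"\.self_attn\.", r"\.shared_experts\."]

def bits_for_layer(name: str) -> int:
    """Return 8 bits for sensitive layers, 4 bits for everything else."""
    return 8 if any(re.search(p, name) for p in SENSITIVE_PATTERNS) else 4

print(bits_for_layer("model.layers.0.self_attn.o_proj"))        # 8
print(bits_for_layer("model.layers.7.mlp.experts.12.up_proj"))  # 4
```

A real recipe would derive the sensitive set from per-layer quantization-error measurements rather than a fixed pattern list.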

Currently, vllm==0.9.0 does not support per-layer quantization settings for the MoE module. I've provided a basic implementation by adding a get_moe_quant_method function to the gptq_marlin.py file. Until the PR is merged, please replace the corresponding file with the attached one.

### 【Model Update Date】
```
2025-05-31
1. fast commit
```

### 【Dependencies】

```
vllm==0.9.0
transformers==4.52.3
```

<div style="
background: rgba(255, 193, 61, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 165, 0, 0.3);
margin: 16px 0;
">

### 【💡Notes on New VLLM Versions💡】

#### 1. Recommend Using V0 Inference Mode
Before launching vLLM, set the environment variable:
```
export VLLM_USE_V1=0
```
</div>
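If vLLM is launched from a Python script rather than a shell, the same switch can be set programmatically; setting it before `vllm` is first imported is the safe ordering, since the engine choice is consulted at startup:

```python
import os

# Select vLLM's V0 engine; set this before importing vllm.
os.environ["VLLM_USE_V1"] = "0"

# import vllm  # subsequent vLLM initialization now uses the V0 engine
print(os.environ["VLLM_USE_V1"])  # 0
```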

<div style="
background: rgba(255, 0, 200, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 0, 200, 0.3);
margin: 16px 0;
">

### 【💡 Patch for gptq_marlin.py💡】

At present, vllm==0.9.0 lacks support for per-layer quantization configurations for the MoE module, which leads to errors when loading this model.
I have implemented a simple fix by adding the get_moe_quant_method function to the gptq_marlin.py file.

Until the PR is merged, please replace the gptq_marlin.py file in your installation with the attached version, placing it at:
```
.../site-packages/vllm/model_executor/layers/quantization/gptq_marlin.py
```

</div>
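To find where your installation expects that file, the site-packages prefix can be resolved from the running interpreter. This is a convenience sketch; if vLLM lives in a different prefix (e.g. a user install), adjust accordingly:

```python
import sysconfig
from pathlib import Path

# Resolve the conventional install location of vLLM's gptq_marlin.py
# inside the current interpreter's site-packages directory.
site_packages = Path(sysconfig.get_paths()["purelib"])
target = (site_packages / "vllm" / "model_executor" / "layers"
          / "quantization" / "gptq_marlin.py")
print(target.name)  # gptq_marlin.py
```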


### 【Model List】

| FILE SIZE | LATEST UPDATE TIME |
|-----------|--------------------|
| `414GB`   | `2025-06-01`       |

### 【Model Download】

```python
from huggingface_hub import snapshot_download
snapshot_download('QuantTrio/DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact', cache_dir="local_path")
```


## DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="LICENSE" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<p align="center">
  <a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>

## 1. Introduction

The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.

<p align="center">
  <img width="80%" src="figures/benchmark.png">
</p>

Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model's accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.

Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.

## 2. Evaluation Results

### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
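Estimated this way, pass@1 for a query reduces to the fraction of its sampled responses that are correct, averaged over the benchmark; a minimal sketch for a single query:

```python
def pass_at_1(correct: list[bool]) -> float:
    # pass@1 estimated from n samples is (#correct / n) for one query;
    # benchmark scores average this quantity over all queries.
    return sum(correct) / len(correct)

# e.g. 14 of 16 sampled responses judged correct:
print(pass_at_1([True] * 14 + [False] * 2))  # 0.875
```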
<div align="center">

| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|--------------------|-------------|------------------|
| General | MMLU-Redux (EM) | 92.9 | 93.4 |
| | MMLU-Pro (EM) | 84.0 | 85.0 |
| | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| | SimpleQA (Correct) | 30.1 | 27.8 |
| | FRAMES (Acc.) | 82.5 | 83.0 |
| | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| | Codeforces-Div1 (Rating) | 1530 | 1930 |
| | SWE Verified (Resolved) | 49.2 | 57.6 |
| | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |

</div>

Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in the HLE test sets. GPT-4.1 is employed to play the user role in the Tau-Bench evaluation.

## 5. License
This code repository is licensed under the [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to the [MIT License](LICENSE). The DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.

## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).