redmoe-ai-v1 committed
Commit 9552c16 · verified · 1 Parent(s): c0969ad

Upload folder using huggingface_hub

Files changed (4)
  1. README.md +0 -193
  2. config.json +1 -2
  3. generation_config.json +1 -1
  4. tokenizer_config.json +2 -2
README.md CHANGED
@@ -1,193 +0,0 @@
- ---
- license: mit
- license_link: https://huggingface.co/rednote-hilab/dots.llm1.inst/blob/main/LICENSE
- pipeline_tag: text-generation
- base_model: rednote-hilab/dots.llm1.base
- tags:
- - chat
- library_name: transformers
- language:
- - en
- - zh
- ---
-
- # dots1
-
- <p align="center">
-     <img src="figures/new_logo.png" width="200"/>
- </p>
-
- <p align="center">
-     &nbsp;&nbsp;🤗 <a href="https://huggingface.co/rednote-hilab">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://github.com/rednote-hilab/dots.llm1/blob/main/dots1_tech_report.pdf">Paper</a>&nbsp;&nbsp;
-     <br>
-     🖥️ <a href="https://huggingface.co/spaces/rednote-hilab/dots-demo">Demo</a>&nbsp;&nbsp; | &nbsp;&nbsp;💬 <a href="figures/wechat.png">WeChat (微信)</a>&nbsp;&nbsp; | &nbsp;&nbsp;📕 <a href="https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c">rednote</a>&nbsp;&nbsp;
- </p>
-
- Visit our Hugging Face organization (links above) and search for checkpoints whose names start with `dots.llm1`, or browse the [dots1 collection](https://huggingface.co/collections/rednote-hilab/dotsllm1-68246aaaaba3363374a8aa7c), to find everything you need. Enjoy!
-
- ## News
-
- - 2025.06.06: We released the `dots.llm1` series. Check our [report](https://github.com/rednote-hilab/dots.llm1/blob/main/dots1_tech_report.pdf) for more details!
-
- ## 1. Introduction
-
- `dots.llm1` is a large-scale mixture-of-experts (MoE) model that activates 14B of its 142B total parameters, delivering performance on par with state-of-the-art models.
- Leveraging our meticulously crafted and efficient data processing pipeline, `dots.llm1` achieves performance comparable to Qwen2.5-72B after being pretrained on 11.2T high-quality tokens without synthetic data. To foster further research, we open-source intermediate training checkpoints for every one trillion tokens trained, providing valuable insights into the learning dynamics of large language models.
-
- <p align="center">
-     <img width="90%" src="./figures/performance.png">
- </p>
-
- ## 2. Model Summary
-
- **This repo contains the base and instruction-tuned `dots.llm1` models**, which have the following features:
-
- - Type: A MoE model with 14B activated and 142B total parameters, trained on 11.2T tokens.
- - Training Stages: Pretraining and SFT.
- - Architecture: Multi-head attention with QK-Norm in the attention layer; fine-grained MoE selecting the top 6 of 128 routed experts, plus 2 shared experts (see the routing sketch after this list).
- - Number of Layers: 62
- - Number of Attention Heads: 32
- - Supported Languages: English, Chinese
- - Context Length: 32,768 tokens
- - License: MIT
-
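To make the expert-routing figures above concrete, here is a minimal sketch of top-6-of-128 selection, assuming a plain linear softmax router with top-k renormalization (the `norm_topk_prob` behavior in this repo's config.json); the names and shapes are illustrative, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

# Illustrative sizes from the summary above; the router itself is a hypothetical stand-in.
HIDDEN, N_ROUTED, TOP_K = 4096, 128, 6
router = nn.Linear(HIDDEN, N_ROUTED, bias=False)

def route(hidden: torch.Tensor):
    """Pick the top-6 of 128 routed experts per token and renormalize their weights."""
    scores = router(hidden).softmax(dim=-1)                # (n_tokens, 128) routing probabilities
    weights, expert_ids = scores.topk(TOP_K, dim=-1)       # six experts per token
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the selected six
    return weights, expert_ids

tokens = torch.randn(4, HIDDEN)
weights, expert_ids = route(tokens)
print(expert_ids.shape)  # torch.Size([4, 6])
```

In this style of MoE, the 2 shared experts additionally process every token, and their output is summed with the weighted outputs of the six selected routed experts.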
- The highlights from `dots.llm1` include:
-
- - **Enhanced Data Processing**: We propose a scalable and fine-grained *three-stage* data processing framework designed to generate large-scale, high-quality and diverse data for pretraining.
- - **No Synthetic Data during Pretraining**: *11.2 trillion* high-quality non-synthetic tokens were used in base model pretraining.
- - **Performance and Cost Efficiency**: `dots.llm1` is an open-source model that activates only *14B* parameters at inference, delivering both comprehensive capabilities and high computational efficiency.
- - **Infrastructure**: We introduce an innovative MoE all-to-all communication and computation overlapping recipe based on interleaved 1F1B pipeline scheduling and an efficient grouped GEMM implementation to boost computational efficiency (see the reference sketch after this list).
- - **Open Accessibility to Model Dynamics**: Intermediate model checkpoints are released for *every 1T tokens* trained, facilitating future research into the learning dynamics of large language models.
-
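The grouped GEMM in the infrastructure highlight fuses the many small per-expert matrix multiplies of an MoE layer into one batched kernel. The loop below is only a hypothetical reference for the semantics being fused (toy sizes, invented names), not the optimized implementation described in the report:

```python
import torch

def grouped_gemm_reference(tokens, expert_ids, expert_weights):
    """Reference semantics: run each token through its assigned expert's weight matrix.
    A real grouped GEMM executes all of these per-expert GEMMs in a single fused kernel."""
    out = torch.zeros(tokens.size(0), expert_weights.size(-1))
    for e in range(expert_weights.size(0)):                  # one small GEMM per expert
        picked = (expert_ids == e).nonzero(as_tuple=True)[0]
        if picked.numel():
            out[picked] = tokens[picked] @ expert_weights[e]
    return out

toks = torch.randn(8, 16)                # 8 tokens, hidden size 16 (toy)
ids = torch.randint(0, 4, (8,))          # each token assigned to one of 4 experts
W = torch.randn(4, 16, 32)               # per-expert weight matrices
print(grouped_gemm_reference(toks, ids, W).shape)  # torch.Size([8, 32])
```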
- ## 3. Example Usage
-
- ### Model Downloads
-
- <div align="center">
-
- | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download Link** |
- | :------------: | :------------: | :------------: | :------------: | :------------: |
- | dots.llm1.base | 142B | 14B | 32K | [🤗 Hugging Face](https://huggingface.co/rednote-hilab/dots.llm1.base) |
- | dots.llm1.inst | 142B | 14B | 32K | [🤗 Hugging Face](https://huggingface.co/rednote-hilab/dots.llm1.inst) |
-
- </div>
-
- ### Docker (recommended)
-
- Docker images are available on [Docker Hub](https://hub.docker.com/repository/docker/rednotehilab/dots1/tags), based on the official images.
-
- You can start a server via vLLM:
-
- ```shell
- docker run --gpus all \
-     -v ~/.cache/huggingface:/root/.cache/huggingface \
-     -p 8000:8000 \
-     --ipc=host \
-     rednotehilab/dots1:vllm-openai-v0.9.0.1 \
-     --model rednote-hilab/dots.llm1.inst \
-     --tensor-parallel-size 8 \
-     --trust-remote-code \
-     --served-model-name dots1
- ```
-
- You can then verify that the model is running with the following request:
-
- ```shell
- curl http://localhost:8000/v1/chat/completions \
-     -H "Content-Type: application/json" \
-     -d '{
-         "model": "dots1",
-         "messages": [
-             {"role": "system", "content": "You are a helpful assistant."},
-             {"role": "user", "content": "Who won the world series in 2020?"}
-         ],
-         "max_tokens": 32,
-         "temperature": 0
-     }'
- ```
-
- ### Inference with Hugging Face Transformers
-
- #### Text Completion
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
-
- model_name = "rednote-hilab/dots.llm1.base"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="eager")
- model.generation_config = GenerationConfig.from_pretrained(model_name)
-
- text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
- inputs = tokenizer(text, return_tensors="pt")
- outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
- result = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(result)
- ```
-
- #### Chat Completion
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
-
- model_name = "rednote-hilab/dots.llm1.inst"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="eager")
- model.generation_config = GenerationConfig.from_pretrained(model_name)
-
- messages = [
-     {"role": "user", "content": "Write a piece of quicksort code in C++"}
- ]
- input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
- outputs = model.generate(input_tensor.to(model.device), max_new_tokens=200)
-
- result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
- print(result)
- ```
-
- ### Inference with SGLang
-
- [SGLang](https://github.com/sgl-project/sglang) is a fast serving framework for large language models and vision-language models. SGLang can launch a server with an OpenAI-compatible API. `sglang>=***` is required. It is as easy as:
-
- ```shell
- python -m sglang.launch_server --model-path dots.llm1.inst --tp 8 --host 0.0.0.0 --port 8000
- ```
-
- An OpenAI-compatible API will be available at `http://localhost:8000/v1`.
-
- ### Inference with vLLM
-
- [vLLM](https://github.com/vllm-project/vllm) is a high-throughput and memory-efficient inference and serving engine for LLMs. `vllm>=***` is recommended.
-
- ```shell
- vllm serve dots.llm1.inst --port 8000 --tensor-parallel-size 8
- ```
-
- An OpenAI-compatible API will be available at `http://localhost:8000/v1`.
-
- ## 4. Evaluation Results
-
- Detailed evaluation results are reported in this [📑 report](https://github.com/rednote-hilab/dots.llm1/blob/main/dots1_tech_report.pdf).
-
- ## Citation
-
- If you find `dots.llm1` useful or want to use it in your projects, please cite our paper:
-
- ```
- @article{dots1,
-     title={dots.llm1 Technical Report},
-     author={rednote-hilab},
-     journal={arXiv preprint arXiv:TBD},
-     year={2025}
- }
- ```
config.json CHANGED
@@ -5,7 +5,7 @@
   "attention_bias": false,
   "attention_dropout": 0.0,
   "bos_token_id": null,
-  "eos_token_id": 151643,
+  "eos_token_id": 151645,
   "first_k_dense_replace": 1,
   "hidden_act": "silu",
   "hidden_size": 4096,
@@ -14,7 +14,6 @@
   "max_position_embeddings": 32768,
   "model_type": "dots1",
   "moe_intermediate_size": 1408,
-  "moe_layer_freq": 1,
   "n_routed_experts": 128,
   "n_shared_experts": 2,
   "norm_topk_prob": true,
generation_config.json CHANGED
@@ -1,6 +1,6 @@
 {
   "_from_model_config": true,
   "bos_token_id": null,
-  "eos_token_id": 151643,
+  "eos_token_id": 151645,
   "transformers_version": "4.46.3"
 }
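Both JSON changes above move `eos_token_id` from 151643 to 151645. A quick sanity check (a sketch, assuming `from_pretrained` fetches the files from this commit) is to confirm that 151645 is the id of the `<|endofresponse|>` token that tokenizer_config.json below now uses as EOS:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("rednote-hilab/dots.llm1.inst")
print(tok.eos_token)                                   # expected: <|endofresponse|>
print(tok.convert_tokens_to_ids("<|endofresponse|>"))  # expected: 151645, matching config.json
```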
tokenizer_config.json CHANGED
@@ -134,10 +134,10 @@
   "bos_token": null,
   "chat_template": "{% if messages[0]['role'] == 'system' %}<|system|>{{ messages[0]['content'] }}<|endofsystem|>{% set start_idx = 1 %}{% else %}<|system|><|endofsystem|>{% set start_idx = 0 %}{% endif %}{% for idx in range(start_idx, messages|length) %}{% if messages[idx]['role'] == 'user' %}<|userprompt|>{{ messages[idx]['content'] }}<|endofuserprompt|>{% elif messages[idx]['role'] == 'assistant' %}<|response|>{{ messages[idx]['content'] }}<|endofresponse|>{% endif %}{% endfor %}{% if add_generation_prompt and messages[-1]['role'] == 'user' %}<|response|>{% endif %}",
   "clean_up_tokenization_spaces": false,
-  "eos_token": "<|endoftext|>",
+  "eos_token": "<|endofresponse|>",
   "errors": "replace",
   "model_max_length": 32768,
-  "pad_token": "<|endoftext|>",
+  "pad_token": "<|endofresponse|>",
   "split_special_tokens": false,
   "tokenizer_class": "Qwen2Tokenizer",
   "unk_token": null