shumingma committed on
Commit 32f217a · 1 Parent(s): ef07e1d

update readme
Files changed (3):
  1. LICENSE +21 -0
  2. README.md +139 -1
  3. model.safetensors +0 -3
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,3 +1,141 @@
  ---
- license: unknown
+ license: mit
+ license_link: https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - bitnet
+ - text-generation
+ - large-language-model
+ library_name: transformers
  ---
+
+ # BitNet b1.58 2B4T - Scaling Native 1-bit LLM
+
+ This repository contains the weights for **BitNet b1.58 2B4T**, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale, developed by Microsoft Research.
+
+ Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
+
+ ➡️ **Technical Report:** [BitNet b1.58 2B4T Technical Report](https://arxiv.org)
+
+ ➡️ **Official Inference Code:** [microsoft/BitNet (bitnet.cpp)](https://github.com/microsoft/BitNet)
+
+ ## Model Variants
+
+ Several versions of the model weights are available on Hugging Face:
+
+ * [**`microsoft/bitnet-b1.58-2B-4T`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T): Contains the packed 1.58-bit weights, optimized for efficient inference. **Use this for deployment.**
+
+ * [**`microsoft/bitnet-b1.58-2B-4T-bf16`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16) (this repository): Contains the master weights in BF16 format. **Use this only for training or fine-tuning.**
+
+ * [**`microsoft/bitnet-b1.58-2B-4T-gguf`**](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf): Contains the model weights in GGUF format, compatible with the `bitnet.cpp` library for CPU inference.
+
+ ## Model Details
+
+ * **Architecture:** Transformer-based, modified with `BitLinear` layers (BitNet framework).
+   * Uses Rotary Position Embeddings (RoPE).
+   * Uses squared ReLU (ReLU²) activation in FFN layers.
+   * Employs [`subln`](https://proceedings.mlr.press/v202/wang23u.html) normalization.
+   * No bias terms in linear or normalization layers.
+ * **Quantization:** Native 1.58-bit weights and 8-bit activations (W1.58A8); see the sketch after this list.
+   * Weights are quantized to ternary values {-1, 0, +1} using absmean quantization during the forward pass.
+   * Activations are quantized to 8-bit integers using absmax quantization (per-token).
+   * **Crucially, the model was *trained from scratch* with this quantization scheme, not post-training quantized.**
+ * **Parameters:** ~2 billion
+ * **Training Tokens:** 4 trillion
+ * **Training Stages:**
+   1. **Pre-training:** Large-scale training on public text/code and synthetic math data, using a two-stage learning-rate and weight-decay schedule.
+   2. **Supervised Fine-tuning (SFT):** Fine-tuned on instruction-following and conversational datasets using sum loss aggregation and specific hyperparameter tuning.
+   3. **Direct Preference Optimization (DPO):** Aligned with human preferences using preference pairs.
+ * **Tokenizer:** LLaMA 3 tokenizer (vocab size: 128,256).
+
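+ For intuition, here is a minimal, self-contained sketch of the W1.58A8 scheme in plain PyTorch. The function names are ours, the real kernels fuse these steps, and training additionally relies on a straight-through estimator, so treat this as an illustration rather than the reference implementation:
+
+ ```python
+ import torch
+
+ def absmean_quant_weights(w: torch.Tensor):
+     # Absmean quantization: scale by the mean absolute value of the
+     # weight matrix, then round into the ternary set {-1, 0, +1}.
+     scale = w.abs().mean().clamp(min=1e-5)
+     w_q = (w / scale).round().clamp(-1, 1)
+     return w_q, scale
+
+ def absmax_quant_activations(x: torch.Tensor):
+     # Per-token absmax quantization into the signed 8-bit integer range.
+     scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
+     x_q = (x * scale).round().clamp(-128, 127)
+     return x_q, scale
+
+ def bitlinear_forward(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
+     # Quantize both operands, multiply, then undo the two scales.
+     # Note there is no bias term, matching the model card.
+     x_q, x_scale = absmax_quant_activations(x)
+     w_q, w_scale = absmean_quant_weights(w)
+     return (x_q @ w_q.t()) * (w_scale / x_scale)
+ ```
+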
+ ## How to Use (with `transformers`)
+
+ **VERY IMPORTANT NOTE ON EFFICIENCY**
+
+ > Please do NOT expect performance efficiency gains (in terms of speed, latency, or energy consumption) when using this model with the standard `transformers` library, even with the required fork.
+ >
+ > The current execution paths within `transformers` do not contain the specialized, highly optimized computational kernels required to leverage the advantages of the BitNet architecture. Running the model via `transformers` will likely result in inference speeds and energy usage comparable to, or potentially worse than, standard full-precision models within this framework, on both CPU and GPU.
+ >
+ > While you might observe reduced memory usage due to the quantized weights, the primary computational efficiency benefits are not accessible through this standard `transformers` usage path.
+ >
+ > To achieve the efficiency benefits demonstrated in the technical report, you MUST use the dedicated C++ implementation: [bitnet.cpp](https://github.com/microsoft/BitNet).
+
+ ### Requirements
+
+ ```bash
+ pip install git+https://github.com/shumingma/transformers.git
+ ```
+
+ We are actively working with the Hugging Face team to integrate the necessary code into the main `transformers` library. This installation method may change in the future.
+
+ ### Example
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "microsoft/bitnet-b1.58-2B-4T"
+
+ # Load the tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16
+ )
+
+ # Apply the chat template
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "How are you?"},
+ ]
+ chat_input = tokenizer.apply_chat_template(
+     messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ # Generate, then decode only the newly generated tokens
+ chat_outputs = model.generate(chat_input, max_new_tokens=50)
+ response = tokenizer.decode(chat_outputs[0][chat_input.shape[-1]:], skip_special_tokens=True)
+ print("\nAssistant Response:", response)
+ ```
+
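+ As one possible variation (this uses the standard `transformers`/`accelerate` `device_map` option, nothing BitNet-specific, and the efficiency caveat above still applies), the model can be placed on a GPU at load time:
+
+ ```python
+ # Requires `pip install accelerate`; places layers on available devices.
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ ```
+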
+ ## How to Use (with `bitnet.cpp`)
+
+ Please refer to the [bitnet.cpp](https://github.com/microsoft/BitNet) GitHub repository for detailed compilation steps, usage examples, and command-line options.
+
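+ As one illustrative preparatory step (our sketch, not taken from the `bitnet.cpp` docs), the GGUF weights that `bitnet.cpp` consumes can be fetched with `huggingface_hub`:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the GGUF variant of the weights for bitnet.cpp CPU inference.
+ local_dir = snapshot_download(repo_id="microsoft/bitnet-b1.58-2B-4T-gguf")
+ print("GGUF files downloaded to:", local_dir)
+ ```
+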
+ ## Evaluation
+
+ BitNet b1.58 2B4T was evaluated against leading open-weight, full-precision LLMs of similar size. Below are the key results (all models are instruction-tuned versions):
+
+ | Metric / Benchmark | LLaMA 3.2 1B | Gemma-3 1B | Qwen2.5 1.5B | SmolLM2 1.7B | MiniCPM 2B | **BitNet b1.58 2B** |
+ |--------------------------------|--------------|------------|--------------|--------------|------------|---------------------|
+ | **Memory (Non-emb)** | 2GB | 1.4GB | 2.6GB | 3.2GB | 4.8GB | **0.4GB** |
+ | **Latency (CPU Decoding)** | 48ms | 41ms | 65ms | 67ms | 124ms | **29ms** |
+ | **Energy (Estimated)** | 0.258J | 0.186J | 0.347J | 0.425J | 0.649J | **0.028J** |
+ | **Training Tokens (Pre-train)**| 9T* | 2T** | 18T | 11T | 1.1T | 4T |
+ | ARC-Challenge | 37.80 | 38.40 | 46.67 | 43.52 | 44.80 | **49.91** |
+ | ARC-Easy | 63.17 | 63.13 | **76.01** | 62.92 | 72.14 | 74.79 |
+ | OpenbookQA | 34.80 | 38.80 | 40.80 | **46.00** | 40.20 | 41.60 |
+ | BoolQ | 64.65 | 74.22 | 78.04 | 75.78 | **80.67** | 80.18 |
+ | HellaSwag | 60.80 | 57.69 | 68.28 | **71.71** | 70.81 | 68.44 |
+ | PIQA | 74.21 | 71.93 | 76.12 | 76.12 | 76.66 | **77.09** |
+ | WinoGrande | 59.51 | 58.48 | 62.83 | 68.98 | 61.80 | **71.90** |
+ | CommonsenseQA | 58.48 | 42.10 | **76.41** | 63.55 | 71.74 | 71.58 |
+ | TruthfulQA | 43.80 | 38.66 | **46.67** | 39.90 | 41.41 | 45.31 |
+ | TriviaQA | 37.60 | 23.49 | 38.37 | **45.97** | 34.13 | 33.57 |
+ | MMLU | 45.58 | 39.91 | **60.25** | 49.24 | 51.82 | 53.17 |
+ | HumanEval+ | 31.10 | 37.20 | **50.60** | 28.00 | 43.90 | 38.40 |
+ | GSM8K | 38.21 | 31.16 | 56.79 | 45.11 | 4.40 | **58.38** |
+ | MATH-500 | 23.00 | 42.00 | **53.00** | 17.60 | 14.80 | 43.40 |
+ | IFEval | 62.71 | **66.67** | 50.12 | 57.91 | 36.81 | 53.48 |
+ | MT-bench | 5.43 | 6.40 | 6.12 | 5.50 | **6.57** | 5.85 |
+ | **Average** | 44.90 | 43.74 | **55.23** | 48.70 | 42.05 | 54.19 |
+
+ \* LLaMA 3.2 1B uses pruning & distillation.
+
+ \*\* Gemma-3 1B uses distillation.
+
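+ As a rough sanity check on the memory row (our back-of-the-envelope arithmetic, not a figure from the report): storing roughly 2 × 10⁹ non-embedding weights at 1.58 bits each takes about 2 × 10⁹ × 1.58 / 8 ≈ 0.4 GB, versus roughly 4 GB if the same weights were kept in BF16 at 2 bytes each.
+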
+ ## License
+
+ The model weights and code are released under the [MIT License](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/blob/main/LICENSE).
+
+ ## Disclaimer
+
+ This model is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use it responsibly.
model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:529637ff6dab1f5890767356928693f69ffe61d3b6040a43de9306b37bfd5ae1
- size 4825679400