alexmarques committed · Commit 343b418 · verified · 1 parent: c1e6631

Update README.md
Files changed (1): README.md (+15 -15)
README.md CHANGED
@@ -29,7 +29,7 @@ license: llama3.1
  - **Model Developers:** Neural Magic
 
  Quantized version of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
- It achieves scores within 3% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8k, Hellaswag, Winogrande and TruthfulQA.
+ It achieves scores within 1% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8k, Hellaswag, Winogrande and TruthfulQA.
 
  ### Model Optimizations
 
@@ -154,9 +154,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>82.21
  </td>
- <td>79.91
+ <td>81.88
  </td>
- <td>97.2%
+ <td>99.6%
  </td>
  </tr>
  <tr>
@@ -164,9 +164,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>95.05
  </td>
- <td>93.09
+ <td>94.97
  </td>
- <td>97.9%
+ <td>99.9%
  </td>
  </tr>
  <tr>
@@ -174,9 +174,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>93.10
  </td>
- <td>93.18
+ <td>93.25
  </td>
- <td>100.1%
+ <td>100.2%
  </td>
  </tr>
  <tr>
@@ -184,9 +184,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>86.40
  </td>
- <td>85.46
+ <td>86.28
  </td>
- <td>98.9%
+ <td>99.9%
  </td>
  </tr>
  <tr>
@@ -194,9 +194,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>85.00
  </td>
- <td>85.24
+ <td>85.00
  </td>
- <td>100.3%
+ <td>100.0%
  </td>
  </tr>
  <tr>
@@ -204,9 +204,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td>59.83
  </td>
- <td>58.55
+ <td>60.88
  </td>
- <td>97.9%
+ <td>101.8%
  </td>
  </tr>
  <tr>
@@ -214,9 +214,9 @@ This version of the lm-evaluation-harness includes versions of ARC-Challenge and
  </td>
  <td><strong>83.60</strong>
  </td>
- <td><strong>82.57</strong>
+ <td><strong>83.71</strong>
  </td>
- <td><strong>98.8%</strong>
+ <td><strong>100.1%</strong>
  </td>
  </tr>
  </table>
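For context on the numbers being swapped in: each updated recovery cell is simply the quantized score divided by the unquantized baseline in the same row. Below is a minimal sketch of that arithmetic, using only the values visible in the hunks above; the per-row benchmark labels are not shown in the diff, so the mapping here follows the benchmark list in the model description and should be treated as an assumption.

```python
# Recovery (%) = 100 * quantized score / unquantized baseline score.
# Baseline and quantized values come from the diff hunks above; the
# benchmark label attached to each row is not visible in the diff and is
# assumed from the benchmark list in the model description.
scores = {
    "MMLU":          (82.21, 81.88),
    "ARC-Challenge": (95.05, 94.97),
    "GSM-8k":        (93.10, 93.25),
    "Hellaswag":     (86.40, 86.28),
    "Winogrande":    (85.00, 85.00),
    "TruthfulQA":    (59.83, 60.88),
    "Average":       (83.60, 83.71),
}

for name, (baseline, quantized) in scores.items():
    recovery = 100.0 * quantized / baseline
    print(f"{name:<14} baseline={baseline:6.2f} quantized={quantized:6.2f} recovery={recovery:5.1f}%")
```

Running this reproduces the recovery figures added in the commit (99.6%, 99.9%, 100.2%, 99.9%, 100.0%, 101.8%, and 100.1% for the bolded average row); the largest relative drop is about 0.4%, which is what supports tightening the description from "within 3%" to "within 1%".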