Add links to paper and Github repository

#3
opened by nielsr (HF Staff)

Files changed (1)
  1. README.md +14 -9

README.md CHANGED
@@ -1,20 +1,21 @@
 ---
 base_model: LGAI-EXAONE/EXAONE-3.5-32B-Instruct
-base_model_relation: finetune
-license: other
-license_name: exaone
-license_link: LICENSE
 language:
 - en
 - ko
+library_name: transformers
+license: other
+license_name: exaone
+license_link: LICENSE
+pipeline_tag: text-generation
 tags:
 - lg-ai
 - exaone
 - exaone-deep
+base_model_relation: finetune
 ---
 
+```markdown
 <p align="center">
 <img src="assets/EXAONE_Symbol+BI_3d.png", width="300", style="margin: 40 auto;">
 <br>
@@ -242,8 +243,11 @@ We are working on quantized versions of EXAONE Deep models in both **AWQ** and *
 
 To achieve the expected performance, we recommend using the following configurations:
 
-1. Ensure the model starts with `<thought>\n` for reasoning steps. The model's output quality may be degraded when you omit it. You can easily apply this feature by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code on [Quickstart](#quickstart) section.
-2. The reasoning steps of EXAONE Deep models enclosed by `<thought>\n...\n</thought>` usually have lots of tokens, so previous reasoning steps may be necessary to be removed in multi-turn situation. The provided tokenizer handles this automatically.
+1. Ensure the model starts with `<thought>
+` for reasoning steps. The model's output quality may be degraded when you omit it. You can easily apply this feature by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code on [Quickstart](#quickstart) section.
+2. The reasoning steps of EXAONE Deep models enclosed by `<thought>
+...
+</thought>` usually have lots of tokens, so previous reasoning steps may be necessary to be removed in multi-turn situation. The provided tokenizer handles this automatically.
 3. Avoid using system prompt, and build the instruction on the user prompt.
 4. Additional instructions help the models reason more deeply, so that the models generate better output.
    - For math problems, the instructions **"Please reason step by step, and put your final answer within \boxed{}."** are helpful.
@@ -281,4 +285,5 @@ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICEN
 ```
 
 ## Contact
-LG AI Research Technical Support: [email protected]
+LG AI Research Technical Support: [email protected]
+```
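
Recommendation 1 in the diffed README (start generation with `<thought>\n`) can be sketched in code. The real prefix is produced by `tokenizer.apply_chat_template(..., add_generation_prompt=True)`; the plain-string template below is an illustrative stand-in, not the model's actual chat template, and the `[|user|]`/`[|assistant|]`/`[|endofturn|]` markers are assumed from the EXAONE convention.

```python
# Sketch of recommendation 1: make generation begin inside a `<thought>\n` block.
# This hand-rolled template is a simplified stand-in for the official chat
# template that tokenizer.apply_chat_template() would render.

def build_prompt(messages):
    """Render a chat history and open a `<thought>\n` block for the reply."""
    parts = []
    for m in messages:
        parts.append(f"[|{m['role']}|]{m['content']}[|endofturn|]\n")
    # The add_generation_prompt=True effect: open the assistant turn and the
    # reasoning block, so the model's first generated tokens are the thought.
    parts.append("[|assistant|]<thought>\n")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "How many r's are in strawberry?"}])
print(prompt.endswith("<thought>\n"))  # True
```

In practice you would not build this string yourself; the point is only that the rendered prompt must end with the opened `<thought>\n` block before calling `generate()`.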
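
Recommendation 2 (drop earlier reasoning in multi-turn chats) can likewise be illustrated. The provided tokenizer's chat template handles this automatically, per the README; the regex version below only shows the behavior being described and is not the official implementation.

```python
import re

# Sketch of recommendation 2: before re-rendering a multi-turn history, remove
# the `<thought>\n...\n</thought>` spans from earlier assistant replies, since
# they usually contain many tokens.
THOUGHT_RE = re.compile(r"<thought>\n.*?\n</thought>\n?", flags=re.DOTALL)

def strip_reasoning(messages):
    """Return a copy of the history with reasoning removed from assistant turns."""
    cleaned = []
    for m in messages:
        if m["role"] == "assistant":
            m = {**m, "content": THOUGHT_RE.sub("", m["content"]).lstrip()}
        cleaned.append(m)
    return cleaned

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "<thought>\n2 plus 2 equals 4.\n</thought>\nThe answer is 4."},
    {"role": "user", "content": "And times 3?"},
]
print(strip_reasoning(history)[1]["content"])  # The answer is 4.
```

Only the final answers from previous turns are kept, so the context window is spent on the conversation rather than on stale reasoning traces.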