Update README.md
README.md CHANGED
@@ -17,17 +17,34 @@ This lora was trained on 250k post and response pairs from 43 different financial,
* Training code will be released soon.
* Dataset and tools for building the dataset will be released soon.

-
+## Training Details

-
+One noteworthy change I will mention now: this was trained with a causal LM objective rather than the seq2seq setup that a number of the other instruct models have used. I can't explain why they used seq2seq data collators, other than that it is what Alpaca LoRA originally used. LLaMA as a generative model was trained as a causal LM, so to me it makes sense to use the same objective when fine-tuning.
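To illustrate that collator choice, here is a minimal sketch using the Hugging Face `transformers` API. The training code has not been released yet, so the base-model name and everything beyond the collator class are assumptions, not the project's actual script.

```python
# Minimal sketch (assumptions noted above): causal-LM fine-tuning batches with
# DataCollatorForLanguageModeling(mlm=False) instead of the DataCollatorForSeq2Seq
# collator used by many Alpaca-style LoRA scripts.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")  # assumed 7B LLaMA base
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token

# With mlm=False the collator builds labels from the input ids themselves,
# so prompt and response are learned as one continuous causal sequence.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```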

-
+* More coming soon.
+
+### Training Hyperparams
+
+| Hyperparameter | LLaMA-7B |
+|----------------|----------|
+| Learning rate | 2.5e-4 |
+| Epochs | 3 |
+| Optimizer | adamw_bnb_8bit |
+| Warmup steps | 300 |
+| LR scheduler | polynomial |
+| lora_r | 32 |
+| lora_alpha | 64 |
+| lora_dropout | 0.05 |
+| lora_target_modules | ["q_proj", "v_proj"] |
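The hyperparameters above map directly onto a `peft` + `transformers` configuration. The sketch below only restates the table in code; the class names and the `output_dir` placeholder are assumptions about the unreleased training script rather than its actual contents.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA shape from the table: rank 32, alpha 64, dropout 0.05, attention q/v projections.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer and schedule from the table; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="fin-lora-7b",          # hypothetical path
    learning_rate=2.5e-4,
    num_train_epochs=3,
    optim="adamw_bnb_8bit",
    warmup_steps=300,
    lr_scheduler_type="polynomial",
)
```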
+
+
+## Usage

This is a LoRA adapter, and it needs to be loaded on top of a 7B LLaMA base model, for example in text-generation-webui: https://github.com/oobabooga/text-generation-webui/blob/main/docs/Using-LoRAs.md

* Inference code and other scripts may follow; a minimal loading sketch is shown below.
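Outside text-generation-webui, the adapter can also be attached with `peft`. A minimal sketch, assuming the Hugging Face `transformers`/`peft` APIs and placeholder model paths, since no official inference script exists yet:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "decapoda-research/llama-7b-hf"  # assumed 7B LLaMA base checkpoint
ADAPTER_ID = "path/to/this-lora"           # placeholder path to this LoRA

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the frozen base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()
```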

-
+## Prompting

Editing the system prompt can have some effect on the replies.
```
@@ -43,7 +60,7 @@ You are an experienced financial analyst. You are tasked with responding to user
<|RESPONSE|>
```
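The prompt template is truncated by this diff, so the exact layout between `<|SYSTEM|>` and `<|RESPONSE|>` is not fully visible; the helper below is a hypothetical illustration of the visible tags, and the placement of the user's post is an assumption. Replies in the examples appear to terminate with `<|END_RESPONSE|>`, so that string should work as a stop sequence.

```python
# Hypothetical prompt helper based on the tags visible in this README:
# <|SYSTEM|> ... <|RESPONSE|> ... <|END_RESPONSE|>.
# The README's example system prompt is truncated in the diff, hence the ellipsis.
SYSTEM_PROMPT = "You are an experienced financial analyst. You are tasked with responding to user..."

def build_prompt(post: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Assemble a generation prompt; assumes the post follows the system block."""
    return f"<|SYSTEM|>\n{system_prompt}\n\n{post}\n<|RESPONSE|>\n"

STOP_SEQUENCE = "<|END_RESPONSE|>"  # trim generated text at this marker
```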

-
+## Examples:

```
<|SYSTEM|>
@@ -93,7 +110,7 @@ Just make sure it works well enough, and leave it at that.
<|END_RESPONSE|>
```

-
+## Evaluation

In progress.