ShieldX committed
Commit 2404ef6 · verified · 1 Parent(s): c1bb4ad

Upload README.md with huggingface_hub

Files changed (1): README.md +12 -53
README.md CHANGED
@@ -1,63 +1,22 @@
 ---
+language:
+- en
 license: apache-2.0
-library_name: peft
 tags:
-- trl
-- sft
+- text-generation-inference
+- transformers
 - unsloth
-- generated_from_trainer
-datasets:
-- generator
+- llama
+- trl
 base_model: unsloth/tinyllama
-model-index:
-- name: manovyadh-1.1B-v1
-  results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# manovyadh-1.1B-v1
-
-This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on the generator dataset.
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 2
-- eval_batch_size: 8
-- seed: 3407
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 3
-- mixed_precision_training: Native AMP
-
-### Training results
-
-
-
-### Framework versions
-
-- PEFT 0.7.1
-- Transformers 4.38.0.dev0
-- Pytorch 2.1.0+cu121
-- Datasets 2.16.1
-- Tokenizers 0.15.1
+# Uploaded model
+
+- **Developed by:** ShieldX
+- **License:** apache-2.0
+- **Finetuned from model :** unsloth/tinyllama
+
+This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
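The removed card listed `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` and `learning_rate: 2e-05`. The schedule that combination describes can be sketched as a plain function; `linear_lr` is a hypothetical helper illustrating the schedule shape, not the author's training code.

```python
# Hypothetical sketch of the removed card's LR schedule: linear warmup over
# the first 10% of steps, then linear decay to zero. base_lr and warmup_ratio
# mirror the card; the function itself is only an illustration.
def linear_lr(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # ramp up linearly from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # then decay linearly from base_lr to 0
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

print(linear_lr(50, 1000))  # mid-warmup: half the base rate, 1e-05
```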
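The removed card's optimizer line, `Adam with betas=(0.9,0.999) and epsilon=1e-08`, corresponds to the textbook Adam update. A minimal single-parameter sketch of that update, using the card's constants (the `adam_step` helper is hypothetical, not the trainer's implementation):

```python
# Textbook Adam update for one scalar parameter, using the removed card's
# settings: lr=2e-5, betas=(0.9, 0.999), epsilon=1e-08. Illustration only.
def adam_step(param, grad, m, v, t, lr=2e-5, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction for step t (1-based)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# On the first step the bias-corrected update is roughly lr * sign(grad).
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(p)  # approximately -2e-05
```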