<details><summary>See axolotl config</summary>

axolotl version: `0.8.0`

```yaml
base_model: Dans-DiscountModels/mistral-7b-v0.3-ChatML
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code:
# wandb configuration
wandb_project: 7b-m-dans-personalityengine
wandb_watch:
wandb_run_id: V1.2.1-4-1 # V{Version}-{Run Number}-{Attempt Number}
wandb_log_model:
# push checkpoints to hub
hub_model_id: Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-5
# how to push checkpoints to hub
# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
hub_strategy: "every_save"
# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
# Required to be true when used in combination with `push_dataset_to_hub`
hf_use_auth_token: true
# where to save the finished model to
output_dir: ./7b-m-dans-personalityengine
# where to save the dataset to
dataset_prepared_path: ./7b-m-dans-personalityengine-data
save_safetensors: true
# dataset settings (local or huggingface repo)
datasets:
- path: Dans-DiscountModels/pretokenization-test-2
ds_type: parquet
type:
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
val_set_size: 0.005
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
gradient_checkpointing: true
# gradient_checkpointing_kwargs:
# use_reentrant: false
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: ademamix_8bit
optim_args: "beta1=0.9,beta2=0.999,beta3=0.999,alpha=10"
lr_scheduler: rex
learning_rate: 0.00000015
cosine_min_lr_ratio: 0.1
# weight_decay: 0.03
max_grad_norm: 0.001
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
early_stopping_patience:
resume_from_checkpoint:
auto_resume_from_checkpoints: false
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.03
evals_per_epoch: 24
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
save_total_limit: 1
debug: false
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
```

</details>
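The block above is a standard axolotl YAML file. A small sketch for inspecting the knobs that dominate memory and throughput before launching a run (assumes the config was saved locally as `config.yaml`; the filename is hypothetical):

```python
# Load the training config and print the settings that matter most for
# memory and throughput. Assumes the YAML above is saved as config.yaml
# (hypothetical path); requires PyYAML.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

for key in (
    "base_model",
    "sequence_len",
    "sample_packing",
    "micro_batch_size",
    "gradient_accumulation_steps",
    "learning_rate",
    "optimizer",
    "deepspeed",
):
    print(f"{key}: {cfg.get(key)}")
```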
# 7b-m-dans-personalityengine-v1.2.1-rc-5
This model is a fine-tuned version of [Dans-DiscountModels/mistral-7b-v0.3-ChatML](https://huggingface.co/Dans-DiscountModels/mistral-7b-v0.3-ChatML) on the Dans-DiscountModels/pretokenization-test-2 dataset.
It achieves the following results on the evaluation set:

- Loss: 1.4047
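The checkpoint loads with plain `transformers`. A minimal inference sketch, assuming a CUDA-capable GPU and that the tokenizer ships the ChatML chat template implied by the base model's name (the prompt text is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dans-DiscountModels/7b-m-dans-personalityengine-v1.2.1-rc-5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML-style conversation; apply_chat_template handles the special tokens.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```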
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 32 (see the sanity check after this list)
- total_eval_batch_size: 16
- optimizer: ademamix_8bit with args beta1=0.9, beta2=0.999, beta3=0.999, alpha=10
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 43
- num_epochs: 1.0
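The total batch sizes follow directly from the per-device settings; a quick sanity check of the arithmetic (values copied from the list above):

```python
micro_batch_size = 2   # per-device train/eval batch size
grad_accum_steps = 2   # gradient_accumulation_steps
num_devices = 8        # GPUs in the multi-GPU run

# Training accumulates gradients, so all three factors multiply.
assert micro_batch_size * grad_accum_steps * num_devices == 32  # total_train_batch_size

# Evaluation does no gradient accumulation, so only the devices multiply.
assert micro_batch_size * num_devices == 16  # total_eval_batch_size

# warmup_ratio 0.03 over the ~1464 optimizer steps of one epoch lands near
# the 43 reported warmup steps (exact rounding depends on the trainer).
print(0.03 * 1464)  # 43.92
```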
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5957        | 0.0007 | 1    | 1.5418          |
| 1.487         | 0.0417 | 61   | 1.4982          |
| 1.5851        | 0.0833 | 122  | 1.4720          |
| 1.3702        | 0.125  | 183  | 1.4596          |
| 1.5285        | 0.1667 | 244  | 1.4519          |
| 1.4809        | 0.2083 | 305  | 1.4461          |
| 1.3806        | 0.25   | 366  | 1.4414          |
| 1.5097        | 0.2917 | 427  | 1.4373          |
| 1.497         | 0.3333 | 488  | 1.4338          |
| 1.503         | 0.375  | 549  | 1.4306          |
| 1.384         | 0.4167 | 610  | 1.4278          |
| 1.4191        | 0.4583 | 671  | 1.4252          |
| 1.3042        | 0.5    | 732  | 1.4228          |
| 1.5669        | 0.5417 | 793  | 1.4206          |
| 1.4239        | 0.5833 | 854  | 1.4185          |
| 1.4472        | 0.625  | 915  | 1.4165          |
| 1.4692        | 0.6667 | 976  | 1.4147          |
| 1.4358        | 0.7083 | 1037 | 1.4130          |
| 1.4676        | 0.75   | 1098 | 1.4114          |
| 1.4657        | 0.7917 | 1159 | 1.4099          |
| 1.424         | 0.8333 | 1220 | 1.4085          |
| 1.3385        | 0.875  | 1281 | 1.4072          |
| 1.4373        | 0.9167 | 1342 | 1.4061          |
| 1.4226        | 0.9583 | 1403 | 1.4052          |
| 1.4225        | 1.0    | 1464 | 1.4047          |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
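If you rerun evaluation and see different numbers, a first step is confirming your environment matches these pins; a minimal check (package names as imported in Python):

```python
import datasets, tokenizers, torch, transformers

# Versions this card was produced with; adjust if you intentionally upgrade.
expected = {
    "transformers": "4.51.3",
    "torch": "2.5.1+cu124",
    "datasets": "3.5.0",
    "tokenizers": "0.21.1",
}
actual = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    flag = "" if actual[name] == want else "  <-- differs"
    print(f"{name}: {actual[name]} (card: {want}){flag}")
```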