---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Instruct-2407
tags:
  - axolotl
  - generated_from_trainer
datasets:
  - linabot/train_data
model-index:
  - name: linabot
    results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.8.0`

```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: MistralForCausalLM
hub_model_id: Alignment-Lab-AI/linabot
strict: false
chat_template: tokenizer_default
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
datasets:
  - path: linabot/train_data
    type: chat_template
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles_to_train: ['assistant', 'user']
    train_on_eos: turn

weight_decay: 0.03
warmup_steps: 450
dataset_prepared_path:
val_set_size: 0.2
output_dir: ./outputs/out

sequence_len: 10400
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: true

wandb_project: linabot
wandb_entity:
wandb_watch: all
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 5
optimizer: adalomo
lr_scheduler: cosine
learning_rate: 0.0002024
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
torch_compile_mode: "max-autotune"
bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1

evals_per_epoch: 8
saves_per_epoch: 1
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  pad_token: "<pad>"
```

</details>
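The `datasets` entry in the config uses Axolotl's `chat_template` format: each training row carries a `messages` list of role/content turns (per `field_messages` and `message_property_mappings` above), with loss computed on both `assistant` and `user` turns (`roles_to_train`). A minimal sketch of what one such row looks like, with illustrative values not taken from `linabot/train_data`:

```python
# Illustrative shape of one training row under the chat_template format
# configured above (field_messages: messages, role/content mapping).
# The actual contents of linabot/train_data are not shown in this card.
example_row = {
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
    ]
}
```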

# linabot

This model is a fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) on the linabot/train_data dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0558

## Model description

More information needed

## Intended uses & limitations

More information needed
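In the absence of author-provided usage notes, the following is a minimal, hedged sketch of loading the model with `transformers` and running one chat turn. The model id comes from `hub_model_id` in the config above, and `chat_template: tokenizer_default` means the tokenizer's built-in chat template is applied; the generation settings are placeholders, not values from the training run.

```python
# Minimal sketch: load the fine-tuned model and run one chat turn.
# Adjust dtype/device settings to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alignment-Lab-AI/linabot"  # hub_model_id from the axolotl config
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
# Uses the tokenizer's default chat template, as configured during training.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```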

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002024
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdaLomo (`OptimizerNames.ADALOMO`) with no additional optimizer arguments
- lr_scheduler_type: cosine (sketched below)
- lr_scheduler_warmup_steps: 450
- num_epochs: 5.0
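For reference, the learning-rate schedule implied by these settings can be sketched in plain Python using the standard linear-warmup-then-cosine-decay rule. This is an illustration, not code extracted from the training run; the 600 total optimizer steps are assumed from the results table below, which implies warmup covers 450 of 600 steps, leaving only the final 150 steps for cosine decay.

```python
import math

def lr_at_step(step, base_lr=0.0002024, warmup_steps=450, total_steps=600):
    """Linear warmup from 0 to base_lr, then cosine decay to 0 (standard rule)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR near the start, at the end of warmup, and at the final step.
print(f"{lr_at_step(1):.2e}, {lr_at_step(450):.2e}, {lr_at_step(600):.2e}")
```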

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.526         | 0.0083 | 1    | 1.5474          |
| 1.5934        | 0.125  | 15   | 1.5472          |
| 1.5242        | 0.25   | 30   | 1.5454          |
| 1.5296        | 0.375  | 45   | 1.5408          |
| 1.5087        | 0.5    | 60   | 1.5322          |
| 1.486         | 0.625  | 75   | 1.5188          |
| 1.4314        | 0.75   | 90   | 1.5005          |
| 1.4311        | 0.875  | 105  | 1.4782          |
| 1.4532        | 1.0    | 120  | 1.4513          |
| 1.4215        | 1.125  | 135  | 1.4198          |
| 1.3248        | 1.25   | 150  | 1.3825          |
| 1.2697        | 1.375  | 165  | 1.3386          |
| 1.3281        | 1.5    | 180  | 1.2880          |
| 1.2428        | 1.625  | 195  | 1.2296          |
| 1.1533        | 1.75   | 210  | 1.1596          |
| 1.1038        | 1.875  | 225  | 1.0747          |
| 1.0226        | 2.0    | 240  | 0.9723          |
| 0.8858        | 2.125  | 255  | 0.8467          |
| 0.6762        | 2.25   | 270  | 0.7047          |
| 0.6433        | 2.375  | 285  | 0.5626          |
| 0.4017        | 2.5    | 300  | 0.4283          |
| 0.2875        | 2.625  | 315  | 0.3072          |
| 0.2244        | 2.75   | 330  | 0.2161          |
| 0.1445        | 2.875  | 345  | 0.1572          |
| 0.0898        | 3.0    | 360  | 0.1192          |
| 0.0666        | 3.125  | 375  | 0.0991          |
| 0.0605        | 3.25   | 390  | 0.0855          |
| 0.0457        | 3.375  | 405  | 0.0757          |
| 0.052         | 3.5    | 420  | 0.0700          |
| 0.0634        | 3.625  | 435  | 0.0658          |
| 0.0364        | 3.75   | 450  | 0.0623          |
| 0.045         | 3.875  | 465  | 0.0601          |
| 0.0395        | 4.0    | 480  | 0.0582          |
| 0.0558        | 4.125  | 495  | 0.0573          |
| 0.0468        | 4.25   | 510  | 0.0566          |
| 0.0399        | 4.375  | 525  | 0.0562          |
| 0.0337        | 4.5    | 540  | 0.0560          |
| 0.0413        | 4.625  | 555  | 0.0559          |
| 0.0318        | 4.75   | 570  | 0.0558          |
| 0.0435        | 4.875  | 585  | 0.0558          |
| 0.0445        | 5.0    | 600  | 0.0558          |

### Framework versions

- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1