---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
  - generated_from_trainer
datasets:
  - instruction_solution_to_thought_dataset.jsonl
model-index:
  - name: outputs_solution_to_thought
    results: []
---

Built with Axolotl

See the axolotl config below (axolotl version `0.7.0`):

```yaml
# Base model configuration
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
load_in_4bit: true

# Dataset configuration
datasets:
  - path: instruction_solution_to_thought_dataset.jsonl
    type: chat_template

# Chat template
chat_template: chatml

# LoRA adapter configuration
adapter: lora
lora_r: 16
lora_alpha: 16
lora_dropout: 0
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

# Training hyperparameters
max_seq_length: 128000
micro_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 3e-5
num_epochs: 2
warmup_steps: 100
optimizer: adamw_8bit
weight_decay: 0.01
lr_scheduler_type: cosine
max_grad_norm: 1.0
output_dir: ./outputs_solution_to_thought
seed: 3407
merge_lora: true
hf_upload: true
hf_repo: secemp9/TraceBack-12b
xformers_attention:
flash_attention: true
#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true
#fp16: true
#load_in_8bit: true  # Enable 8-bit loading for LoRA finetuning
bf16: true           # Enable BF16 mixed precision

# Multi-GPU training with DeepSpeed
deepspeed: deepspeed_configs/zero2.json

# Optional: enable gradient checkpointing
gradient_checkpointing: true
```
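
For readers more familiar with PEFT than Axolotl, the LoRA adapter section above corresponds roughly to the following `LoraConfig`. This is an illustrative sketch, not the exact object Axolotl constructs internally.

```python
from peft import LoraConfig

# Sketch of a PEFT LoraConfig mirroring the adapter settings in the Axolotl config above.
lora_config = LoraConfig(
    r=16,                # lora_r
    lora_alpha=16,       # lora_alpha
    lora_dropout=0.0,    # lora_dropout
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```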

# outputs_solution_to_thought

This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit) on the `instruction_solution_to_thought_dataset.jsonl` dataset.
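
A minimal inference sketch with `transformers` is shown below. It assumes the merged weights are published under `secemp9/TraceBack-12b` (the config sets `merge_lora: true` and `hf_repo`); if the repository only hosts the LoRA adapter, load the base model first and attach the adapter with `peft` instead. The prompt content is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes merged weights are available under this repo id (see merge_lora / hf_repo above).
model_id = "secemp9/TraceBack-12b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# The model was trained with the ChatML template (chat_template: chatml in the config).
messages = [
    # Placeholder prompt; see the dataset format for the expected instruction/solution layout.
    {"role": "user", "content": "Instruction and solution go here."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```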

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128 (see the sketch below)
- total_eval_batch_size: 16
- optimizer: adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2.0

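The total train batch size reported above is simply the product of the per-device micro-batch size, the gradient accumulation steps, and the number of devices:

```python
# Effective (total) train batch size = micro_batch_size * grad_accum_steps * num_devices
micro_batch_size = 2
gradient_accumulation_steps = 8
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128

# Evaluation uses no gradient accumulation, so:
total_eval_batch_size = micro_batch_size * num_devices
print(total_eval_batch_size)  # 16
```
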
### Training results

### Framework versions

- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0