Built with Axolotl

See axolotl config

axolotl version: 0.10.0.dev0

# === Start-up Commands ===
# curl -LsSf https://astral.sh/uv/install.sh | sh
# export PATH="$HOME/.local/bin:$PATH"
# uv venv
# source .venv/bin/activate
# git clone https://github.com/axolotl-ai-cloud/axolotl
# cd axolotl
# uv pip install torch==2.5.1 packaging ninja setuptools ftfy deepspeed huggingface_hub[cli,hf_transfer]
# uv pip install "cut-cross-entropy[transformers] @ git+https://github.com/strangedove/ml-cross-entropy.git@gemma3-multimodal"
# uv pip install apollo-torch
# uv pip install --no-build-isolation -e .[flash-attn]
# uv pip install git+https://github.com/huggingface/transformers.git
# uv pip install git+https://github.com/linkedin/Liger-Kernel.git
# export HF_HUB_ENABLE_HF_TRANSFER=1
# huggingface-cli login --token $hf_key && wandb login $wandb_key

# One-line equivalent (note: this variant also installs libopenmpi-dev/mpi4py and
# uses the qwen3 branch of cut-cross-entropy rather than gemma3-multimodal):
# apt update && apt install -y libopenmpi-dev && \
#   curl -LsSf https://astral.sh/uv/install.sh | sh && export PATH="$HOME/.local/bin:$PATH" && \
#   git clone https://github.com/axolotl-ai-cloud/axolotl && uv venv && source .venv/bin/activate && cd axolotl && \
#   uv pip install torch==2.5.1 packaging ninja mpi4py setuptools ftfy deepspeed huggingface_hub[cli,hf_transfer] && \
#   uv pip install apollo-torch && \
#   uv pip install "cut-cross-entropy[transformers] @ git+https://github.com/strangedove/ml-cross-entropy.git@qwen3" && \
#   uv pip install git+https://github.com/linkedin/Liger-Kernel.git && \
#   uv pip install --no-build-isolation -e .[flash-attn] && \
#   uv pip install git+https://github.com/huggingface/transformers.git && \
#   export HF_HUB_ENABLE_HF_TRANSFER=1 && cd .. && \
#   huggingface-cli login --token $hf_key && wandb login $wandb_key

# === Model Configuration ===
base_model: Columbidae/Qwen3-30B-A3B-Noisy
load_in_8bit: false
load_in_4bit: true
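# Note: 4-bit base weights plus the LoRA adapter below give a QLoRA-style setup;
# the quantized base stays frozen and only the adapter matrices are trained.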

# === HF Configuration === 
hub_model_id: ToastyPigeon/qwen3-30b-noised-iter1
hub_strategy: "every_save"

# === Training Setup ===
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 2
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
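# Effective batch: micro_batch_size (4) x gradient_accumulation_steps (2) = 8
# packed sequences per optimizer step, i.e. up to 8 x 8192 = 65,536 tokens.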

# === Evaluation ===
val_set_size: 300
evals_per_epoch: 10
#eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: true
#eval_strategy: "no"

# === LoRA Configuration ===
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 32
lora_dropout: 0
lora_target_linear: 
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
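# With lora_alpha == lora_r, the LoRA scaling factor alpha/r is 1.0, so the
# adapter update W' = W + (alpha/r) * B A is applied at full strength.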

#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true

# === Hyperparameter Configuration ===
#optimizer: apollo_adamw_layerwise
optimizer: paged_adamw_8bit
# Apollo-mini configuration:
#optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
# Regular Apollo configuration:
# optim_args: 
#optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_steps: 0
#warmup_ratio: 0.05
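# rex (Reflected EXponential) holds the LR near its peak longer than cosine and
# decays sharply near the end; with warmup_steps: 0, training starts at the full 1e-5.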


# === Data Configuration ===
#chat_template: jinja
#chat_template_jinja: "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}"
#special_tokens:
#  eos_token: "<end_of_turn>"
shuffle_merged_datasets: true
datasets:
  - path: ToastyPigeon/mixed-data-for-qwen
    type: chat_template
    data_files: mixed_data_for_qwen_part1.json
    
dataset_prepared_path: last_run_prepared
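# type: chat_template formats each example with the tokenizer's chat template;
# the commented-out Gemma-style jinja template above is disabled and unused here.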


# === Plugins ===
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# === Hardware Optimization ===
gradient_checkpointing: true
#gradient_checkpointing_kwargs:
#  use_reentrant: true
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
#liger_fused_linear_cross_entropy: true
#unsloth_cross_entropy_loss: true
cut_cross_entropy: true
# Only if using multiple GPUs:
#deepspeed: axolotl/deepspeed_configs/zero2.json
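# The liger_* flags swap in fused Triton kernels for RoPE, RMSNorm, and the GLU
# activation; cut_cross_entropy computes the LM loss without materializing the
# full [tokens x vocab] logits tensor, cutting peak VRAM.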

# === Wandb Tracking ===
wandb_project: Qwen3MoE
# wandb_entity: [WANDB_ENTITY]
# wandb_name: [WANDB_RUN_NAME]

# === Checkpointing ===
saves_per_epoch: 10
save_total_limit: 1
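# Ten checkpoints per epoch are written, but save_total_limit keeps only the
# most recent one on disk; hub_strategy: every_save still pushes each save.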

# === Advanced Settings ===
output_dir: ./ckpts
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
save_safetensors: true
logging_steps: 1
gc_steps: 10
seed: 69

qwen3-30b-noised-iter1

This model is a fine-tuned version of Columbidae/Qwen3-30B-A3B-Noisy on the ToastyPigeon/mixed-data-for-qwen dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6300
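Since this repo holds a PEFT LoRA adapter for the 4-bit-quantized base model (per the config above), here is a minimal usage sketch. The quantization and generation settings are illustrative assumptions, not values published with this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Columbidae/Qwen3-30B-A3B-Noisy"
adapter_id = "ToastyPigeon/qwen3-30b-noised-iter1"

# Load the base model in 4-bit, matching load_in_4bit: true from the config.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
# Attach the LoRA adapter from this repo on top of the frozen base.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```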

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 69
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • num_epochs: 1.0
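The config above requests axolotl's `lr_scheduler: rex` rather than the cosine label reported in this autogenerated metadata. Below is a minimal sketch of the REX decay curve, assuming the schedule from "REX: Revisiting Budgeted Training with an Improved Schedule"; the function and step count are illustrative, not axolotl's implementation:

```python
def rex_lr(step: int, total_steps: int, max_lr: float = 1e-5) -> float:
    """REX keeps the LR near max_lr longer than cosine, then drops sharply."""
    p = step / max(total_steps, 1)          # training progress in [0, 1]
    return max_lr * (1 - p) / (1 - p / 2)   # equals max_lr * 2(1-p)/(2-p)

# With roughly 285 optimizer steps (extrapolated from the results table below):
for s in (0, 142, 285):
    print(s, rex_lr(s, 285))  # 1e-05 at the start, ~6.7e-06 midway, 0.0 at the end
```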

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|--------------:|-------:|-----:|----------------:|
| 0.7597        | 0.0035 | 1    | 0.8862          |
| 0.9744        | 0.1019 | 29   | 0.7604          |
| 0.8101        | 0.2039 | 58   | 0.6862          |
| 0.7025        | 0.3058 | 87   | 0.6667          |
| 0.6058        | 0.4077 | 116  | 0.6552          |
| 0.5499        | 0.5097 | 145  | 0.6466          |
| 0.494         | 0.6116 | 174  | 0.6404          |
| 0.6           | 0.7135 | 203  | 0.6358          |
| 0.7872        | 0.8155 | 232  | 0.6325          |
| 0.7281        | 0.9174 | 261  | 0.6300          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.0.dev0
  • Pytorch 2.5.1+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1