---
library_name: transformers
license: other
base_model: /home/bl3615/data/Goedel-Prover-SFT
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: dpo_dpo_lean_0_b0.03_f0_lr5e-6_e2
  results: []
---

# dpo_dpo_lean_0_b0.03_f0_lr5e-6_e2

This model is a DPO fine-tuned version of Goedel-Prover-SFT (loaded from the local checkpoint `/home/bl3615/data/Goedel-Prover-SFT`) on the dpo_lean_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5357
- Rewards/chosen: -4.1002
- Rewards/rejected: -5.8239
- Rewards/accuracies: 0.7566
- Rewards/margins: 1.7237
- Logps/chosen: -209.7086
- Logps/rejected: -266.5144
- Logits/chosen: -13.4579
- Logits/rejected: -12.9861

(Definitions of the Rewards/* metrics are sketched at the end of this card.)

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32 (2 per device × 4 devices × 4 accumulation steps)
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0

(An approximate re-creation of this configuration is sketched at the end of this card.)

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/chosen | Logps/rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:------------:|:--------------:|:-------------:|:---------------:|
| 0.5867        | 0.5329 | 500  | 0.5393          | -1.7260        | -2.3840          | 0.7138             | 0.6580          | -130.5684    | -151.8518      | -4.1731       | -3.9948         |
| 0.1708        | 1.0650 | 1000 | 0.5134          | -4.1008        | -5.5590          | 0.7533             | 1.4582          | -209.7289    | -257.6841      | -11.3604      | -10.9174        |
| 0.1112        | 1.5979 | 1500 | 0.5428          | -4.2436        | -5.9501          | 0.7599             | 1.7065          | -214.4901    | -270.7217      | -14.0546      | -13.5623        |

### Framework versions

- Transformers 4.48.2
- PyTorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
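
## Metric definitions

The Rewards/* columns follow the usual DPO logging convention, in which the implicit reward of a completion is the β-scaled log-probability ratio between the policy and the frozen reference model (here β = 0.03 is an assumption, read off the `b0.03` tag in the run name):

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\text{ref}}(y \mid x) \right)
$$

Rewards/chosen and Rewards/rejected are the mean implicit rewards of the preferred and dispreferred completions, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs in which the chosen completion earns the higher implicit reward. The training loss is the standard DPO objective:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma \big( r_\theta(x, y_w) - r_\theta(x, y_l) \big) \right]
$$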
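
## Reproducing the configuration

This card was generated by LLaMA-Factory, and the exact launch command is not recorded here. As a rough, unofficial equivalent, the hyperparameters above map onto TRL's `DPOTrainer` as sketched below; the dataset loading is a placeholder (dpo_lean_0 is not published with this card), and `beta=0.03` is inferred from the run name rather than stated in the card.

```python
# Unofficial sketch: this card's hyperparameters expressed via TRL's DPOTrainer.
# The original run used LLaMA-Factory; paths and dataset loading are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_path = "/home/bl3615/data/Goedel-Prover-SFT"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(model_path)
ref_model = AutoModelForCausalLM.from_pretrained(model_path)  # frozen DPO reference
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Placeholder: a preference dataset with "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("json", data_files="dpo_lean_0.json")["train"]

config = DPOConfig(
    output_dir="dpo_dpo_lean_0_b0.03_f0_lr5e-6_e2",
    beta=0.03,                      # assumed from the run name (b0.03)
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # 2 per device × 4 GPUs × 4 steps = 32 effective
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```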
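
## How to use

A minimal generation example with the `transformers` API. The model path below is a placeholder (this card only records a local path), and since Goedel-Prover-SFT targets Lean 4 proof generation, the prompt format is likewise an assumption; substitute the template used during training.

```python
# Minimal usage sketch; the model path and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "dpo_dpo_lean_0_b0.03_f0_lr5e-6_e2"  # placeholder: local dir or hub id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# A Lean 4 proof-completion prompt as an illustrative input.
prompt = "theorem add_comm' (a b : Nat) : a + b = b + a := by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```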