---
license: apache-2.0
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Finetune-RAG Model Checkpoints
This repository contains model checkpoints from the Finetune-RAG project, which aims to tackle hallucination in retrieval-augmented LLMs. The checkpoints here were saved at training steps 12, 14, 16, 18, and 20 during baseline-format fine-tuning of Llama-3.1-8B-Instruct on the Finetune-RAG dataset.
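Since these are fine-tuned Llama-3.1-8B-Instruct checkpoints, they can presumably be loaded with the Hugging Face `transformers` library like any causal LM checkpoint. A minimal sketch, assuming the checkpoints are stored in the standard `save_pretrained` layout; the path below is a placeholder, not a confirmed identifier:

```python
# Hypothetical loading sketch for one of the saved checkpoints.
# "path/to/checkpoint-20" is a placeholder: substitute the actual
# checkpoint directory (or Hub repo revision) you downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "path/to/checkpoint-20"  # placeholder path

def load_checkpoint(path: str):
    """Load the fine-tuned model and its tokenizer from a checkpoint directory."""
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_checkpoint(CHECKPOINT)
```

The `if __name__ == "__main__":` guard keeps the (large) model download from triggering on import, so the helper can be reused from other scripts.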
## Paper & Citation

If you use these checkpoints, please cite the paper:
```bibtex
@misc{lee2025finetuneragfinetuninglanguagemodels,
      title={Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation},
      author={Zhan Peng Lee and Andre Lin and Calvin Tan},
      year={2025},
      eprint={2505.10792},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.10792},
}
```