base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# Finetune-RAG Model Checkpoints

This repository contains model checkpoints from the [Finetune-RAG](https://github.com/Pints-AI/Finetune-Bench-RAG) project, which aims to tackle hallucination in retrieval-augmented LLMs. Checkpoints here are saved at steps 12, 14, 16, 18, and 20 from baseline-format fine-tuning of Llama-3.1-8B-Instruct on Finetune-RAG.

## Paper & Citation

```latex
@misc{lee2025finetuneragfinetuninglanguagemodels,
      title={Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation},
      author={Zhan Peng Lee and Andre Lin and Calvin Tan},
      year={2025},
      eprint={2505.10792},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.10792},
}
```