---
extra_gated_prompt: >
  By accessing this dataset, you agree to comply with the original BEIR/MSMARCO
  license, which permits usage for academic purposes only. We disclaim any
  responsibility for copyright issues.
license: bigscience-openrail-m
language:
  - en
---

# ListT5-train-data

The dataset I used to train the ListT5 models.

## License

This dataset adheres to the original BEIR/MSMARCO license, allowing usage solely for academic purposes. We hold no responsibility for any copyright issues.

## Terms of Use

By accessing this dataset, you agree to the following terms:

- The dataset is to be used exclusively for academic purposes.
- We are not liable for any copyright issues arising from the use of this dataset.

## Dataset Structure

*(Figure: overview of the dataset structure.)*
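
To get a first look at the records, something like the following should work with the `datasets` library. This is a minimal sketch: the repository id is inferred from this page, and the split and field names depend on the actual file layout, so verify them against the repository files.

```python
from datasets import load_dataset

# A minimal sketch, assuming the data files in this repository can be
# auto-detected by `load_dataset`; the split and field names printed
# below depend on the actual file layout, which should be verified.
ds = load_dataset("Soyoung97/ListT5-train-data")

split = list(ds.keys())[0]   # e.g. "train", if auto-detection succeeds
print(ds)
print(ds[split][0])          # inspect one example's fields
```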

## Tips for training

I trained the ListT5 model for only 20k steps (20,000 steps) and then stopped early. Quoting the paper: "...As a result, we report the T5-base model trained for 20k steps with a learning rate of 1×10−4 and T5-3B for 3k steps with a learning rate of 1×10−5...". This amounts to roughly 0-1 epochs over the full data, so the model may not need to see the whole dataset during training. These settings are sketched below.
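
For reference, here is a minimal sketch of the optimizer settings quoted above, written with Hugging Face `transformers`. A plain `T5ForConditionalGeneration` is used as a stand-in; the actual ListT5 model uses a fusion-in-decoder architecture, and the batch size, saving, and logging values are assumptions rather than values from the paper.

```python
from transformers import (
    AutoTokenizer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)

# Settings quoted above for T5-base; for T5-3B the paper uses
# max_steps=3_000 and learning_rate=1e-5 instead.
model_name = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

args = Seq2SeqTrainingArguments(
    output_dir="./listt5-base",
    max_steps=20_000,               # early exit: roughly 0-1 epochs of data
    learning_rate=1e-4,
    per_device_train_batch_size=8,  # assumption; batch size is not given here
    save_steps=5_000,               # assumption
    logging_steps=100,              # assumption
)
```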

## References

If you find this dataset & source code useful, please consider citing our paper:

```bibtex
@misc{yoon2024listt5listwisererankingfusionindecoder,
      title={ListT5: Listwise Reranking with Fusion-in-Decoder Improves Zero-shot Retrieval},
      author={Soyoung Yoon and Eunbi Choi and Jiyeon Kim and Hyeongu Yun and Yireun Kim and Seung-won Hwang},
      year={2024},
      eprint={2402.15838},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2402.15838},
}
```

## Contact

For further inquiries, please contact: