arXiv:2205.04733

From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective

Published on May 10, 2022
Authors:

Abstract

AI-generated summary

Both sparse representation learning and dense neural retrievers benefit from training improvements such as distillation and hard-negative mining, leading to state-of-the-art performance in in-domain and zero-shot scenarios.

Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of training examples, while still relying on the same backbone architecture. Meanwhile, sparse representation learning built on traditional inverted indexing techniques has seen growing interest, inheriting desirable IR priors such as explicit lexical matching. While some architectural variants have been proposed, less effort has been put into the training of such models. In this work, we build on SPLADE, a sparse expansion-based retriever, and show to what extent it can benefit from the same training improvements as dense models, by studying the effects of distillation, hard-negative mining, and Pre-trained Language Model initialization. We further study the link between effectiveness and efficiency in in-domain and zero-shot settings, leading to state-of-the-art results in both scenarios for sufficiently expressive models.
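To make the abstract's moving parts concrete, here is a minimal sketch, assuming PyTorch and Hugging Face Transformers, of a SPLADE-style term weighting (max-pooled, log-saturated MLM logits over the vocabulary) together with a MarginMSE-style distillation loss, one common choice for distilling a cross-encoder teacher with mined hard negatives. The checkpoint name and function names are illustrative, and SPLADE's sparsity (FLOPS) regularizer is omitted; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative PLM initialization; the paper studies the effect of this choice.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

def splade_rep(texts):
    """Sparse |V|-dim vectors: w_j = max_i log(1 + ReLU(logit_ij)), padding masked out."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits                              # (batch, seq_len, |V|)
    weights = torch.log1p(torch.relu(logits))                   # log-saturation over MLM logits
    weights = weights * batch["attention_mask"].unsqueeze(-1)   # ignore padded positions
    return weights.max(dim=1).values                            # max pooling over tokens

def margin_mse(q, d_pos, d_neg, t_pos, t_neg):
    """Distillation: match the student's pos-neg score margin to the teacher's margin."""
    student_margin = (q * d_pos).sum(-1) - (q * d_neg).sum(-1)  # dot-product ranking scores
    return F.mse_loss(student_margin, t_pos - t_neg)

# Toy usage: score one passage against a query with a dot product.
with torch.no_grad():
    q = splade_rep(["what causes rainbows"])
    d = splade_rep(["Rainbows appear when sunlight is refracted by water droplets."])
    print((q * d).sum(-1))
```

In the training setup the abstract describes, the teacher scores (t_pos, t_neg) would come from a cross-encoder scoring query-passage pairs, and the hard negatives from a previously trained retriever; the regularizer that keeps the expansions sparse is left out here for brevity.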

