Add pipeline tag text-ranking
This PR adds `pipeline_tag: text-ranking` to the model card metadata to improve the model's discoverability on the Hugging Face Hub. The tag ensures that the model appears in relevant search results for researchers working on text-ranking tasks.
`README.md` (changed):

```diff
@@ -1,21 +1,22 @@
 ---
+base_model:
+- BAAI/bge-m3
 library_name: treehop-rag
 license: mit
+pipeline_tag: text-ranking
 tags:
 - Information Retrieval
 - Retrieval-Augmented Generation
 - model_hub_mixin
 - multi-hop question answering
 - pytorch_model_hub_mixin
-base_model:
-- BAAI/bge-m3
 ---
 
-
 # TreeHop: Generate and Filter Next Query Embeddings Efficiently for Multi-hop Question Answering
 
 
 [](https://arxiv.org/abs/2504.20114)
+[](https://huggingface.co/allen-li1231/treehop-rag)
 [](https://img.shields.io/badge/license-MIT-blue)
 [](https://www.python.org/downloads/)
 
@@ -36,6 +37,7 @@ base_model:
 ## Introduction
 TreeHop is a lightweight, embedding-level framework designed to address the computational inefficiencies of traditional recursive retrieval paradigm in the realm of Retrieval-Augmented Generation (RAG). By eliminating the need for iterative LLM-based query rewriting, TreeHop significantly reduces latency while maintaining state-of-the-art performance. It achieves this through dynamic query embedding updates and pruning strategies, enabling a streamlined "Retrieve-Embed-Retrieve" workflow.
 
+
 
 ## Why TreeHop for Multi-hop Retrieval?
 - **Handle Complex Queries**: Real-world questions often require multiple hops to retrieve relevant information, which traditional retrieval methods struggle with.
@@ -43,6 +45,7 @@ TreeHop is a lightweight, embedding-level framework designed to address the comp
 - **Speed**: 99% faster inference compared to iterative LLM approaches, ideal for industrial applications where response speed is crucial.
 - **Performant**: Maintains high recall with controlled number of retrieved passages, ensuring relevance without overwhelming the system.
 
+
 
 
 ## System Requirement
```
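For reference, the README's YAML front matter after this PR (reconstructed from the diff above; assuming no metadata fields beyond those shown) reads:

```yaml
---
base_model:
- BAAI/bge-m3
library_name: treehop-rag
license: mit
pipeline_tag: text-ranking
tags:
- Information Retrieval
- Retrieval-Augmented Generation
- model_hub_mixin
- multi-hop question answering
- pytorch_model_hub_mixin
---
```

The Hub reads `pipeline_tag` from this block to list the model under the corresponding task filter, here Text Ranking.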