arxiv:2602.12278

AttentionRetriever: Attention Layers are Secretly Long Document Retrievers

Published on Feb 12
Abstract

AttentionRetriever is a novel long document retrieval model that uses attention mechanisms and entity-based retrieval to create context-aware embeddings and determine retrieval scope, outperforming existing models while maintaining efficiency.

AI-generated summary

Retrieval-augmented generation (RAG) has been widely adopted to help large language models (LLMs) process tasks involving long documents. However, existing retrieval models are not designed for long document retrieval and fail to address several of its key challenges, including context-awareness, causal dependence, and scope of retrieval. In this paper, we propose AttentionRetriever, a novel long document retrieval model that leverages attention mechanisms and entity-based retrieval to build context-aware embeddings for long documents and to determine the scope of retrieval. Through extensive experiments, we find that AttentionRetriever outperforms existing retrieval models on long document retrieval datasets by a large margin while remaining as efficient as dense retrieval models.
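The summary does not spell out the architecture, but the core idea of attention-based, context-aware chunk embeddings with causal dependence can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names, the single self-attention layer over pre-computed chunk embeddings, and the causal mask (so each chunk only attends to earlier chunks) are not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def contextual_chunk_embeddings(chunk_embs):
    """Hypothetical: one self-attention pass over chunk embeddings so each
    chunk's vector reflects the document context that precedes it."""
    d = chunk_embs.shape[1]
    scores = chunk_embs @ chunk_embs.T / np.sqrt(d)
    # Causal mask: chunk i attends only to chunks 0..i (assumption,
    # motivated by the "causal dependence" challenge named in the abstract).
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf
    return softmax(scores, axis=-1) @ chunk_embs

def retrieve(query_emb, chunk_embs, top_k=2):
    # Score the query against context-aware embeddings instead of the
    # raw per-chunk embeddings used by plain dense retrieval.
    ctx = contextual_chunk_embeddings(chunk_embs)
    sims = ctx @ query_emb
    return np.argsort(-sims)[:top_k]

rng = np.random.default_rng(0)
chunks = rng.normal(size=(6, 8))   # 6 chunks, 8-dim embeddings (toy data)
query = rng.normal(size=8)
top = retrieve(query, chunks)
print(top)
```

Because the attention pass is a single matrix product over already-computed embeddings, query-time cost stays close to ordinary dense retrieval, consistent with the efficiency claim; how the real model incorporates entity-based retrieval to set the retrieval scope is not recoverable from the summary.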
