arXiv:2602.15547

jina-embeddings-v5-text: Task-Targeted Embedding Distillation

Published on Feb 17 · Submitted by Han Xiao on Feb 18

AI-generated summary

Compact text embedding models are developed through a training approach that combines distillation with contrastive loss, achieving state-of-the-art performance while supporting long-context sequences and efficient quantization.

Abstract

Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or purely distillation-based training. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, match or exceed the state of the art for models of similar size. The jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available, and we hope they will inspire further advances in embedding model development.
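To make the training recipe concrete, below is a minimal sketch of how a distillation term and a task-specific contrastive term might be combined in a single objective. It assumes a frozen teacher embedding model, an MSE alignment to (projected) teacher embeddings as the distillation signal, and standard in-batch-negative InfoNCE as the contrastive signal; the paper's actual loss terms, weighting, and staging may differ, and all names here are illustrative.

```python
# Hypothetical sketch of joint distillation + contrastive training for a small
# embedding model. Not the paper's exact method: the distillation term (MSE to
# projected teacher embeddings), the InfoNCE formulation, and the weighting
# scheme are assumptions for illustration.
import torch
import torch.nn.functional as F

def info_nce(query_emb, doc_emb, temperature=0.05):
    """In-batch-negative contrastive loss: the i-th query should match the i-th doc."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature                      # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)

def distill_loss(student_emb, teacher_emb):
    """Align the student's embedding space with the (projected) teacher's."""
    return F.mse_loss(F.normalize(student_emb, dim=-1),
                      F.normalize(teacher_emb, dim=-1))

def combined_step(student, teacher, proj, batch, alpha=0.5):
    """One training step mixing distillation and contrastive objectives.

    `student`/`teacher` map token batches to embeddings; `proj` maps teacher
    embeddings into the student's (smaller) dimension. These are assumptions
    about the setup, not the paper's architecture.
    """
    q_s = student(batch["query"])          # (B, d_student)
    d_s = student(batch["doc"])
    with torch.no_grad():                  # teacher is frozen
        q_t = teacher(batch["query"])      # (B, d_teacher)
        d_t = teacher(batch["doc"])
    l_contrastive = info_nce(q_s, d_s)
    l_distill = distill_loss(q_s, proj(q_t)) + distill_loss(d_s, proj(d_t))
    return alpha * l_distill + (1.0 - alpha) * l_contrastive
```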

Community

Han Xiao (paper author and submitter):

our 5th gen of jina embeddings
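Since the abstract highlights that the released embeddings stay robust under dimension truncation and binary quantization, here is a hypothetical usage sketch with sentence-transformers. The repository id "jinaai/jina-embeddings-v5-text-small", the 256-dim truncation, and the need for `trust_remote_code` are assumptions for illustration; check the model card for the actual id, supported dimensions, and loading flags.

```python
# Hypothetical usage of the released weights: truncate the embedding dimension
# at load time and binarize the resulting vectors. Repo id and dimension are
# assumed, not confirmed by the paper page.
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer(
    "jinaai/jina-embeddings-v5-text-small",  # assumed repo id
    truncate_dim=256,                        # keep only the leading dimensions
    trust_remote_code=True,                  # may or may not be required
)
emb = model.encode(["What is task-targeted embedding distillation?"])
binary_emb = quantize_embeddings(emb, precision="binary")  # 1 bit per dimension, packed
print(emb.shape, binary_emb.shape)
```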


Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 1