How to use Qwen/Qwen3-Embedding-8B with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
How to use Qwen/Qwen3-Embedding-8B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="Qwen/Qwen3-Embedding-8B")
```
```python
# Load model directly. Qwen3-Embedding-8B is an embedding model, so load it
# with AutoModel (not AutoModelForCausalLM) to get hidden states for pooling.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-8B")
model = AutoModel.from_pretrained("Qwen/Qwen3-Embedding-8B")
```
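Loading the model directly only gives you per-token hidden states; to get one embedding per input, Qwen3-Embedding's reported usage applies last-token pooling (the hidden state of each sequence's final non-padding token). A minimal sketch of that pooling step with dummy arrays, assuming right padding where the attention mask marks real tokens with 1 (`last_token_pool` is an illustrative helper name, not a library function):

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Select the hidden state of the last non-padding token per sequence.

    hidden_states: (batch, seq_len, hidden_dim)
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    Assumes right padding; left-padded batches would need the last column instead.
    """
    last_indices = attention_mask.sum(axis=1) - 1            # index of final real token
    batch_indices = np.arange(hidden_states.shape[0])
    return hidden_states[batch_indices, last_indices]        # (batch, hidden_dim)

# Dummy stand-ins for model outputs and tokenizer masks:
hidden = np.random.default_rng(0).standard_normal((2, 5, 8))
mask = np.array([[1, 1, 1, 0, 0],   # sequence 1: 3 real tokens, 2 padding
                 [1, 1, 1, 1, 1]])  # sequence 2: all 5 tokens real
pooled = last_token_pool(hidden, mask)
print(pooled.shape)  # (2, 8)
```

With the real model, `hidden` would be `model(**inputs).last_hidden_state` and `mask` would be `inputs["attention_mask"]`; the pooled vectors are then typically L2-normalized before computing similarities.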
How to make the model generate a 32-dimensional vector
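The Qwen3-Embedding models support user-defined output dimensions via Matryoshka Representation Learning (MRL), so the full embedding can be cut down to a smaller size such as 32. In sentence-transformers this is exposed through the `truncate_dim` argument, e.g. `SentenceTransformer("Qwen/Qwen3-Embedding-8B", truncate_dim=32)`. The operation itself is just "keep the first k components, then re-normalize"; a sketch on a dummy full-size vector (`truncate_embedding` is an illustrative helper, and 4096 is assumed as the full embedding width):

```python
import numpy as np

def truncate_embedding(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and L2-normalize the result (MRL-style truncation)."""
    truncated = embedding[..., :dim]
    norm = np.linalg.norm(truncated, axis=-1, keepdims=True)
    return truncated / norm

# Stand-in for a full embedding returned by model.encode(...):
full = np.random.default_rng(0).standard_normal(4096)
small = truncate_embedding(full, 32)
print(small.shape)  # (32,)
```

Because the model was trained with MRL, these truncated vectors remain usable for similarity search, at some cost in retrieval quality compared with the full dimension.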