Sentence Similarity

Tags: sentence-transformers, Safetensors, Transformers, French, bilingual, feature-extraction, sentence-embedding, mteb, custom_code, Eval Results (legacy)
Instructions for using Lajavaness/bilingual-document-embedding with libraries, inference providers, notebooks, and local apps. Use the options below to get started.
- Libraries
- sentence-transformers
How to use Lajavaness/bilingual-document-embedding with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lajavaness/bilingual-document-embedding", trust_remote_code=True)

sentences = [
    "C'est une personne heureuse",
    "C'est un chien heureux",
    "C'est une personne très heureuse",
    "Aujourd'hui est une journée ensoleillée",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
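As a quick follow-up, here is a minimal retrieval sketch built on the same API: encode a query and rank the sentences above by similarity. The query string and the argmax step are illustrative assumptions, not part of the model card.

```python
import torch

# Hypothetical query (assumption): find the sentence closest in meaning
query_embedding = model.encode(["Quelqu'un de content"])

# Reuses `model`, `sentences`, and `embeddings` from the snippet above
scores = model.similarity(query_embedding, embeddings)  # shape [1, 4]
best = torch.argmax(scores, dim=1).item()
print(sentences[best], float(scores[0, best]))
```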
- Transformers

How to use Lajavaness/bilingual-document-embedding with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Lajavaness/bilingual-document-embedding", trust_remote_code=True, dtype="auto")
```
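The snippet above only loads the encoder. Below is a minimal sketch of producing sentence embeddings from raw Transformers outputs; mean pooling over the attention mask is an assumption here, since the repository's custom code and sentence-transformers configuration define the model's actual pooling.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Lajavaness/bilingual-document-embedding"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

sentences = ["C'est une personne heureuse", "C'est un chien heureux"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean pooling over non-padding tokens (assumption; check the repo's pooling config)
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # [2, hidden_size]
```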
- Notebooks
- Google Colab
- Kaggle
tokenizer_config.json:

```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "250001": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "max_length": 500,
  "model_max_length": 8192,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "sp_model_kwargs": {},
  "stride": 0,
  "tokenizer_class": "XLMRobertaTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "<unk>"
}
```
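The config above describes an XLM-RoBERTa tokenizer with model_max_length set to 8192 and right-side truncation. Below is a minimal sketch of loading it and inspecting those settings; the example document text is an illustrative assumption.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lajavaness/bilingual-document-embedding", trust_remote_code=True)

print(tokenizer.model_max_length)    # 8192, per the config above
print(tokenizer.special_tokens_map)  # <s>, </s>, <unk>, <pad>, <mask>

# Long documents are truncated on the right ("truncation_side": "right")
batch = tokenizer(["Un document très long ..."], truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)
```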