How to use ryandono/osgrep-coderank-q8 with Transformers.js:

```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate a feature-extraction pipeline
const pipe = await pipeline('feature-extraction', 'ryandono/osgrep-coderank-q8');

// Compute a mean-pooled, L2-normalized embedding for a text or code snippet
const embedding = await pipe('function add(a, b) { return a + b; }', { pooling: 'mean', normalize: true });
```
osgrep-coderank-q8
This model is a quantized (Int8/Q8) export of nomic-ai/CodeRankEmbed for use with Transformers.js.
It is used as the primary dense retrieval model in osgrep.
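In dense retrieval, the query and each candidate code snippet are embedded as vectors, and candidates are ranked by cosine similarity to the query. The sketch below illustrates that ranking step with toy hand-written vectors standing in for real model embeddings; the `cosineSimilarity` and `rank` helpers are illustrative, not part of osgrep or Transformers.js:

```js
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate vectors against a query vector, highest similarity first.
function rank(queryVec, candidates) {
  return candidates
    .map((vec, i) => ({ index: i, score: cosineSimilarity(queryVec, vec) }))
    .sort((x, y) => y.score - x.score);
}

// Toy 3-dimensional vectors standing in for real embeddings.
const query = [1, 0, 0];
const docs = [[0, 1, 0], [0.9, 0.1, 0], [0.5, 0.5, 0]];
console.log(rank(query, docs)[0].index); // → 1, the candidate closest to the query
```

Note that if embeddings are already L2-normalized (e.g. via `normalize: true` in the pipeline call), cosine similarity reduces to a plain dot product.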
Original Model License
This model is a derivative work of nomic-ai/CodeRankEmbed, licensed under the MIT License.
Please refer to the original model card for citation and full license details.
Model tree for ryandono/osgrep-coderank-q8
- Base model: Snowflake/snowflake-arctic-embed-m-long
- Finetuned from base: nomic-ai/CodeRankEmbed