Instructions for using openai/privacy-filter with libraries, inference providers, notebooks, and local apps.
How to use openai/privacy-filter with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="openai/privacy-filter")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("openai/privacy-filter")
model = AutoModelForTokenClassification.from_pretrained("openai/privacy-filter")
```
How to use openai/privacy-filter with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('token-classification', 'openai/privacy-filter');
```
Discussion #14: Share onnx conversion script, opened by jalbrethsen-highflame
Can you share the script used to convert this model to ONNX? It would be very useful for anyone fine-tuning this model, since existing libraries do not support ONNX conversion for it yet.
It should be supported by the standard ONNX export path; it's just a model in HF format.
We didn't run the conversion ourselves, but I can check.
Last I checked, Optimum does not support this model yet (it's what I usually use for MoE models); exporting with the standard ONNX path converts it to a dense model, so you lose the speed benefits of the sparse MoE routing.