Instructions for using alexjercan/codebert-base-buggy-token-classification with the Transformers library, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use alexjercan/codebert-base-buggy-token-classification with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="alexjercan/codebert-base-buggy-token-classification")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("alexjercan/codebert-base-buggy-token-classification")
model = AutoModelForTokenClassification.from_pretrained("alexjercan/codebert-base-buggy-token-classification")
```

- Notebooks
- Google Colab
- Kaggle
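Once loaded, the pipeline returns one prediction per token, each a dict with keys such as `word`, `entity`, `score`, `start`, and `end`. A minimal sketch of post-processing those predictions into character spans flagged as buggy — the label string `"buggy"` and the sample predictions below are illustrative assumptions, not taken from the model's actual label set:

```python
# Collect character spans whose predicted label marks the token as buggy.
# The prediction dicts mirror the Transformers token-classification
# pipeline output format; the label name "buggy" is an assumption here.
def buggy_spans(predictions, label="buggy"):
    spans = []
    for pred in predictions:
        if pred["entity"] == label:
            # Extend the previous span when tokens touch or overlap.
            if spans and pred["start"] <= spans[-1][1]:
                spans[-1] = (spans[-1][0], pred["end"])
            else:
                spans.append((pred["start"], pred["end"]))
    return spans

# Hypothetical predictions for: source = "if (x = 0) return;"
sample = [
    {"word": "if", "entity": "ok", "score": 0.98, "start": 0, "end": 2},
    {"word": "x", "entity": "buggy", "score": 0.91, "start": 4, "end": 5},
    {"word": "=", "entity": "buggy", "score": 0.95, "start": 6, "end": 7},
]

print(buggy_spans(sample))  # → [(4, 5), (6, 7)]
```

The spans can then be mapped back onto the source string to highlight the suspect region.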
The tokenizer uses RoBERTa-style special tokens (its special tokens map):

```json
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "unk_token": "<unk>",
  "sep_token": "</s>",
  "pad_token": "<pad>",
  "cls_token": "<s>",
  "mask_token": {
    "content": "<mask>",
    "single_word": false,
    "lstrip": true,
    "rstrip": false,
    "normalized": false
  }
}
```