Tags: Transformers, PyTorch, English, Chinese, bart, text2text-generation, GENIUS, conditional text generation, sketch-based text generation, data augmentation
How to use beyond/genius-base with Transformers:

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("beyond/genius-base")
model = AutoModelForSeq2SeqLM.from_pretrained("beyond/genius-base")
```
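GENIUS is a sketch-based generator: its input is a "sketch" made of key spans (keywords or phrases) joined by the BART `<mask>` token, and the model fills in the blanks to produce fluent text. A minimal helper for building such a sketch string (the helper name `make_sketch` and the example spans are illustrative, not part of the model's API):

```python
def make_sketch(spans, mask_token="<mask>"):
    """Join key spans with the mask token to form a GENIUS input sketch."""
    return f" {mask_token} ".join(spans)


# Build a sketch from a few key phrases; the model would expand it into a passage.
sketch = make_sketch(["machine learning", "my research interest", "data science"])
print(sketch)  # machine learning <mask> my research interest <mask> data science
```

The resulting string is what you would pass to the tokenizer before calling `model.generate`.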
tokenizer_config.json (434 Bytes):

```json
{
  "errors": "replace",
  "bos_token": "<s>",
  "eos_token": "</s>",
  "sep_token": "</s>",
  "cls_token": "<s>",
  "unk_token": "<unk>",
  "pad_token": "<pad>",
  "mask_token": "<mask>",
  "add_prefix_space": false,
  "trim_offsets": true,
  "model_max_length": 1024,
  "name_or_path": "../saved_models/bart-base-c4-realnewslike-4templates-passage-max15sents_2-sketch4/checkpoint-129375",
  "special_tokens_map_file": null,
  "tokenizer_class": "BartTokenizer"
}
```
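The config can be inspected with nothing but the standard library; a minimal sketch, where the dict literal below abridges the file to the fields actually read:

```python
import json

# Abridged contents of tokenizer_config.json (only the fields inspected below)
raw = (
    '{"bos_token": "<s>", "eos_token": "</s>", "mask_token": "<mask>", '
    '"model_max_length": 1024, "tokenizer_class": "BartTokenizer"}'
)
cfg = json.loads(raw)

print(cfg["tokenizer_class"])   # BartTokenizer (the model uses BART's tokenizer)
print(cfg["model_max_length"])  # 1024 (maximum input length in tokens)
print(cfg["mask_token"])        # <mask> (the token used to build GENIUS sketches)
```

`AutoTokenizer.from_pretrained` reads this file automatically, so manual parsing is only needed for inspection.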