Model tags: Transformers · TensorBoard · Safetensors · encoder-decoder · text2text-generation · PROTAC · cheminformatics · Generated from Trainer
Instructions for using ailab-bio/PROTAC-Splitter-EncoderDecoder-lr_reduce-opt25 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ailab-bio/PROTAC-Splitter-EncoderDecoder-lr_reduce-opt25 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ailab-bio/PROTAC-Splitter-EncoderDecoder-lr_reduce-opt25")
model = AutoModelForSeq2SeqLM.from_pretrained("ailab-bio/PROTAC-Splitter-EncoderDecoder-lr_reduce-opt25")
```

- Notebooks
- Google Colab
- Kaggle
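
Once loaded, the tokenizer and model can be run through the standard `generate` API. A minimal inference sketch, assuming the model takes a PROTAC SMILES string as input and emits the predicted substructures as text — the input SMILES below is a placeholder, not a real PROTAC:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ailab-bio/PROTAC-Splitter-EncoderDecoder-lr_reduce-opt25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder SMILES; substitute a full PROTAC SMILES string here.
protac_smiles = "CC(C)CO"
inputs = tokenizer(protac_smiles, return_tensors="pt")

# Greedy decoding; adjust max_new_tokens for longer molecules.
output_ids = model.generate(**inputs, max_new_tokens=256)
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(decoded)
```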
File details:
- Xet hash: 3bcaf71784317d9ab09de0dc8ad600a8835e3ed33d5e62c13b774aadb3b66b15
- Size of remote file: 815 MB
- SHA256: a1de088787c6562e2c047fc30aa615a3e94e3f8829a292311302687be6199d81
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.