ByT5: Towards a token-free future with pre-trained byte-to-byte models • Paper • arXiv:2105.13626 • Published May 28, 2021
Khmer Text Synthetic Collection • A collection of training data used for my CNN-Transformer • 2 items