Instructions for using goldfish-models/kaa_latn_full with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use goldfish-models/kaa_latn_full with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="goldfish-models/kaa_latn_full")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/kaa_latn_full")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/kaa_latn_full")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use goldfish-models/kaa_latn_full with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "goldfish-models/kaa_latn_full"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/kaa_latn_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
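Once the server is up, you can also query the same OpenAI-compatible endpoint from Python instead of curl (a minimal sketch; assumes `pip install openai`, and "EMPTY" is a placeholder key that a default local vLLM server does not check):

```python
# Minimal sketch: call the local vLLM server started above via its
# OpenAI-compatible completions endpoint. Assumes `pip install openai`;
# the api_key is a placeholder, since a default local server ignores it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="goldfish-models/kaa_latn_full",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```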
- SGLang
How to use goldfish-models/kaa_latn_full with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "goldfish-models/kaa_latn_full" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/kaa_latn_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "goldfish-models/kaa_latn_full" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/kaa_latn_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use goldfish-models/kaa_latn_full with Docker Model Runner:
```bash
docker model run hf.co/goldfish-models/kaa_latn_full
```
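You can also pass a one-off prompt directly on the command line (assuming a Docker version with Model Runner enabled; the prompt string is only illustrative):

```bash
# One-shot generation; without a prompt argument, `docker model run`
# drops into an interactive session instead.
docker model run hf.co/goldfish-models/kaa_latn_full "Once upon a time,"
```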
kaa_latn_full
Goldfish is a suite of monolingual language models trained for 350 languages. This model is the Kara-Kalpak (Latin script) model trained on 165MB of data (all our data in the language), after accounting for an estimated byte premium of 1.23; content-matched text in Kara-Kalpak takes on average 1.23x as many UTF-8 bytes to encode as English. The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
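For concreteness, the byte-premium scaling works out as follows, using the raw and scaled dataset sizes listed under Model details below (a quick sketch; the 1.23 above is a rounded figure):

```python
# Byte-premium scaling: raw bytes divided by the byte premium gives the
# English-equivalent ("content-matched") dataset size. Values are taken
# from the Model details list below.
raw_mb = 202.78      # raw training text in MB
scaled_mb = 165.225  # byte-premium-scaled size in MB
byte_premium = raw_mb / scaled_mb
print(round(byte_premium, 3))  # ~1.227, reported above as 1.23
```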
Note: This language is available in Goldfish with other scripts (writing systems). See: kaa_cyrl.
Note: kaa_latn is an individual language code. It is not part of any macrolanguage code included in Goldfish (for script latn).
All training and hyperparameter details are in our paper, Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage is also available in this Google Colab: link
Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json. All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see the sample usage linked above and the sketch after the details list below)! Details for this model specifically:
- Architecture: gpt2
- Parameters: 124,770,816
- Maximum sequence length: 512 tokens
- Training text data (raw): 202.78MB
- Training text data (byte premium scaled): 165.225MB
- Training tokens: 38,767,104 (x10 epochs)
- Vocabulary size: 50000
- Compute cost: 1.97850849214464e+17 FLOPs or ~18.7 NVIDIA A6000 GPU hours
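As a concrete version of the [CLS] note above, here is a minimal generation sketch. This is an illustration, not the official sample usage: it assumes the tokenizer exposes [CLS] via `cls_token` and does not prepend it automatically, and the prompt string is only a placeholder.

```python
# Minimal sketch: manually prepend [CLS] before generating, per the note above.
# Assumes tokenizer.cls_token is defined and is NOT added automatically;
# check the tokenized ids for your install before relying on this.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/kaa_latn_full")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/kaa_latn_full")

prompt = "Bul bir"  # illustrative Kara-Kalpak (Latin-script) prompt
inputs = tokenizer(tokenizer.cls_token + prompt,
                   return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```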
Training datasets (percentages prior to deduplication):
- 97.52670%: MADLAD-400 (CommonCrawl)
- 1.65677%: Wikipedia 2023/08
- 0.81649%: Glot500, including Tatoeba and Wikipedia (Hugging Face)
- 0.00004%: Tatoeba
Citation
If you use this model, please cite:
```bibtex
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
  url={https://www.arxiv.org/abs/2408.10441},
}
```