Instructions to use Data-Selection/PDS-1B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Data-Selection/PDS-1B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Data-Selection/PDS-1B")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Data-Selection/PDS-1B")
model = AutoModelForCausalLM.from_pretrained("Data-Selection/PDS-1B")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Data-Selection/PDS-1B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Data-Selection/PDS-1B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Data-Selection/PDS-1B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/Data-Selection/PDS-1B
```
- SGLang
How to use Data-Selection/PDS-1B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Data-Selection/PDS-1B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Data-Selection/PDS-1B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Data-Selection/PDS-1B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Data-Selection/PDS-1B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Data-Selection/PDS-1B with Docker Model Runner:
```shell
docker model run hf.co/Data-Selection/PDS-1B
```
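The vLLM and SGLang servers above both expose the same OpenAI-compatible `/v1/completions` route, so a single small client works against either one. A minimal sketch using only the Python standard library; the base URLs and sampling parameters mirror the curl examples above, and the helper names are ours, not part of any library:

```python
import json
import urllib.request


def build_completion_request(base_url, model, prompt, max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions request: (URL, JSON body bytes)."""
    url = f"{base_url}/v1/completions"
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return url, json.dumps(payload).encode("utf-8")


def complete(base_url, model, prompt):
    """POST the request to a running server and return the first completion's text."""
    url, body = build_completion_request(base_url, model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

With a server running, `complete("http://localhost:8000", "Data-Selection/PDS-1B", "Once upon a time,")` targets vLLM's default port; swap in port 30000 for SGLang.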
PDS-1B
PDS-1B is a 1B-parameter model with the Mistral architecture, pre-trained from scratch on data selected from the CommonCrawl (CC) split of RedPajama using the PDS framework.
The PDS framework is based on Pontryagin's maximum principle for optimal pre-training data selection; it not only enjoys strong theoretical support but also scales to training large language models.
Please refer to our paper for more details.
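For readers unfamiliar with the principle, here is the generic statement of Pontryagin's maximum principle for a discrete-time control problem (background only, in standard textbook notation; see the paper for the exact formulation PDS uses):

```latex
% Discrete-time optimal control: state x_t, control u_t,
% dynamics x_{t+1} = f(x_t, u_t), total cost J.
\begin{aligned}
&\min_{u_0,\dots,u_{T-1}} \; J = \sum_{t=0}^{T-1} c(x_t, u_t)
\quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t). \\
&\text{Hamiltonian: } H(x_t, u_t, \lambda_{t+1})
  = c(x_t, u_t) + \lambda_{t+1}^\top f(x_t, u_t). \\
&\text{Necessary conditions for an optimal } u_t^*: \\
&\quad x_{t+1} = f(x_t, u_t^*),
\qquad \lambda_t = \nabla_x c(x_t, u_t^*) + \big(\nabla_x f(x_t, u_t^*)\big)^\top \lambda_{t+1}, \\
&\quad u_t^* = \arg\min_{u_t} H(x_t, u_t, \lambda_{t+1}).
\end{aligned}
```

Roughly speaking, in PDS the model parameters play the role of the state, the data-selection weights the role of the control, and the downstream loss the role of the cost; the paper gives the precise correspondence.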
Overview of the theory:
Overview of the PDS framework:
Evaluation
PDS-selected data improves the performance of language models pre-trained from scratch and saves pre-training computation. The improvement persists as model size scales up.
Baseline
Citation
```bibtex
@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
```