Instructions to use cssupport/t5-small-awesome-text-to-sql with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cssupport/t5-small-awesome-text-to-sql with Transformers:
# Use a pipeline as a high-level helper.
# T5 is a sequence-to-sequence model, so the pipeline task is "text2text-generation".

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="cssupport/t5-small-awesome-text-to-sql")
```

# Load the model directly

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("cssupport/t5-small-awesome-text-to-sql")
model = AutoModelForSeq2SeqLM.from_pretrained("cssupport/t5-small-awesome-text-to-sql")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use cssupport/t5-small-awesome-text-to-sql with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "cssupport/t5-small-awesome-text-to-sql"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cssupport/t5-small-awesome-text-to-sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker

```shell
docker model run hf.co/cssupport/t5-small-awesome-text-to-sql
```
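Once the vLLM server above is running, the same OpenAI-compatible endpoint can be called from Python instead of curl. This is a minimal sketch, assuming a local server on port 8000; the `build_payload` and `complete` helper names are illustrative, not part of vLLM's API.

```python
import json
from urllib import request

VLLM_URL = "http://localhost:8000/v1/completions"
MODEL = "cssupport/t5-small-awesome-text-to-sql"


def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    # Same OpenAI-style request body as the curl call above.
    return {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,
    }


def complete(prompt: str) -> str:
    # Requires the `vllm serve` process above to be running.
    req = request.Request(
        VLLM_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]


# Example call (needs a running server):
# print(complete("tables:\nCREATE TABLE students (student_id VARCHAR)\nquery for: List all student ids."))
```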
- SGLang
How to use cssupport/t5-small-awesome-text-to-sql with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "cssupport/t5-small-awesome-text-to-sql" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cssupport/t5-small-awesome-text-to-sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "cssupport/t5-small-awesome-text-to-sql" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cssupport/t5-small-awesome-text-to-sql",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use cssupport/t5-small-awesome-text-to-sql with Docker Model Runner:
```shell
docker model run hf.co/cssupport/t5-small-awesome-text-to-sql
```
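Putting the Transformers snippet above to work end to end might look like the sketch below. The `tables:\n<schema>\nquery for: <question>` prompt format is an assumption based on this model's text-to-SQL examples, and the helper names are illustrative; check the model card for the exact convention.

```python
from transformers import pipeline

MODEL_ID = "cssupport/t5-small-awesome-text-to-sql"


def build_prompt(schema: str, question: str) -> str:
    # Assumed convention: CREATE TABLE statements, then the natural-language question.
    return f"tables:\n{schema}\nquery for: {question}"


def translate(schema: str, question: str) -> str:
    # Downloads the model on first call; cached afterwards.
    pipe = pipeline("text2text-generation", model=MODEL_ID)
    out = pipe(build_prompt(schema, question), max_new_tokens=128)
    return out[0]["generated_text"]


# Example (requires network on first run):
# print(translate("CREATE TABLE students (student_id VARCHAR)",
#                 "List the ids of all students."))
```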
Model outputs seem to be truncated
Thanks a lot for this model! By far the smallest text-to-SQL model on HF.
The output SQL seems to be truncated, though:
For the student example, the output is "SELECT student_id FROM students WHERE NOT student_id IN ("
I see this for other examples too.
Any thoughts on what could be going wrong?
Are you trying to run it on the Hugging Face model card? Yes, the Hugging Face inference widget truncates the output.
If you download the model and run it as in the example on the model card, it should not truncate the output. If it still does, let me know.