Sentdex/GPyT. Tags: Text Generation, Transformers, PyTorch, TensorFlow, gpt2, code, GPyT, code generator, text-generation-inference
Instructions to use Sentdex/GPyT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Sentdex/GPyT with Transformers:
```py
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sentdex/GPyT")
```

```py
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelForCausalLM.from_pretrained("Sentdex/GPyT")
```
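For example, a quick test of the pipeline above. This is a hedged sketch: the prompt is illustrative, and the `<N>` handling follows the model card's newline convention described further down.

```py
# Illustrative only: GPyT encodes newlines as <N>, so encode the prompt
# the same way and decode the output back to real newlines.
out = pipe("def add(a, b):<N>", max_new_tokens=32)
print(out[0]["generated_text"].replace("<N>", "\n"))
```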
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Sentdex/GPyT with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sentdex/GPyT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sentdex/GPyT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
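The server can also be called from Python. A minimal sketch, assuming the vLLM server above is running on localhost:8000 and the `openai` client package is installed:

```py
# Sketch: query the OpenAI-compatible completions endpoint served by vLLM.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server needs no real key

resp = client.completions.create(
    model="Sentdex/GPyT",
    prompt="import",
    max_tokens=64,
    temperature=0.5,
)
# GPyT encodes newlines as <N>; convert them back for readable code.
print(resp.choices[0].text.replace("<N>", "\n"))
```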
- SGLang
How to use Sentdex/GPyT with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Sentdex/GPyT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sentdex/GPyT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
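The same OpenAI-compatible endpoint can be hit from plain Python. A minimal sketch, assuming the SGLang server above is running on localhost:30000 and `requests` is installed:

```py
# Sketch: POST to SGLang's OpenAI-compatible completions endpoint.
import requests

payload = {
    "model": "Sentdex/GPyT",
    "prompt": "def fibonacci(n):<N>",  # <N> is GPyT's newline encoding
    "max_tokens": 128,
    "temperature": 0.5,
}
r = requests.post("http://localhost:30000/v1/completions", json=payload, timeout=60)
r.raise_for_status()
print(r.json()["choices"][0]["text"].replace("<N>", "\n"))
```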
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Sentdex/GPyT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sentdex/GPyT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Sentdex/GPyT with Docker Model Runner:
```sh
docker model run hf.co/Sentdex/GPyT
```
---
language: code
license: mit
tags:
- Code
- GPyT
- code generator
---
GPyT is a GPT-2 model trained from scratch (not fine-tuned) on Python code from GitHub. The training data was ~80GB of pure Python code, and the current GPyT model is a mere 2 epochs through this data, so it may benefit greatly from continued training and/or fine-tuning.
Input to the model is code, up to the context length of 1024 tokens, with newlines replaced by `<N>`.
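Because inputs are capped at 1024 tokens, longer files have to be truncated before generation. Below is a minimal sketch of one way to do that; the helper name and the keep-the-last-1024-tokens strategy are assumptions, not part of the model card:

```py
# Hypothetical helper: encode source code for GPyT, keeping only the most
# recent max_len tokens. Truncating from the left (keeping the end of the
# file) is an assumption; the model card does not prescribe a strategy.
def encode_for_gpyt(code, tokenizer, max_len=1024):
    converted = code.replace("\n", "<N>")
    ids = tokenizer.encode(converted, return_tensors="pt")
    return ids[:, -max_len:]
```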
Here's a quick example of using this model:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelForCausalLM.from_pretrained("Sentdex/GPyT")

# copy and paste some code in here
inp = """import"""

# the model was trained with newlines encoded as <N>
newlinechar = "<N>"
converted = inp.replace("\n", newlinechar)

tokenized = tokenizer.encode(converted, return_tensors="pt")
resp = model.generate(tokenized)

# decode, then restore real newlines
decoded = tokenizer.decode(resp[0])
reformatted = decoded.replace("<N>", "\n")
print(reformatted)
```
Should produce:

```
import numpy as np
import pytest
import pandas as pd<N
```
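The trailing `<N` appears to be a newline token cut off because `generate` stops after a short default length. To sample longer completions, the generation parameters can be tuned; the values below are illustrative assumptions, not from the model card:

```py
# Illustrative sampling settings; all values are assumptions.
resp = model.generate(
    tokenized,
    max_length=100,                        # total length, up to the 1024-token context
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.5,
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token by default
)
print(tokenizer.decode(resp[0]).replace("<N>", "\n"))
```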
This model does a ton more than just imports, however. For a bunch of examples and a better understanding of the model's capabilities, see: https://pythonprogramming.net/GPT-python-code-transformer-model-GPyT/
Considerations:

1. This model is intended for educational and research use only. Do not trust model outputs.
2. The model is highly likely to regurgitate code almost exactly as it saw it. It's up to you to determine licensing if you intend to actually use the generated code.
3. All Python code was blindly pulled from GitHub. This means the included code is both Python 2 and 3, among other more subtle differences, such as indentation being 2 spaces in some cases and 4 in others, and more non-homologous things.
4. Along with the above, this means the generated code could wind up doing or suggesting just about anything. Run generated code at your own risk; it could be *anything*.