Instructions for using digitous/Janin-R with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use digitous/Janin-R with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="digitous/Janin-R")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("digitous/Janin-R")
model = AutoModelForCausalLM.from_pretrained("digitous/Janin-R")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use digitous/Janin-R with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "digitous/Janin-R"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "digitous/Janin-R",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/digitous/Janin-R
```
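The curl call above can also be made from Python. This is a minimal sketch, assuming the vLLM server from the previous step is running on localhost:8000 and that the `requests` package is installed; `build_completion_request` is a hypothetical helper for this example, not part of vLLM's API.

```python
import json

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body expected by the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request("digitous/Janin-R", "Once upon a time,")
body = json.dumps(payload)

# With the vLLM server running (see above), send the request:
# import requests
# r = requests.post(
#     "http://localhost:8000/v1/completions",
#     headers={"Content-Type": "application/json"},
#     data=body,
# )
# print(r.json()["choices"][0]["text"])
```

The same payload works against any OpenAI-compatible completions endpoint, including the SGLang server described below (on port 30000).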
- SGLang
How to use digitous/Janin-R with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "digitous/Janin-R" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "digitous/Janin-R",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "digitous/Janin-R" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "digitous/Janin-R",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use digitous/Janin-R with Docker Model Runner:
```shell
docker model run hf.co/digitous/Janin-R
```
Checkpoint shard index:

```json
{
  "metadata": {
    "total_size": 12219206136
  },
  "weight_map": {
    "lm_head.bias": "pytorch_model-00006-of-00006.bin",
    "lm_head.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.0.attn.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.attn.masked_bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.attn.out_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.ln_1.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.ln_1.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.mlp.fc_in.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.mlp.fc_in.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.mlp.fc_out.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.0.mlp.fc_out.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.masked_bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.out_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.ln_1.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.ln_1.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.mlp.fc_in.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.mlp.fc_in.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.mlp.fc_out.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.1.mlp.fc_out.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.10.attn.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.attn.masked_bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.attn.out_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.ln_1.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.ln_1.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.mlp.fc_in.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.mlp.fc_in.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.mlp.fc_out.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.10.mlp.fc_out.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.masked_bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.out_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.ln_1.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.ln_1.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.mlp.fc_in.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.mlp.fc_in.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.mlp.fc_out.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.11.mlp.fc_out.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.masked_bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.out_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.ln_1.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.ln_1.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.mlp.fc_in.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.mlp.fc_in.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.mlp.fc_out.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.12.mlp.fc_out.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.masked_bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.out_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.ln_1.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.ln_1.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.mlp.fc_in.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.mlp.fc_in.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.mlp.fc_out.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.13.mlp.fc_out.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.attn.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.attn.masked_bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.14.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.ln_1.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.ln_1.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.14.mlp.fc_in.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.14.mlp.fc_in.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.14.mlp.fc_out.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.14.mlp.fc_out.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.masked_bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.ln_1.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.ln_1.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.mlp.fc_in.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.mlp.fc_in.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.mlp.fc_out.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.15.mlp.fc_out.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.masked_bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.ln_1.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.ln_1.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.mlp.fc_in.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.mlp.fc_in.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.mlp.fc_out.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.16.mlp.fc_out.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.masked_bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.ln_1.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.ln_1.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.mlp.fc_in.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.mlp.fc_in.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.mlp.fc_out.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.17.mlp.fc_out.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.masked_bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.ln_1.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.ln_1.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.mlp.fc_in.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.mlp.fc_in.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.mlp.fc_out.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.18.mlp.fc_out.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.masked_bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.out_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.ln_1.bias": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.ln_1.weight": "pytorch_model-00004-of-00006.bin",
    "transformer.h.19.mlp.fc_in.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.19.mlp.fc_in.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.19.mlp.fc_out.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.19.mlp.fc_out.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.2.attn.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.attn.masked_bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.attn.out_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.ln_1.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.ln_1.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.mlp.fc_in.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.mlp.fc_in.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.mlp.fc_out.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.2.mlp.fc_out.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.20.attn.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.attn.masked_bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.attn.out_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.ln_1.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.ln_1.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.mlp.fc_in.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.mlp.fc_in.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.mlp.fc_out.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.20.mlp.fc_out.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.masked_bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.out_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.ln_1.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.ln_1.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.mlp.fc_in.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.mlp.fc_in.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.mlp.fc_out.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.21.mlp.fc_out.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.masked_bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.out_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.ln_1.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.ln_1.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.mlp.fc_in.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.mlp.fc_in.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.mlp.fc_out.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.22.mlp.fc_out.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.masked_bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.out_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.ln_1.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.ln_1.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.mlp.fc_in.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.mlp.fc_in.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.mlp.fc_out.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.23.mlp.fc_out.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.masked_bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.out_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.ln_1.bias": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.ln_1.weight": "pytorch_model-00005-of-00006.bin",
    "transformer.h.24.mlp.fc_in.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.24.mlp.fc_in.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.24.mlp.fc_out.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.24.mlp.fc_out.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.masked_bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.out_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.ln_1.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.ln_1.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.mlp.fc_in.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.mlp.fc_in.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.mlp.fc_out.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.25.mlp.fc_out.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.masked_bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.out_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.ln_1.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.ln_1.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.mlp.fc_in.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.mlp.fc_in.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.mlp.fc_out.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.26.mlp.fc_out.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.masked_bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.out_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.ln_1.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.ln_1.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.mlp.fc_in.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.mlp.fc_in.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.mlp.fc_out.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.h.27.mlp.fc_out.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.h.3.attn.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.attn.masked_bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.attn.out_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.ln_1.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.ln_1.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.mlp.fc_in.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.mlp.fc_in.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.mlp.fc_out.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.3.mlp.fc_out.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.attn.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.attn.masked_bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.attn.out_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.ln_1.bias": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.ln_1.weight": "pytorch_model-00001-of-00006.bin",
    "transformer.h.4.mlp.fc_in.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.mlp.fc_in.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.mlp.fc_out.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.4.mlp.fc_out.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.masked_bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.out_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.ln_1.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.ln_1.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.mlp.fc_in.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.mlp.fc_in.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.mlp.fc_out.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.5.mlp.fc_out.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.masked_bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.out_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.ln_1.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.ln_1.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.mlp.fc_in.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.mlp.fc_in.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.mlp.fc_out.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.6.mlp.fc_out.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.masked_bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.out_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.ln_1.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.ln_1.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.mlp.fc_in.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.mlp.fc_in.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.mlp.fc_out.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.7.mlp.fc_out.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.masked_bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.out_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.ln_1.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.ln_1.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.mlp.fc_in.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.mlp.fc_in.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.mlp.fc_out.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.8.mlp.fc_out.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.attn.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.attn.masked_bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.attn.out_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.9.attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.9.attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.ln_1.bias": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.ln_1.weight": "pytorch_model-00002-of-00006.bin",
    "transformer.h.9.mlp.fc_in.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.9.mlp.fc_in.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.h.9.mlp.fc_out.bias": "pytorch_model-00003-of-00006.bin",
    "transformer.h.9.mlp.fc_out.weight": "pytorch_model-00003-of-00006.bin",
    "transformer.ln_f.bias": "pytorch_model-00006-of-00006.bin",
    "transformer.ln_f.weight": "pytorch_model-00006-of-00006.bin",
    "transformer.wte.weight": "pytorch_model-00001-of-00006.bin"
  }
}
```
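The JSON above is the checkpoint's shard index: `weight_map` assigns each tensor name to one of six `pytorch_model-0000N-of-00006.bin` shard files, and `metadata.total_size` gives the combined size in bytes (about 12.2 GB). A quick way to inspect such an index is to count how many tensors each shard holds. This is a minimal sketch; the `sample_index` below is a tiny stand-in for the full file, and `tensors_per_shard` is an illustrative helper, not part of any library.

```python
from collections import Counter

def tensors_per_shard(index):
    """Map each shard filename to the number of tensors stored in it."""
    return Counter(index["weight_map"].values())

# Tiny stand-in for the real index shown above.
sample_index = {
    "metadata": {"total_size": 12219206136},
    "weight_map": {
        "lm_head.weight": "pytorch_model-00006-of-00006.bin",
        "transformer.wte.weight": "pytorch_model-00001-of-00006.bin",
        "transformer.ln_f.weight": "pytorch_model-00006-of-00006.bin",
    },
}

counts = tensors_per_shard(sample_index)
# counts maps each shard file to its tensor count, e.g.
# {"pytorch_model-00006-of-00006.bin": 2, "pytorch_model-00001-of-00006.bin": 1}
```

Running the same helper on the full index would show how the loader splits the model: embeddings and early layers in shard 1, the language-model head and final layers in shard 6.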