# How to use darwinkernelpanic/luau-codellama-7b-reasoning

Instructions for using darwinkernelpanic/luau-codellama-7b-reasoning with libraries, inference providers, notebooks, and local apps.

## Libraries

### PEFT

How to use darwinkernelpanic/luau-codellama-7b-reasoning with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base_model, "darwinkernelpanic/luau-codellama-7b-reasoning")
```
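If you need a standalone checkpoint for serving without PEFT at inference time, the adapter can be folded into the base weights. A minimal sketch using PeftModel's `merge_and_unload` (the output directory name is a hypothetical choice):

```python
# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("./luau-codellama-7b-reasoning-merged")  # hypothetical path
```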
### Transformers

How to use darwinkernelpanic/luau-codellama-7b-reasoning with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="darwinkernelpanic/luau-codellama-7b-reasoning")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("darwinkernelpanic/luau-codellama-7b-reasoning")
model = AutoModelForCausalLM.from_pretrained("darwinkernelpanic/luau-codellama-7b-reasoning")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
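To watch tokens as they are produced instead of waiting for `generate()` to finish, a `TextStreamer` can be attached. A minimal sketch reusing `tokenizer`, `model`, and `inputs` from the block above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated, skipping the prompt
# so only the new completion is shown.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=200, streamer=streamer)
```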
## Notebooks

- Google Colab
- Kaggle
## Local Apps
### vLLM
How to use darwinkernelpanic/luau-codellama-7b-reasoning with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "darwinkernelpanic/luau-codellama-7b-reasoning"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "darwinkernelpanic/luau-codellama-7b-reasoning",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
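Because the server speaks the OpenAI API, the official `openai` Python client works against it too. A minimal sketch, assuming `pip install openai` and using a dummy key since vLLM does not check one by default:

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="darwinkernelpanic/luau-codellama-7b-reasoning",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```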
### SGLang
How to use darwinkernelpanic/luau-codellama-7b-reasoning with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "darwinkernelpanic/luau-codellama-7b-reasoning" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "darwinkernelpanic/luau-codellama-7b-reasoning",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
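The SGLang endpoint is OpenAI-compatible as well, so the same request can be issued from Python. A minimal sketch with the `requests` package (any HTTP client would do):

```python
import requests

# Same chat completions endpoint the curl command above hits.
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "darwinkernelpanic/luau-codellama-7b-reasoning",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```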
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "darwinkernelpanic/luau-codellama-7b-reasoning" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "darwinkernelpanic/luau-codellama-7b-reasoning",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

### Docker Model Runner
How to use darwinkernelpanic/luau-codellama-7b-reasoning with Docker Model Runner:
```shell
docker model run hf.co/darwinkernelpanic/luau-codellama-7b-reasoning
```
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- base_model:adapter:codellama/CodeLlama-7b-hf
- lora
- transformers
datasets:
- darwinkernelpanic/luau-reasoning-normalized
pipeline_tag: text-generation
model-index:
- name: outputs/luau-codellama-h200-fast
  results: []
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.13.0.dev0`

```yaml
base_model: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

# Keep full precision weights (fast on Hopper)
load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: llama3
datasets:
  - path: darwinkernelpanic/luau-reasoning-normalized
    type: chat_template
    conversation: llama3
    field_messages: messages
    add_generation_prompt: true

# Preprocessing workers (CPU). Fine as-is.
num_proc: 16

output_dir: ./outputs/luau-codellama-h200-fast

# ===== LoRA =====
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj

# ===== Precision =====
bf16: true
fp16: false
tf32: true

# ===== Sequence / batching =====
sequence_len: 4096
# Keep packing for throughput, but enable length grouping to cut padding
sample_packing: true
group_by_length: true
# Lower micro-batch a bit to kill peak VRAM while staying fast
micro_batch_size: 5
gradient_accumulation_steps: 1

# ===== Training =====
num_epochs: 3
optimizer: adamw_torch
learning_rate: 2e-4
lr_scheduler_type: cosine
warmup_steps: 100
train_on_inputs: false

# Turn on checkpointing — tiny speed hit, big memory win
gradient_checkpointing: true
gradient_clipping: 1.0

# ===== Dataloader =====
# Keep pin_memory, but avoid too many loader workers in Accelerate
dataloader_num_workers: 2
dataloader_pin_memory: true
# Optional: avoid insanely large host->device prefetch
# dataloader_prefetch_factor: 2

# ===== Logging / eval =====
logging_steps: 25
val_set_size: 0.05
# Reduce eval/save frequency to avoid spikes
eval_steps: 1000
save_strategy: steps
save_steps: 1000
save_total_limit: 3

seed: 42

# ===== DeepSpeed =====
# Off for single H200 — overhead not worth it for 7B
```

</details><br>
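For a sense of scale, the config above trains only LoRA matrices on the four attention projections. A back-of-the-envelope count, assuming CodeLlama-7B's published shapes (hidden size 4096, 32 layers, square q/k/v/o projections), puts that at roughly 17M trainable parameters:

```python
# Rough LoRA parameter count for the config above.
# hidden and layers are assumed CodeLlama-7B shapes, not read from the checkpoint.
hidden, layers, r = 4096, 32, 16
n_targets = 4  # q_proj, k_proj, v_proj, o_proj
per_module = r * (hidden + hidden)  # A is (r x d_in), B is (d_out x r)
print(f"~{per_module * n_targets * layers / 1e6:.1f}M")  # ~16.8M
```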
# outputs/luau-codellama-h200-fast

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the darwinkernelpanic/luau-reasoning-normalized dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4927
- Perplexity (Ppl): 1.6368
- Max active memory (GiB): 19.1
- Max allocated memory (GiB): 19.1
- Device reserved memory (GiB): 139.06
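The perplexity figure is simply the exponential of the validation loss, which a one-line check confirms:

```python
import math

# Perplexity is exp(cross-entropy loss); matches the figure reported above.
print(math.exp(0.4927))  # ≈ 1.6368
```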
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 3996
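These figures are internally consistent: the results table below places step 1000 at epoch 0.7502, i.e. roughly 1333 optimizer steps per epoch, so three epochs land at about 3999 steps, in line with the 3996 training steps reported above. A quick check:

```python
# Consistency check using the epoch/step pair from the results table below.
steps_per_epoch = 1000 / 0.7502
print(round(3 * steps_per_epoch))  # 3999, close to training_steps = 3996
```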
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Perplexity | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|:-------------:|:------:|:----:|:---------------:|:----------:|:------------:|:---------------:|:--------------:|
| No log        | 0      | 0    | 1.6888          | 5.4129     | 18.94        | 18.94           | 139.12         |
| 0.5511        | 0.7502 | 1000 | 0.5410          | 1.7177     | 19.1         | 19.1            | 139.02         |
| 0.5052        | 1.5004 | 2000 | 0.5064          | 1.6593     | 19.1         | 19.1            | 139.06         |
| 0.4733        | 2.2506 | 3000 | 0.4927          | 1.6368     | 19.1         | 19.1            | 139.06         |
### Framework versions

- PEFT 0.18.0
- Transformers 4.57.1
- PyTorch 2.8.0+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1