Instructions for using UsernameJustAnother/Nemo-12B-Marlin-v5 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use UsernameJustAnother/Nemo-12B-Marlin-v5 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UsernameJustAnother/Nemo-12B-Marlin-v5")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v5")
model = AutoModelForCausalLM.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v5")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
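This is a 12B-parameter model, so full fp32 weights will not fit on most single GPUs. A minimal sketch of loading in bfloat16 (about 24 GB of weights) with automatic device placement; the `torch_dtype` and `device_map` arguments are standard `from_pretrained` options rather than anything specific to this model, and `device_map="auto"` requires the `accelerate` package:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UsernameJustAnother/Nemo-12B-Marlin-v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in bfloat16 and let Accelerate place the layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```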
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use UsernameJustAnother/Nemo-12B-Marlin-v5 with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UsernameJustAnother/Nemo-12B-Marlin-v5"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v5",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
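Because the vLLM server (and the SGLang server below) exposes an OpenAI-compatible API, you can also call it from Python with the `openai` client instead of curl. A minimal sketch, assuming the server from the snippet above is running locally; use port 30000 for SGLang, and note the API key is a placeholder since the local server does not check it:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="UsernameJustAnother/Nemo-12B-Marlin-v5",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```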
- SGLang
How to use UsernameJustAnother/Nemo-12B-Marlin-v5 with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "UsernameJustAnother/Nemo-12B-Marlin-v5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v5",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "UsernameJustAnother/Nemo-12B-Marlin-v5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v5",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Unsloth Studio
How to use UsernameJustAnother/Nemo-12B-Marlin-v5 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v5 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v5 to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for UsernameJustAnother/Nemo-12B-Marlin-v5 to start chatting.
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="UsernameJustAnother/Nemo-12B-Marlin-v5",
    max_seq_length=2048,
)
```
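The objects returned by `FastModel.from_pretrained` behave like ordinary Transformers model and tokenizer instances, so a quick chat-style generation can follow the same pattern as the Transformers snippet earlier. A rough sketch; the generation settings are illustrative and not taken from the model card:

```python
# Reuses `model` and `tokenizer` from the FastModel snippet above.
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```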
- Docker Model Runner
How to use UsernameJustAnother/Nemo-12B-Marlin-v5 with Docker Model Runner:
```sh
docker model run hf.co/UsernameJustAnother/Nemo-12B-Marlin-v5
```
Uploaded model
- Developed by: UsernameJustAnother
- License: apache-2.0
- Finetuned from model: unsloth/Mistral-Nemo-Instruct-2407
I am a terrible liar. I came across another dataset I had to use, and this is the result. Still experimental, as I made these to teach myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
It is an RP finetune trained in ChatML format on 10,801 human-generated conversations of varying lengths, drawn from a variety of sources and curated by me.
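For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A generic example of the prompt shape (the system prompt here is made up for illustration, not taken from the training data):

```python
# Generic ChatML-formatted prompt; the open assistant turn at the end is where the model continues.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful roleplay partner.<|im_end|>\n"
    "<|im_start|>user\n"
    "Who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```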
The big difference from Celeste is the LoRA scaling factor: Celeste uses 8, while I ran several tests with this data and got lower training loss with 2.
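To make the comparison concrete: rsLoRA (enabled in the config below) scales the adapters by lora_alpha / sqrt(r) rather than the plain LoRA lora_alpha / r, matching the comments in the training settings further down.

```python
import math

r, lora_alpha = 256, 32
print(lora_alpha / math.sqrt(r))  # rsLoRA scaling factor: 32 / 16 = 2.0
print(lora_alpha / r)             # plain LoRA scaling at the same settings: 0.125
# A Celeste-style factor of 8 at r = 256 would need lora_alpha = 8 * sqrt(256) = 128 under rsLoRA.
```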
Training took around 5 hours on a single Colab A100 (but I didn't do an eval loop). Neat that I could get it all to fit into 40 GB of VRAM thanks to Unsloth.
It was trained with the following settings:
```text
Unsloth - 2x faster free finetuning | Num GPUs = 1
Num examples = 10,801 | Num Epochs = 2
Batch size per device = 2 | Gradient Accumulation steps = 4
Total batch size = 8 | Total steps = 2,700
Number of trainable parameters = 912,261,120

Completed run: 2040/2040 steps in 3:35:30 (Epoch 2/2)
```
```python
# Imports assumed from the Unsloth training notebook (not shown in the original card)
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 32,   # 32 / sqrt(256) gives a scaling factor of 2
    lora_dropout = 0,  # Supports any, but = 0 is optimized
    bias = "none",     # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,  # sets the adapter scaling factor to lora_alpha/math.sqrt(r) instead of lora_alpha/r
    loftq_config = None,  # And LoftQ
)

lr_scheduler_kwargs = {
    'min_lr': 0.0000024  # Adjust this value as needed
}

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_ds,
    compute_metrics = compute_metrics,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False,  # packing = True can make training 5x faster for short sequences
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        per_device_eval_batch_size = 2,  # defaults to 8!
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        num_train_epochs = 2,
        learning_rate = 8e-5,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        fp16_full_eval = True,  # stops eval from trying to use fp32
        eval_strategy = "no",   # 'no', 'steps', or 'epoch'; don't enable without an eval dataset
        eval_steps = 1,         # if eval_strategy is set to 'steps', evaluate every N steps
        logging_steps = 1,      # so eval and logging happen on the same schedule
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "cosine_with_min_lr",  # linear, cosine, cosine_with_min_lr; default is linear
        lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
        seed = 3407,
        output_dir = "outputs",
    ),
)
```
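As a quick sanity check on the banner numbers above, the effective batch size and step count follow directly from these settings:

```python
per_device_batch = 2
grad_accum = 4
num_gpus = 1
examples = 10_801
epochs = 2

effective_batch = per_device_batch * grad_accum * num_gpus   # 8
approx_steps = examples * epochs // effective_batch          # ~2,700 optimizer steps
print(effective_batch, approx_steps)
```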
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.