Instructions to use UsernameJustAnother/Nemo-12B-Marlin-v6 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use UsernameJustAnother/Nemo-12B-Marlin-v6 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UsernameJustAnother/Nemo-12B-Marlin-v6")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v6")
model = AutoModelForCausalLM.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v6")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
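Since this is a story/RP finetune, you will usually want sampled rather than greedy decoding. A minimal sketch reusing the pipeline above; the sampling values are illustrative assumptions, not settings recommended by the author:

# Illustrative sampling parameters -- tune to taste
out = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # assumption: a typical creative-writing temperature
    top_p=0.95,
)
# With chat-style input, the last turn in generated_text is the model's reply
print(out[0]["generated_text"][-1]["content"])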
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use UsernameJustAnother/Nemo-12B-Marlin-v6 with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UsernameJustAnother/Nemo-12B-Marlin-v6"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v6",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
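The same endpoint can be called from Python. A minimal sketch using the official openai client against the local server; the base URL and dummy API key are assumptions based on vLLM's default OpenAI-compatible setup:

from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="UsernameJustAnother/Nemo-12B-Marlin-v6",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)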
Use Docker
docker model run hf.co/UsernameJustAnother/Nemo-12B-Marlin-v6
  - SGLang
How to use UsernameJustAnother/Nemo-12B-Marlin-v6 with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "UsernameJustAnother/Nemo-12B-Marlin-v6" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v6",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "UsernameJustAnother/Nemo-12B-Marlin-v6" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UsernameJustAnother/Nemo-12B-Marlin-v6",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'

  - Unsloth Studio
How to use UsernameJustAnother/Nemo-12B-Marlin-v6 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v6 to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v6 to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v6 to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="UsernameJustAnother/Nemo-12B-Marlin-v6",
    max_seq_length=2048,
)
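Once loaded, the model and tokenizer behave like standard Transformers objects, so a quick chat-style generation can reuse the tokenizer's chat template (v6 was trained in ChatML, per the notes below, and the bundled template is assumed to handle that formatting). A minimal sketch; the prompt and generation settings are illustrative, not author recommendations:

messages = [
    {"role": "user", "content": "Write the opening paragraph of a sea story."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))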
  - Docker Model Runner
How to use UsernameJustAnother/Nemo-12B-Marlin-v6 with Docker Model Runner:
docker model run hf.co/UsernameJustAnother/Nemo-12B-Marlin-v6
Uploaded model
- Developed by: UsernameJustAnother
- License: apache-2.0
- Finetuned from model: unsloth/Mistral-Nemo-Instruct-2407
Standard disclaimer: This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
New for v6:
- Slightly different source mix. Down to 8,000 records of mostly-human convos and stories, curated by me, trained in ChatML.
- The stories have been edited to remove author's notes, and the RP chats tweaked to remove many ministrations.
- Different learning rate and back to Celeste's scaling factor setup (but Celeste trained on -base, this is -instruct).
- Now with added eval! I worked out how to get eval stats (and wandb) set up, so now I can see my failures in graphical form.
I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher.
And of course yay Unsloth for letting this all train on a single A100 with variable (wildly variable) context length.
Here's what the train/eval loss looked like (eval is orange, train is blue). I think that's not terrible, but :shrug:.
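(The chart itself is on the model page.) If you want to reproduce a similar plot locally rather than from wandb, the train/eval loss can be pulled out of the trainer's log history. A minimal sketch, assuming a transformers/TRL trainer object that has already finished trainer.train():

import matplotlib.pyplot as plt

history = trainer.state.log_history  # list of dicts logged during training
train = [(h["step"], h["loss"]) for h in history if "loss" in h]
evals = [(h["step"], h["eval_loss"]) for h in history if "eval_loss" in h]

plt.plot(*zip(*train), label="train loss", color="tab:blue")
plt.plot(*zip(*evals), label="eval loss", color="tab:orange")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()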
It was trained with the following settings:
model = FastLanguageModel.get_peft_model(
model,
r = 256,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128, # 128 / sqrt(256) gives a scaling factor of 8
lora_dropout = 0.1, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
# [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
use_rslora = True, # setting the adapter scaling factor to lora_alpha/math.sqrt(r) instead of lora_alpha/r
loftq_config = None, # And LoftQ
)
lr_scheduler_kwargs = {
'min_lr': 0.0000024 # Adjust this value as needed
}
per_device_train_batch_size = 2,
per_device_eval_batch_size = 2, # defaults to 8!
gradient_accumulation_steps = 4,
eval_accumulation_steps = 4,
prediction_loss_only = True, # When performing evaluation and generating predictions, only returns the loss.
warmup_steps = 50,
num_train_epochs = 2, # For longer training runs! 12 hrs/epoch?
learning_rate = 1e-5, # 8e-5 used by Celeste, 0.0001 is from the paper, halving it. tried 5e-5, now 1e-5.
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
fp16_full_eval = True, # stops eval from trying to use fp32
eval_strategy = "steps", # 'no', 'steps', 'epoch'. Don't use this without an eval dataset etc
eval_steps = 100, # if eval_strategy is set to 'steps', evaluate every N steps.
logging_steps = 5, # so eval and logging happen on the same schedule
optim = "adamw_8bit", # 8-bit AdamW (bitsandbytes)
weight_decay = 0, # no weight decay for this run
lr_scheduler_type = "cosine_with_min_lr", # linear, cosine, cosine_with_min_lr, default linear
lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
seed = 3407,
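For reference, here is roughly how those kwargs plug into TRL's SFTTrainer under Unsloth. This is a minimal sketch of the wiring, not the exact training script: the dataset variables, text field name, output_dir, and report_to="wandb" are placeholders/assumptions (wandb matching the eval logging mentioned above).

from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,                  # the get_peft_model(...) output above
    tokenizer = tokenizer,
    train_dataset = train_ds,       # placeholder: the curated ~8k-record set
    eval_dataset = eval_ds,         # placeholder: held-out split for the eval curve
    dataset_text_field = "text",    # placeholder field name
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        per_device_eval_batch_size = 2,
        gradient_accumulation_steps = 4,
        eval_accumulation_steps = 4,
        prediction_loss_only = True,
        warmup_steps = 50,
        num_train_epochs = 2,
        learning_rate = 1e-5,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        fp16_full_eval = True,
        eval_strategy = "steps",
        eval_steps = 100,
        logging_steps = 5,
        optim = "adamw_8bit",
        weight_decay = 0,
        lr_scheduler_type = "cosine_with_min_lr",
        lr_scheduler_kwargs = lr_scheduler_kwargs,
        seed = 3407,
        report_to = "wandb",        # assumption: matches the wandb logging above
        output_dir = "outputs",     # placeholder
    ),
)
trainer_stats = trainer.train()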
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.