How to use this model with SGLang
# Gated model: log in with an HF token that has access to this gated repository
hf auth login
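Alternatively, the token can be supplied via the HF_TOKEN environment variable, which is how the Docker example below passes it (a minimal sketch; the token value is a placeholder):

# Alternative: export the token instead of logging in interactively
export HF_TOKEN=<secret>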
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "utdanningno/noredu-llama3.1-it2" \
    --host 0.0.0.0 \
    --port 30000
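Before sending requests, SGLang's /health endpoint can serve as a quick liveness check (a minimal sketch, assuming a recent SGLang release):

# Optional: confirm the server is up before sending requests
curl "http://localhost:30000/health"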
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "utdanningno/noredu-llama3.1-it2",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
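Since this merge blends two Instruct models, the OpenAI-compatible chat endpoint is likely the better fit for conversational use; a minimal sketch, assuming the repository ships a chat template (the prompt is illustrative):

# Call the chat endpoint (assumes the model provides a chat template):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "utdanningno/noredu-llama3.1-it2",
        "messages": [{"role": "user", "content": "What is utdanning.no?"}],
        "max_tokens": 512,
        "temperature": 0.5
    }'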
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "utdanningno/noredu-llama3.1-it2" \
        --host 0.0.0.0 \
        --port 30000
# Call the server with the same curl request shown above (OpenAI-compatible API).

output2

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with meta-llama/Meta-Llama-3.1-8B as the base. DARE randomly drops a fraction (1 − density) of each model's delta parameters and rescales the survivors, while TIES resolves sign conflicts between models before the weighted average is taken.

Models Merged

The following models were included in the merge:

- MagnusSa/Llama-3.1-8B-UtdanningSNL-Instruct
- meta-llama/Meta-Llama-3.1-8B-Instruct

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: MagnusSa/Llama-3.1-8B-UtdanningSNL-Instruct
    parameters:
      density: 0.53
      weight: 0.4
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      density: 0.53
      weight: 0.4
  - model: meta-llama/Meta-Llama-3.1-8B
    parameters:
      density: 0.53
      weight: 0.2
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3.1-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
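
To reproduce the merge, the YAML above can be saved to a file and passed to mergekit's mergekit-yaml entry point (a minimal sketch; noredu.yaml and ./output2 are illustrative names):

# Reproduce the merge from the configuration above (filenames are illustrative):
pip install mergekit
mergekit-yaml noredu.yaml ./output2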