How to use from SGLang
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "NextGLab/ORANSight_Phi_Mini_Instruct" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "NextGLab/ORANSight_Phi_Mini_Instruct",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
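Because the endpoint is OpenAI-compatible, you can also query it from Python with the official openai client. Below is a minimal sketch; it assumes the server started above is listening on localhost:30000 and that no API key is enforced (SGLang's default).

# Minimal sketch: query the SGLang server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="EMPTY",  # placeholder; SGLang ignores the key unless one is configured
)

response = client.chat.completions.create(
    model="NextGLab/ORANSight_Phi_Mini_Instruct",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)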
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "NextGLab/ORANSight_Phi_Mini_Instruct" \
        --host 0.0.0.0 \
        --port 30000
# Call the server using curl exactly as shown in the pip section above; the API is identical.
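For interactive use, the same OpenAI-compatible endpoint also supports streamed responses. A minimal sketch in Python, assuming the containerized server above is reachable on localhost:30000 and no API key is enforced:

# Minimal sketch: stream tokens from the SGLang server as they are generated.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="NextGLab/ORANSight_Phi_Mini_Instruct",
    messages=[{"role": "user", "content": "Explain the E2 interface."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental piece of the reply (may be empty)
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)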
Model Card for ORANSight Phi-Mini

This model belongs to the first release of the ORANSight family of models.

  • Developed by: NextG lab @ NC State
  • License: MIT
  • Context Window: 128K tokens
  • Fine-Tuning Framework: Unsloth

Generate with Transformers

Below is a quick example of how to use the model with Hugging Face Transformers:

from transformers import pipeline

# Example query
messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Load the model
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Phi_Mini_Instruct")

# Generate a reply; max_new_tokens bounds the length of the response
result = chatbot(messages, max_new_tokens=512)
print(result[0]["generated_text"])
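If you need finer control over generation (sampling, token limits, dtype, device placement), you can load the model directly instead of using the pipeline. A minimal sketch, assuming a CUDA-capable GPU; adjust torch_dtype and device_map for your hardware:

# Sketch: load the model directly for finer control over generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NextGLab/ORANSight_Phi_Mini_Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Apply the model's chat template, then generate
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))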

Coming Soon

A detailed paper documenting the experiments and results achieved with this model will be available soon. Meanwhile, if you try this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.

@article{gajjar2024oran,
  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
  author={Gajjar, Pranshav and Shah, Vijay K},
  journal={arXiv preprint arXiv:2407.06245},
  year={2024}
}

The released checkpoint has 4B parameters, stored as BF16 Safetensors.