How to use

Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for NextGLab/ORANSight_Phi_Mini_Instruct to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for NextGLab/ORANSight_Phi_Mini_Instruct to start chatting
Use Hugging Face Spaces (no setup required)

# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for NextGLab/ORANSight_Phi_Mini_Instruct to start chatting
Load the model with Unsloth's FastModel

pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="NextGLab/ORANSight_Phi_Mini_Instruct",
    max_seq_length=2048,  # context to allocate; the model supports up to 128K
)
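Once loaded, the model and tokenizer can be used for generation directly. The following is a minimal sketch, not official Unsloth documentation: it assumes a CUDA-capable GPU with enough memory for the 4B-parameter model and that the tokenizer ships a chat template.

```python
from unsloth import FastModel

# Load the model as above (downloads roughly 8 GB of BF16 weights on first run)
model, tokenizer = FastModel.from_pretrained(
    model_name="NextGLab/ORANSight_Phi_Mini_Instruct",
    max_seq_length=2048,
)

messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Render the chat template into input ids and generate a reply
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```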
Model Card for ORANSight Phi-Mini

This model belongs to the first release of the ORANSight family of models.

  • Developed by: NextG Lab @ NC State
  • License: MIT
  • Context Window: 128K tokens
  • Fine-Tuning Framework: Unsloth

Generate with Transformers

Below is a quick example of how to use the model with Hugging Face Transformers:

from transformers import pipeline

# Example query
messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Load the model and generate a response
chatbot = pipeline("text-generation", model="NextGLab/ORANSight_Phi_Mini_Instruct")
result = chatbot(messages, max_new_tokens=256)
print(result[0]["generated_text"])
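The pipeline applies the model's chat template to the messages list automatically. As an illustration of roughly what that template produces, here is a hypothetical formatter in the Phi-3 style (`<|system|>`, `<|user|>`, `<|assistant|>`, `<|end|>` markers). This format is an assumption based on the Phi base model, so verify against `tokenizer.apply_chat_template` before relying on it:

```python
def format_phi3_prompt(messages):
    """Hypothetical sketch of a Phi-3-style chat prompt (assumed format,
    not read from the real tokenizer's chat template)."""
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    parts.append("<|assistant|>\n")  # generation prompt: cue the model to reply
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]
print(format_phi3_prompt(messages))
```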

Coming Soon

A detailed paper documenting the experiments and results behind this model will be available soon. In the meantime, if you use this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.

@article{gajjar2024oran,
  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
  author={Gajjar, Pranshav and Shah, Vijay K},
  journal={arXiv preprint arXiv:2407.06245},
  year={2024}
}

Model size: 4B parameters (safetensors, BF16)