Qwen3-4B-Islamic-Arabic-INT4

W4A16 INT4 quantized version of Qwen3-4B-Islamic-Arabic for fast vLLM serving — 2.5 GB.

This is a W4A16 (4-bit weights, 16-bit activations) quantized version of NightPrince/Qwen3-4B-Islamic-Arabic, produced using llm-compressor with the compressed-tensors format. The lm_head layer is kept in FP16 to preserve output quality.

At 2.5 GB, this variant fits comfortably on a single 11 GB GPU (RTX 2080 Ti, RTX 3080, etc.) and is the recommended choice for high-throughput production serving via vLLM.
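
The exact quantization recipe is not published with this card. As a reference point, the standard llm-compressor W4A16 flow looks like the sketch below; the calibration dataset and sample counts are illustrative assumptions, not the settings actually used for this checkpoint.

```python
# Hypothetical reconstruction of the W4A16 quantization step with llm-compressor.
# The recipe shape follows the llm-compressor quickstart; the dataset and sample
# counts below are assumptions, not the actual settings used for this model.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# 4-bit weights / 16-bit activations on all Linear layers; lm_head left in FP16.
recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])

oneshot(
    model="NightPrince/Qwen3-4B-Islamic-Arabic",  # FP16 source model
    dataset="open_platypus",                      # placeholder calibration set
    recipe=recipe,
    output_dir="Qwen3-4B-Islamic-Arabic-INT4",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```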

Trained by Yahya Alnwsany (NightPrince) — 2026-05-05.


Model Variants

| Variant | Repo | Description |
|---|---|---|
| Merged FP16 | NightPrince/Qwen3-4B-Islamic-Arabic | Canonical merged model, FP16, ~7.6 GB; drop-in for transformers or vLLM |
| LoRA Adapter | NightPrince/Qwen3-4B-Islamic-Arabic-LoRA | PEFT adapter only, 264 MB; apply on top of Qwen/Qwen3-4B |
| INT4 Quantized (this model) | NightPrince/Qwen3-4B-Islamic-Arabic-INT4 | W4A16 compressed-tensors for fast vLLM serving, 2.5 GB |
| MLX 4-bit | NightPrince/Qwen3-4B-Islamic-Arabic-mlx-4Bit | Apple Silicon / MLX; native Mac inference, 4-bit quantized |
| GGUF | NightPrince/Qwen3-4B-Islamic-Arabic-GGUF | llama.cpp / Ollama / LM Studio; Q4_K_M (2.3 GB), Q8_0 (4.0 GB), F16 (7.5 GB) |
| Dataset | NightPrince/islamic-arabic-qa | 17,944 train / 2,101 val / 1,042 test; Islamic Arabic Q&A pairs |

Usage

vLLM Serving (Recommended)

```bash
# Install vLLM
pip install vllm

# Serve the INT4 model — fits on a single 11 GB GPU
vllm serve NightPrince/Qwen3-4B-Islamic-Arabic-INT4 \
    --quantization compressed-tensors \
    --dtype float16 \
    --enforce-eager \
    --max-model-len 4096 \
    --port 8000
```

`--enforce-eager` disables CUDA graph capture, which is recommended for compressed-tensors quantized models to ensure compatibility. On newer vLLM versions you may omit it when throughput matters more than compatibility.
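
The same checkpoint also works for offline batch inference through vLLM's Python API, with constructor arguments mirroring the server flags above. A minimal sketch:

```python
from vllm import LLM, SamplingParams

# Constructor arguments mirror the `vllm serve` flags above.
llm = LLM(
    model="NightPrince/Qwen3-4B-Islamic-Arabic-INT4",
    quantization="compressed-tensors",
    dtype="float16",
    enforce_eager=True,
    max_model_len=4096,
)

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)

# "What are the conditions for a valid sale contract in Islamic jurisprudence?"
messages = [{"role": "user", "content": "ما هي شروط صحة عقد البيع في الفقه الإسلامي؟"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```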

OpenAI-Compatible Client

Once the server is running:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# "You are a specialized Islamic scholar assistant. Answer questions accurately,
#  based on the Holy Qur'an, the Prophetic Sunnah, and classical Islamic
#  jurisprudence. Cite sources where possible. Be concise but comprehensive."
SYSTEM_PROMPT = (
    "أنت مساعد عالم إسلامي متخصص. "
    "أجب على الأسئلة بدقة استناداً إلى القرآن الكريم والسنة النبوية والفقه الإسلامي الكلاسيكي. "
    "استشهد بالمصادر حيثما أمكن. كن موجزاً لكن شاملاً."
)

response = client.chat.completions.create(
    model="NightPrince/Qwen3-4B-Islamic-Arabic-INT4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # "What are the conditions for a valid sale contract in Islamic jurisprudence?"
        {"role": "user", "content": "ما هي شروط صحة عقد البيع في الفقه الإسلامي؟"},
    ],
    max_tokens=512,
    temperature=0.7,
    top_p=0.9,
)
print(response.choices[0].message.content)
```
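
For interactive use, the same endpoint supports streaming. This variant prints tokens as they arrive:

```python
# Streaming variant of the request above; tokens print as they are generated.
stream = client.chat.completions.create(
    model="NightPrince/Qwen3-4B-Islamic-Arabic-INT4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "ما هي شروط صحة عقد البيع في الفقه الإسلامي؟"},
    ],
    max_tokens=512,
    temperature=0.7,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```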

Multi-GPU Serving

```bash
# Two GPUs for higher throughput
vllm serve NightPrince/Qwen3-4B-Islamic-Arabic-INT4 \
    --quantization compressed-tensors \
    --dtype float16 \
    --enforce-eager \
    --tensor-parallel-size 2 \
    --max-model-len 8192 \
    --port 8000
```

Quantization Details

| Property | Value |
|---|---|
| Quantization scheme | W4A16 (4-bit weights, 16-bit activations) |
| Format | compressed-tensors (vLLM native) |
| Quantization tool | llm-compressor |
| lm_head | Kept in FP16 |
| Quantized size | ~2.5 GB |
| Source model | NightPrince/Qwen3-4B-Islamic-Arabic (FP16, ~7.6 GB) |
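
You can confirm the scheme without downloading the weights by reading the checkpoint's config.json. The quantization_config layout below is the standard compressed-tensors one, so treat the exact keys as an assumption:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only config.json, not the weight shards.
path = hf_hub_download(
    repo_id="NightPrince/Qwen3-4B-Islamic-Arabic-INT4",
    filename="config.json",
)
with open(path) as f:
    qcfg = json.load(f).get("quantization_config", {})
print(json.dumps(qcfg, indent=2))  # scheme, group size, ignored modules, etc.
```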

Hardware Requirements

| Configuration | VRAM required |
|---|---|
| Single GPU (INT4) | ~3–4 GB (fits on an 8 GB GPU) |
| Single GPU + long context (8K) | ~6–8 GB |
| Recommended minimum | 1× 8 GB GPU |

Note

Quantized with the llm-compressor W4A16 scheme. The `lm_head` layer is kept in FP16 to preserve logit quality. This model is designed for vLLM with `--quantization compressed-tensors` and is not compatible with transformers quantization backends (GPTQ, AWQ). For CPU or llama.cpp inference, use the GGUF variant instead.


Citation

```bibtex
@misc{alnwsany2026qwen3islamicarabic,
  author       = {Yahya Alnwsany},
  title        = {Qwen3-4B-Islamic-Arabic: QLoRA Fine-Tuning of Qwen3-4B on Islamic Arabic Q\&A},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/NightPrince/Qwen3-4B-Islamic-Arabic}},
  note         = {Base model: Qwen/Qwen3-4B. Dataset: NightPrince/islamic-arabic-qa.}
}
```

License

Apache 2.0 — consistent with the base model Qwen/Qwen3-4B.
