# MetalRT iOS Models

Pre-packaged MetalRT model archives for the RunAnywhere iOS app. Each `.tar.gz` contains all files needed for on-device inference via MetalRT's custom Metal GPU kernels.

## Models

| Archive | Category | Source | Size |
|---|---|---|---|
| `qwen3-0.6b-metalrt.tar.gz` | LLM | `runanywhere/qwen3_0.6B_MLX_4bit` | - |
| `qwen3-4b-metalrt.tar.gz` | LLM | `runanywhere/qwen3_4B_mlx_4bit` | - |
| `llama3-3b-metalrt.tar.gz` | LLM | `runanywhere/Llama_32_3B_4bit` | - |
| `lfm25-1.2b-metalrt.tar.gz` | LLM | `mlx-community/LFM2.5-1.2B-Instruct-4bit` | - |
| `whisper-tiny-metalrt.tar.gz` | STT | `runanywhere/whisper_tiny_4bit` | - |
| `whisper-small-metalrt.tar.gz` | STT | `runanywhere/whisper_small_4bit` | - |
| `kokoro-metalrt.tar.gz` | TTS | `runanywhere/kokoro_bf16` | - |
| `qwen3-vl-2b-metalrt.tar.gz` | VLM | `runanywhere/Qwen3-VL-2B-Instruct-4bit` | - |
| `lfm25-vl-metalrt.tar.gz` | VLM | `mlx-community/LFM2.5-VL-1.6B-6bit` | - |

## Usage

These archives are automatically downloaded by the RunAnywhere iOS example app. Each archive extracts to a directory containing `model.safetensors`, `config.json`, `tokenizer.json`, and other model-specific files.
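For inspecting an archive outside the app, the extraction step can be sketched in Python. This is a minimal sketch, assuming each archive unpacks to a single top-level model directory as described above; the helper name and destination path are illustrative, not part of the RunAnywhere SDK.

```python
import tarfile
from pathlib import Path

# Files every archive is expected to contain, per the description above.
EXPECTED_FILES = {"model.safetensors", "config.json", "tokenizer.json"}

def extract_model_archive(archive_path: str, dest_dir: str) -> Path:
    """Extract a MetalRT .tar.gz into dest_dir and return the model directory.

    Assumes the archive holds one top-level directory with the model files.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest)
    # Locate the single extracted model directory.
    model_dir = next(p for p in dest.iterdir() if p.is_dir())
    # Sanity-check that the core model files are present.
    missing = EXPECTED_FILES - {f.name for f in model_dir.iterdir()}
    if missing:
        raise FileNotFoundError(f"archive missing expected files: {missing}")
    return model_dir
```

A model-specific archive may carry additional files (e.g. tokenizer merges or audio configs), so the check above only covers the common trio.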

## Download URLs

```
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/qwen3-0.6b-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/qwen3-4b-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/llama3-3b-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/lfm25-1.2b-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/whisper-tiny-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/whisper-small-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/kokoro-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/qwen3-vl-2b-metalrt.tar.gz
https://huggingface.co/runanywhere/metalrt-ios/resolve/main/lfm25-vl-metalrt.tar.gz
```
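Since every URL follows the same `resolve/main` pattern, the full list can be generated from the archive names alone. A small Python sketch (the function name is illustrative):

```python
# Base path shared by all archives in this repo, per the URL list above.
BASE_URL = "https://huggingface.co/runanywhere/metalrt-ios/resolve/main"

# Archive file names from the Models table.
ARCHIVES = [
    "qwen3-0.6b-metalrt.tar.gz",
    "qwen3-4b-metalrt.tar.gz",
    "llama3-3b-metalrt.tar.gz",
    "lfm25-1.2b-metalrt.tar.gz",
    "whisper-tiny-metalrt.tar.gz",
    "whisper-small-metalrt.tar.gz",
    "kokoro-metalrt.tar.gz",
    "qwen3-vl-2b-metalrt.tar.gz",
    "lfm25-vl-metalrt.tar.gz",
]

def download_url(archive: str) -> str:
    """Return the direct download URL for a given archive name."""
    return f"{BASE_URL}/{archive}"
```

This keeps a client's model catalog to a list of file names rather than full URLs.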