🦁 Imina-Na V2: The Autonomous DePIN Security Oracle
Imina-Na V2 is a specialized 7-billion-parameter Vision-Language Model (VLM), fine-tuned to detect malicious transaction graphs and anomalies within the Agentic Economy and DePIN (Decentralized Physical Infrastructure Networks) ecosystems.
Developed as the core cognitive engine for the Sigui Protocol, this model acts as a synchronous, sub-50ms security oracle that evaluates complex on-chain interactions visually.
🚀 Hardware & AMD MI300X Supremacy
This model was trained and rigorously benchmarked natively on the AMD MI300X accelerator, leveraging the immense power of ROCm 7.0 and Unsloth.
By merging the LoRA adapters into the bfloat16 base weights and compiling the model specifically for the MI300X architecture, Imina-Na V2 achieves sub-50 ms visual inference, making synchronous visual blockchain security practical.
⚡ Official MI300X Benchmarks
Tested on AMD MI300X (192GB VRAM), ROCm 7.0, Native bfloat16, torch.compile enabled.
- Time-To-First-Token (TTFT): 35.30 ms
- Training Final Loss: 0.09189
- Framework: Transformers/Unsloth

At 35.30 ms, the model can authorize or block a complex DePIN transaction well within Ethereum's 12-second slot time, essentially preventing exploits before they land on-chain.
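The latency budget implied by the benchmarks can be sanity-checked with simple arithmetic. This is purely illustrative; the two figures come from the benchmark table above:

```python
# Illustrative latency-budget check using the benchmark figures above.
ttft_ms = 35.30        # measured time-to-first-token on the MI300X
eth_slot_ms = 12_000   # Ethereum's 12-second slot time

# Headroom left in the slot after the oracle responds
headroom_ms = eth_slot_ms - ttft_ms
print(f"headroom: {headroom_ms:.2f} ms")    # → headroom: 11964.70 ms

# How many sequential oracle calls fit inside one slot
calls_per_slot = int(eth_slot_ms // ttft_ms)
print(f"calls per slot: {calls_per_slot}")  # → calls per slot: 339
```

In other words, a single oracle response consumes well under 1% of the slot, leaving room for retries or multi-stage checks before the transaction is sequenced.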
📊 Dataset & Training
Imina-Na V2 was fine-tuned on a robust subset of the Ibonon/sigui-depin-1m dataset.
- Training Scope: 100,000 real-world transaction graphs (rendered as spatial images).
- Networks: Ethereum, Arbitrum, Polygon.
- Methodology: Unsloth 4-bit LoRA optimization.
- Duration: ~8 hours of compute on 1x AMD MI300X.

The model learned to visually distinguish between standard agentic workflows (e.g., node registration, staking, standard bridging) and catastrophic exploit topologies (e.g., flash loan attacks, malicious governance takeovers, liquidity draining).
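The dataset's rendering pipeline is not published here, so the following is only a minimal sketch of the idea of a transaction graph "rendered as a spatial image": encoding the graph's weighted adjacency matrix as a 2D grid of pixel intensities. The node names, edge values, and `render_graph` helper are all invented for illustration:

```python
# Hypothetical sketch: turn a small transaction graph into a "spatial image"
# (here, a grayscale adjacency-matrix grid). The actual Ibonon/sigui-depin-1m
# rendering pipeline is not published; names and values below are invented.
def render_graph(nodes, edges):
    """Return a 2D grid (list of rows) where pixel [i][j] holds the
    value transferred from nodes[i] to nodes[j], or 0.0 if no edge."""
    index = {node: i for i, node in enumerate(nodes)}
    grid = [[0.0] * len(nodes) for _ in nodes]
    for src, dst, value in edges:
        grid[index[src]][index[dst]] = value
    return grid

# A toy flash-loan-like loop: borrow, swap, and repay in one transaction
nodes = ["attacker", "lending_pool", "dex"]
edges = [
    ("lending_pool", "attacker", 1_000.0),  # flash loan out
    ("attacker", "dex", 1_000.0),           # manipulative swap
    ("dex", "attacker", 1_050.0),           # drained value back
    ("attacker", "lending_pool", 1_000.0),  # loan repaid
]
image = render_graph(nodes, edges)
print(image[1][0])  # value on the lending_pool → attacker edge: 1000.0
```

The point of a spatial encoding like this is that exploit topologies (tight borrow-swap-repay cycles, fan-out draining patterns) become distinctive visual motifs that a VLM can classify directly.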
💻 Usage
The model is packaged as a standard Hugging Face PEFT adapter. For maximum performance in production, we recommend merging the LoRA weights and serving via vLLM or utilizing torch.compile on an AMD MI300X.
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel
import torch

# 1. Load the base model in bfloat16
base_model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# 2. Attach the Imina-Na V2 LoRA adapter and merge it into the base weights
model = PeftModel.from_pretrained(base_model, "Ibonon/imina_na_v2_lora")
model = model.merge_and_unload()

# 3. Optimize for the MI300X
model = torch.compile(model)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# 4. Ready for sub-50ms inference
```
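Downstream, a security oracle needs a machine-readable verdict, not free text. The Sigui Protocol's actual response schema is not published, so the sketch below assumes the model is prompted to reply with a JSON object like `{"verdict": "block", "risk": 0.97, "pattern": "flash_loan"}`; the field names, threshold, and `parse_verdict` helper are hypothetical:

```python
import json

# Hypothetical post-processing for the oracle's generated text. The JSON
# schema ("risk", "pattern") and the 0.5 threshold are assumptions, not
# part of the published model card.
def parse_verdict(generated_text, risk_threshold=0.5):
    """Parse the model's JSON reply and decide whether to block the tx.
    Fails closed: anything unparseable is treated as a block."""
    try:
        reply = json.loads(generated_text)
    except json.JSONDecodeError:
        return {"verdict": "block", "reason": "unparseable model output"}
    if reply.get("risk", 1.0) >= risk_threshold:
        return {"verdict": "block", "reason": reply.get("pattern", "unknown")}
    return {"verdict": "allow", "reason": None}

print(parse_verdict('{"risk": 0.97, "pattern": "flash_loan"}'))
# → {'verdict': 'block', 'reason': 'flash_loan'}
print(parse_verdict('{"risk": 0.03}'))
# → {'verdict': 'allow', 'reason': None}
```

Failing closed on malformed output is a deliberate choice for a security oracle: a dropped transaction is recoverable, a passed exploit is not.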
🌍 Cultural Origin: The Sigui
The project is heavily inspired by the Dogon tradition of systemic renewal. Just as the historic African Sigui festival resets societal structures every 60 years, this oracle resets trust in the agentic economy with every sub-50-millisecond inference. Built in Ouagadougou.