How to use from Docker Model Runner

docker model run hf.co/sarathvaddi/vaddi-llm-v1
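Beyond the interactive `docker model run` chat, Docker Model Runner can expose an OpenAI-compatible HTTP API. The sketch below (standard-library Python only) assumes the default host TCP endpoint `http://localhost:12434/engines/v1`; the base URL, port, and the `ask` helper are assumptions to adapt to your own setup, not part of any official client.

```python
import json
import urllib.request

# Assumption: Model Runner's host TCP access is enabled on its default
# port 12434; change BASE_URL to match your configuration.
BASE_URL = "http://localhost:12434/engines/v1"
MODEL = "hf.co/sarathvaddi/vaddi-llm-v1"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def ask(prompt: str) -> str:
    """POST the prompt to the local model and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]


# Example (requires the model to be running locally):
# print(ask("Explain Spring Boot auto-configuration in two sentences."))
```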
vaddi-llm-v1

vaddi-llm-v1 is a custom AI assistant built and trained by Sarath Vaddi on Meta's Llama 3.1 architecture. Unlike general-purpose models, it is purpose-built for three specialized domains: Java & Spring Boot engineering, Bible study & Christian theology, and AIS maritime systems.

🎯 Core Specializations

💻 Java, Spring Boot & Cloud Engineering

  • Backend development & microservices
  • REST APIs & system design
  • Cloud-native patterns (AWS, Kubernetes, Docker)

📖 Bible Study & Christian Theology

  • Scripture explanation & contextual insights
  • Theological discussions & faith-based Q&A
  • Old & New Testament coverage

🚢 AIS Maritime Systems

  • Vessel tracking & AIS data processing
  • NMEA protocol & message decoding
  • VTS operations & maritime intelligence
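
To make the maritime specialization concrete, here is a minimal sketch of the kind of NMEA 0183 / AIVDM handling the model is tuned to discuss: verifying a sentence checksum (the XOR of every character between the leading `!` and the `*`) and un-armoring the 6-bit payload to read the AIS message type. All function names are illustrative, not from any library, and the sample sentence body is a dummy built for the example.

```python
from functools import reduce


def nmea_checksum(body: str) -> str:
    """XOR of all characters between '!' (or '$') and '*', as two hex digits."""
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), "02X")


def verify_sentence(sentence: str) -> bool:
    """Check the trailing checksum of an AIVDM/NMEA 0183 sentence."""
    body, _, given = sentence.lstrip("!$").partition("*")
    return nmea_checksum(body) == given.strip().upper()


def sixbit(ch: str) -> int:
    """Un-armor one character of the AIS 6-bit ASCII payload."""
    v = ord(ch) - 48
    return v - 8 if v > 40 else v


def message_type(payload: str) -> int:
    """The first 6-bit value of the payload is the AIS message type."""
    return sixbit(payload[0])


# Round-trip check with a dummy sentence body:
body = "AIVDM,1,1,,A,1,0"
sentence = "!" + body + "*" + nmea_checksum(body)
# verify_sentence(sentence) is True; a payload starting with '1' is a
# Type 1 position report.
```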

⚡ Try It Live

Chat with this model at AskVaddi

๐Ÿ‘จโ€๐Ÿ’ป Author

Sarath Vaddi โ€” Independent AI Developer

🔮 Vision

Depth over breadth: optimized for real-world usage in specific domains rather than generic responses.

📦 Model Details

  • Format: GGUF
  • Size: 8B parameters
  • Architecture: llama
