| repo_name | repo_link | category | github_about_section | homepage_link | github_topic_closest_fit | contributors_all | contributors_2025 | contributors_2024 | contributors_2023 | contributors_2026_q1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llvm-project | https://github.com/llvm/llvm-project | compiler | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | compiler | 7,086 | 2,378 | 2,130 | 1,920 | 1,364 |
| vllm | https://github.com/vllm-project/vllm | inference engine | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | 2,351 | 1,369 | 579 | 145 | 698 |
| pytorch | https://github.com/pytorch/pytorch | machine learning framework | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning | 5,690 | 1,187 | 1,090 | 1,024 | 560 |
| transformers | https://github.com/huggingface/transformers | multi-purpose library | Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training. | https://huggingface.co/transformers | machine-learning | 3,742 | 860 | 769 | 758 | 222 |
| sglang | https://github.com/sgl-project/sglang | inference engine | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai | inference | 1,267 | 796 | 189 | 1 | 504 |
| hhvm | https://github.com/facebook/hhvm | virtual machine | A virtual machine for executing programs written in Hack. | https://hhvm.com | virtual-machine | 2,773 | 692 | 648 | 604 | 383 |
| llama.cpp | https://github.com/ggml-org/llama.cpp | inference engine | LLM inference in C/C++ | https://ggml.ai | inference | 1,573 | 535 | 575 | 461 | 246 |
| kubernetes | https://github.com/kubernetes/kubernetes | container orchestration | Production-Grade Container Scheduling and Management | https://kubernetes.io | kubernetes | 5,158 | 542 | 499 | 565 | 233 |
| tensorflow | https://github.com/tensorflow/tensorflow | machine learning framework | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | machine-learning | 4,679 | 506 | 523 | 630 | 257 |
| verl | https://github.com/volcengine/verl | reinforcement learning | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io | deep-reinforcement-learning | 584 | 454 | 10 | 0 | 153 |
| rocm-systems | https://github.com/ROCm/rocm-systems | multi-purpose library | super repo for rocm systems projects | https://amd.com/en/products/software/rocm.html | amd | 1,174 | 498 | 351 | 213 | 250 |
| ray | https://github.com/ray-project/ray | multi-purpose library | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | machine-learning | 1,473 | 397 | 223 | 230 | 173 |
| spark | https://github.com/apache/spark | data processing | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org | data-processing | 3,139 | 322 | 300 | 336 | 132 |
| goose | https://github.com/block/goose | agent | an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM | https://block.github.io/goose | ai-agents | 439 | 319 | 32 | 0 | 126 |
| elasticsearch | https://github.com/elastic/elasticsearch | search engine | Free and Open Source, Distributed, RESTful Search Engine | https://elastic.co/products/elasticsearch | search-engine | 2,344 | 316 | 284 | 270 | 200 |
| jax | https://github.com/jax-ml/jax | scientific computing | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | scientific-computing | 1,037 | 317 | 280 | 202 | 130 |
| modelcontextprotocol | https://github.com/modelcontextprotocol/modelcontextprotocol | mcp | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | mcp | 368 | 301 | 42 | 0 | 67 |
| executorch | https://github.com/pytorch/executorch | model compiler | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | inference | 503 | 267 | 243 | 77 | 136 |
| numpy | https://github.com/numpy/numpy | scientific computing | The fundamental package for scientific computing with Python. | https://numpy.org | scientific-computing | 2,217 | 237 | 233 | 252 | 80 |
| triton | https://github.com/triton-lang/triton | parallel computing dsl | Development repository for the Triton language and compiler | https://triton-lang.org | parallel-programming | 562 | 233 | 206 | 159 | 105 |
| modular | https://github.com/modular/modular | parallel computing | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com | parallel-programming | 419 | 222 | 205 | 99 | 149 |
| scipy | https://github.com/scipy/scipy | scientific computing | SciPy library main repository | https://scipy.org | scientific-computing | 2,011 | 213 | 251 | 245 | 74 |
| ollama | https://github.com/ollama/ollama | inference engine | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | inference | 599 | 202 | 314 | 97 | 40 |
| trl | https://github.com/huggingface/trl | reinforcement learning | Train transformer language models with reinforcement learning. | http://hf.co/docs/trl | reinforcement-learning | 474 | 189 | 154 | 122 | 59 |
| flashinfer | https://github.com/flashinfer-ai/flashinfer | gpu kernels | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | attention | 268 | 158 | 50 | 11 | 86 |
| aiter | https://github.com/ROCm/aiter | gpu kernels | AI Tensor Engine for ROCm | https://rocm.blogs.amd.com/software-tools-optimization/aiter-ai-tensor-engine/README.html | null | 227 | 145 | 10 | 0 | 117 |
| LMCache | https://github.com/LMCache/LMCache | inference | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai | null | 175 | 144 | 18 | 0 | 42 |
| Mooncake | https://github.com/kvcache-ai/Mooncake | inference | Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. | https://kvcache-ai.github.io/Mooncake | inference | 195 | 133 | 13 | 0 | 80 |
| torchtitan | https://github.com/pytorch/torchtitan | training framework | A PyTorch native platform for training generative AI models | https://arxiv.org/abs/2410.06511 | null | 187 | 119 | 43 | 1 | 59 |
| ao | https://github.com/pytorch/ao | quantization | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao | quantization | 219 | 114 | 100 | 5 | 67 |
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | user interface | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://comfy.org | stable-diffusion | 312 | 108 | 119 | 94 | 60 |
| unsloth | https://github.com/unslothai/unsloth | fine tuning | Fine-tuning & Reinforcement Learning for LLMs. Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM. | https://docs.unsloth.ai | fine-tuning | 186 | 108 | 29 | 3 | 55 |
| accelerate | https://github.com/huggingface/accelerate | training framework | A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support. | https://huggingface.co/docs/accelerate | null | 415 | 97 | 124 | 149 | 25 |
| terminal-bench | https://github.com/laude-institute/terminal-bench | benchmark | A benchmark for LLMs on complicated tasks in the terminal | https://tbench.ai | benchmark | 96 | 96 | 0 | 0 | 2 |
| DeepSpeed | https://github.com/deepspeedai/DeepSpeed | training framework | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://deepspeed.ai | null | 460 | 96 | 134 | 165 | 30 |
| milvus | https://github.com/milvus-io/milvus | vector database | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search | 399 | 95 | 84 | 72 | 49 |
| cutlass | https://github.com/NVIDIA/cutlass | parallel computing | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | parallel-programming | 266 | 94 | 64 | 66 | 39 |
| tilelang | https://github.com/tile-ai/tilelang | parallel computing dsl | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com | parallel-programming | 121 | 89 | 1 | 0 | 50 |
| monarch | https://github.com/meta-pytorch/monarch | distributed computing | PyTorch Single Controller | https://meta-pytorch.org/monarch | null | 103 | 85 | 0 | 0 | 45 |
| Liger-Kernel | https://github.com/linkedin/Liger-Kernel | kernel examples | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton | 140 | 78 | 61 | 0 | 31 |
| hipBLASLt | https://github.com/AMD-AGI/hipBLASLt | Basic Linear Algebra Subprograms (BLAS) | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt | matrix-multiplication | 111 | 69 | 70 | 35 | 0 |
| peft | https://github.com/huggingface/peft | fine tuning | PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | null | 292 | 69 | 111 | 115 | 25 |
| ROCm | https://github.com/ROCm/ROCm | multi-purpose library | AMD ROCm Software - GitHub Home | https://rocm.docs.amd.com | null | 168 | 67 | 61 | 44 | 25 |
| mcp-agent | https://github.com/lastmile-ai/mcp-agent | mcp | Build effective agents using Model Context Protocol and simple workflow patterns | null | mcp | 64 | 63 | 1 | 0 | 1 |
| onnx | https://github.com/onnx/onnx | machine learning interoperability | Open standard for machine learning interoperability | https://onnx.ai | onnx | 382 | 56 | 45 | 61 | 21 |
| letta | https://github.com/letta-ai/letta | agent | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com | ai-agents | 159 | 57 | 75 | 47 | 16 |
| helion | https://github.com/pytorch/helion | parallel computing dsl | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | https://helionlang.com | parallel-programming | 70 | 49 | 0 | 0 | 41 |
| openevolve | https://github.com/codelion/openevolve | evolutionary algorithm | Open-source implementation of AlphaEvolve | null | genetic-algorithm | 51 | 46 | 0 | 0 | 7 |
| lightning-thunder | https://github.com/Lightning-AI/lightning-thunder | model compiler | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | 79 | 44 | 47 | 29 | 7 |
| truss | https://github.com/basetenlabs/truss | inference engine | The simplest way to serve AI/ML models in production | https://truss.baseten.co | inference | 84 | 44 | 30 | 21 | 30 |
| cuda-python | https://github.com/NVIDIA/cuda-python | middleware | CUDA Python: Performance meets Productivity | https://nvidia.github.io/cuda-python | parallel-programming | 54 | 41 | 12 | 1 | 16 |
| warp | https://github.com/NVIDIA/warp | spatial computing | A Python framework for accelerated simulation, data generation and spatial computing. | https://nvidia.github.io/warp | physics-simulation | 90 | 40 | 29 | 17 | 24 |
| metaflow | https://github.com/Netflix/metaflow | container orchestration | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | null | 132 | 37 | 35 | 28 | 23 |
| numba | https://github.com/numba/numba | compiler | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org | null | 449 | 40 | 32 | 55 | 26 |
| SWE-bench | https://github.com/SWE-bench/SWE-bench | benchmark | SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://swebench.com | benchmark | 66 | 33 | 37 | 9 | 2 |
| Triton-distributed | https://github.com/ByteDance-Seed/Triton-distributed | distributed computing | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io | null | 37 | 30 | 0 | 0 | 11 |
| ThunderKittens | https://github.com/HazyResearch/ThunderKittens | parallel computing | Tile primitives for speedy kernels | https://hazyresearch.stanford.edu/blog/2024-10-29-tk2 | parallel-programming | 37 | 29 | 13 | 0 | 6 |
| dstack | https://github.com/dstackai/dstack | container orchestration | dstack is an open-source control plane for running development, training, and inference jobs on GPUs-across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | orchestration | 69 | 28 | 42 | 14 | 9 |
| ome | https://github.com/sgl-project/ome | container orchestration | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome | k8s | 31 | 28 | 0 | 0 | 13 |
| server | https://github.com/triton-inference-server/server | inference server | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference | 150 | 24 | 36 | 34 | 7 |
| ccache | https://github.com/ccache/ccache | compiler | ccache - a fast compiler cache | https://ccache.dev | null | 225 | 20 | 28 | 22 | 10 |
| lapack | https://github.com/Reference-LAPACK/lapack | linear algebra | LAPACK is a library of Fortran subroutines for solving the most commonly occurring problems in numerical linear algebra. | https://netlib.org/lapack | linear-algebra | 187 | 23 | 25 | 42 | 11 |
| quack | https://github.com/Dao-AILab/quack | kernel examples | A Quirky Assortment of CuTe Kernels | null | null | 35 | 17 | 0 | 0 | 14 |
| KernelBench | https://github.com/ScalingIntelligence/KernelBench | benchmark | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench | benchmark | 21 | 16 | 3 | 0 | 6 |
| reference-kernels | https://github.com/gpu-mode/reference-kernels | kernel examples | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | null | 24 | 16 | 0 | 0 | 13 |
| synthetic-data-kit | https://github.com/meta-llama/synthetic-data-kit | synthetic data generation | Tool for generating high quality Synthetic datasets | https://pypi.org/project/synthetic-data-kit | synthetic-dataset-generation | 15 | 15 | 0 | 0 | 0 |
| tritonparse | https://github.com/meta-pytorch/tritonparse | performance testing | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse | null | 27 | 15 | 0 | 0 | 15 |
| kernels | https://github.com/huggingface/kernels | gpu kernels | Load compute kernels from the Hub | null | null | 27 | 14 | 2 | 0 | 10 |
| Wan2.2 | https://github.com/Wan-Video/Wan2.2 | video generation | Wan: Open and Advanced Large-Scale Video Generative Models | https://wan.video | diffusion-models | 16 | 14 | 0 | 0 | 3 |
| Primus-Turbo | https://github.com/AMD-AGI/Primus-Turbo | training framework | Primus-Turbo is a high-performance acceleration library dedicated to large-scale model training on AMD GPUs. Built and optimized for the AMD ROCm platform, it covers the full training stack — including core compute operators (GEMM, Attention, GroupedGEMM), communication primitives, optimizer modules, low-precision comp... | null | null | 14 | 12 | 0 | 0 | 6 |
| flashinfer-bench | https://github.com/flashinfer-ai/flashinfer-bench | benchmark | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | benchmark | 18 | 11 | 0 | 0 | 9 |
| FTorch | https://github.com/Cambridge-ICCS/FTorch | middleware | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch | machine-learning | 22 | 12 | 8 | 9 | 4 |
| TensorRT | https://github.com/NVIDIA/TensorRT | inference engine | NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | null | 104 | 10 | 18 | 19 | 4 |
| TileIR | https://github.com/microsoft/TileIR | parallel computing dsl | TileIR (tile-ir) is a concise domain-specific IR designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of TVM, TileIR allows developers to focus on productiv... | null | parallel-programming | 10 | 10 | 1 | 0 | 0 |
| kernels-community | https://github.com/huggingface/kernels-community | gpu kernels | Kernel sources for https://huggingface.co/kernels-community | https://huggingface.co/kernels-community | null | 15 | 9 | 0 | 0 | 11 |
| GEAK-agent | https://github.com/AMD-AGI/GEAK-agent | agent | It is an LLM-based AI agent, which can write correct and efficient gpu kernels automatically. | null | ai-agents | 20 | 9 | 0 | 0 | 12 |
| intelliperf | https://github.com/AMDResearch/intelliperf | performance testing | Automated bottleneck detection and solution orchestration | https://arxiv.org/html/2508.20258v1 | profiling | 7 | 7 | 0 | 0 | 2 |
| cudnn-frontend | https://github.com/NVIDIA/cudnn-frontend | parallel computing | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | https://developer.nvidia.com/cudnn | parallel-programming | 14 | 6 | 5 | 1 | 3 |
| BitBLAS | https://github.com/microsoft/BitBLAS | Basic Linear Algebra Subprograms (BLAS) | BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. | null | matrix-multiplication | 17 | 5 | 14 | 0 | 0 |
| Self-Forcing | https://github.com/guandeh17/Self-Forcing | video generation | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | https://self-forcing.github.io | diffusion-models | 4 | 4 | 0 | 0 | 0 |
| TritonBench | https://github.com/thunlp/TritonBench | benchmark | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | https://arxiv.org/abs/2502.14752 | benchmark | 3 | 3 | 0 | 0 | 0 |
| hatchet | https://github.com/LLNL/hatchet | performance testing | Graph-indexed Pandas DataFrames for analyzing hierarchical performance data | https://llnl-hatchet.readthedocs.io | profiling | 25 | 3 | 6 | 8 | 1 |
| streamv2v | https://github.com/Jeff-LiangF/streamv2v | video generation | Official Pytorch implementation of StreamV2V. | https://jeff-liangf.github.io/projects/streamv2v | diffusion-models | 7 | 3 | 6 | 0 | 0 |
| mistral-inference | https://github.com/mistralai/mistral-inference | inference engine | Official inference library for Mistral models | https://mistral.ai | inference | 30 | 2 | 17 | 14 | 1 |
| omnitrace | https://github.com/ROCm/omnitrace | performance testing | Omnitrace: Application Profiling, Tracing, and Analysis | https://rocm.docs.amd.com/projects/omnitrace | profiling | 16 | 2 | 12 | 2 | 0 |
| IMO2025 | https://github.com/harmonic-ai/IMO2025 | formal mathematical reasoning | Harmonic's model Aristotle achieved gold medal performance, solving 5 problems. This repository contains the lean statement files and proofs for Problems 1-5. | https://harmonic.fun | lean | 2 | 2 | 0 | 0 | 0 |
| RaBitQ | https://github.com/gaoj0017/RaBitQ | quantization | [SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search | https://github.com/VectorDB-NTU/RaBitQ-Library | nearest-neighbor-search | 2 | 2 | 1 | 0 | 1 |
| torchdendrite | https://github.com/sandialabs/torchdendrite | machine learning framework | Dendrites for PyTorch and SNNTorch neural networks | null | null | 2 | 1 | 1 | 0 | 0 |
| triton-runner | https://github.com/toyaix/triton-runner | debugger | Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. | https://triton-runner.org | null | 2 | 1 | 0 | 0 | 2 |
| triSYCL | https://github.com/triSYCL/triSYCL | parallel computing | Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group | https://trisycl.github.io/triSYCL/Doxygen/triSYCL/html/index.html | parallel-programming | 31 | 0 | 1 | 3 | 0 |
| StreamDiffusion | https://github.com/cumulo-autumn/StreamDiffusion | image generation | StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation | https://arxiv.org/abs/2312.12491 | diffusion-models | 29 | 0 | 9 | 25 | 0 |
| wandb | https://github.com/wandb/wandb | ml visualization | The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production. | https://wandb.ai | null | 238 | 46 | 67 | 62 | 24 |
| aws-neuron-sdk | https://github.com/aws-neuron/aws-neuron-sdk | sdk | Powering AWS purpose-built machine learning chips. Blazing fast and cost effective, natively integrated into PyTorch and TensorFlow and integrated with your favorite AWS services | https://aws.amazon.com/ai/machine-learning/neuron | null | 145 | 33 | 37 | 32 | 10 |
| onnxruntime | https://github.com/microsoft/onnxruntime | machine learning interoperability | ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator | https://onnxruntime.ai | null | 877 | 237 | 213 | 213 | 107 |
| ort | https://github.com/pykeio/ort | machine learning interoperability | Fast ML inference & training for ONNX models in Rust | https://ort.pyke.io | null | 70 | 25 | 20 | 21 | 11 |
| gemlite | https://github.com/dropbox/gemlite | gpu kernels | Fast low-bit matmul kernels in Triton | null | null | 5 | 1 | 5 | 0 | 1 |
| cutile-python | https://github.com/NVIDIA/cutile-python | parallel computing | cuTile is a programming model for writing parallel kernels for NVIDIA GPUs | https://docs.nvidia.com/cuda/cutile-python | null | 20 | 10 | 0 | 0 | 14 |
| tilus | https://github.com/NVIDIA/tilus | parallel computing | Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. | https://nvidia.github.io/tilus | null | 7 | 4 | 0 | 0 | 3 |
| triton-windows | https://github.com/woct0rdho/triton-windows | parallel computing dsl | Fork of the Triton language and compiler for Windows support and easy installation | null | null | 537 | 233 | 207 | 159 | 67 |
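Each record pairs a repo's all-time contributor count with per-year counts, which makes simple trend analysis straightforward. A minimal sketch with a few rows hard-coded from the data above; the `Repo` class and the `momentum` helper are our own illustrations, not part of any dataset tooling:

```python
# Rank repos by the fraction of their all-time contributor base that was
# active in 2025 -- a rough proxy for how "new" a project's community is.
# Sample counts are copied from the contributors_all / contributors_2025 columns.
from dataclasses import dataclass


@dataclass
class Repo:
    name: str
    contributors_all: int
    contributors_2025: int


ROWS = [
    Repo("llvm-project", 7086, 2378),
    Repo("vllm", 2351, 1369),
    Repo("pytorch", 5690, 1187),
    Repo("sglang", 1267, 796),
]


def momentum(r: Repo) -> float:
    """Fraction of all-time contributors who were active in 2025."""
    return r.contributors_2025 / r.contributors_all


for r in sorted(ROWS, key=momentum, reverse=True):
    print(f"{r.name}: {momentum(r):.0%} of all-time contributors active in 2025")
```

On these four rows the ratio separates young serving projects (sglang, vllm, where most contributors ever are 2025 contributors) from mature codebases like llvm-project and pytorch, whose 2025 cohort is a small slice of the historical total.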