Bib Projects (bibproj)
AI & ML interests: None yet
Recent Activity
updated a collection: Translation models (MLX) · 3 days ago
updated a model: mlx-community/translategemma-27b-it-bf16 · 3 days ago
published a model: mlx-community/translategemma-27b-it-bf16 · 3 days ago
Organizations

Discussions
Quantization question · 1 · #1 opened 25 days ago by Galathana
Anyone running this with an M4 Max 128 GB? How does it compare to 4-bit quantization? · 5 · #1 opened about 1 month ago by tumma72
Feedback · 17 · #1 opened about 1 month ago by bibproj
Running the model with dense attention · 3 · #35 opened about 1 month ago by sszymczyk
Can the M2 Ultra Mac Pro with 192 GB memory run this model? · 23 · #1 opened about 1 month ago by HanningLiu
Template Think issue · 3 · #2 opened about 1 month ago by marutichintan
[SOLVED] No instruction following: model just outputs vaguely relevant text or goes into loops · 5 · #1 opened 2 months ago by bibproj
How much memory to run with 8K context? · 1 · #1 opened about 1 month ago by celsowm
Please create a 4-bit DWQ quant MLX version · 2 · #1 opened about 2 months ago by Narutoouz
Not working in LM Studio (Mac) · 8 · #1 opened 2 months ago by riddhidutta
shisa-v2-llama3.1-405b-Q8_0-00010-of-00010.gguf is missing · #1 opened about 2 months ago by bibproj
TypeError: TextConfig.__init__() missing 1 required positional argument: 'rope_theta' · 7 · #1 opened 2 months ago by bibproj
Error when loading model · 3 · #1 opened 2 months ago by Sportsandfragrance
Update README.md · #1 opened 2 months ago by bibproj
Minnimax-M2-max-8bit-gs32 not supported · 1 · #1 opened 3 months ago by Elonqq
General questions · 4 · #8 opened 3 months ago by Hansi2024
DWQ quant · 1 · #1 opened 4 months ago by sm54
How to create something similar for DeepSeek V3-0324? · 12 · #1 opened 6 months ago by bibproj