Any plans to release DepthLM 3B/7B checkpoints (or a distilled/lightweight variant)?
Hi DepthLM team, thanks for releasing the model/code!
I’m working on a research/engineering project based on DepthLM and would like to deploy it on resource-limited hardware (e.g., 1–2 GPUs or embedded devices).
The paper mentions that smaller variants (3B/7B) were trained, but I couldn't find public checkpoints for them yet.
Questions:
- Do you plan to release the 3B/7B checkpoints (or a distilled/lighter version) publicly?
- If not, is there a recommended recipe for reproducing a smaller model (e.g., distillation settings, target backbones, quantization/export suggestions)?
Any guidance would be greatly appreciated. Thank you!
Thanks for the interest! Unfortunately, due to policy constraints, we cannot release the 3B and 7B models. However, we have released the training code to enable full reproduction.

For your application, I would suggest training at a smaller scale than in our full paper: something like 8M images + a 3B model + LoRA should give you sufficient performance while requiring a reasonable amount of training resources (see the LoRA sketch below). You can also distill from our released 12B model, which should let you use small VLMs on customized datasets (a distillation sketch follows as well). Hope this helps!
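For concreteness, here's a minimal LoRA setup sketch using Hugging Face `peft`. This is not the official DepthLM recipe: the model ID is a placeholder, and the `target_modules` names assume a Llama-style backbone, so adjust them to the actual module names of whichever 3B base model you pick.

```python
# Minimal LoRA fine-tuning sketch, NOT the official DepthLM recipe.
# Assumptions: the base checkpoint loads via Hugging Face transformers,
# and the projection names below match your backbone's attention modules.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "your-org/your-3b-vlm",  # hypothetical 3B backbone, not an official DepthLM ID
    torch_dtype=torch.bfloat16,
)

lora_cfg = LoraConfig(
    r=16,                    # adapter rank; 8-32 is a common range
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # sanity check: only the adapters are trainable
# ...then train with your usual loop / Trainer on the ~8M-image subset.
```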
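And here is a generic response-based distillation loss you could start from when distilling the released 12B model into a small VLM. This is a sketch, not DepthLM's official pipeline; it assumes teacher and student share a tokenizer/vocabulary (if not, distill on generated text targets instead of logits), and DepthLM-specific details such as the prompt format are omitted.

```python
# Generic logit-distillation sketch, NOT DepthLM's official pipeline.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft KL term against the teacher with the usual hard-label CE.

    student_logits / teacher_logits: (N, vocab), one row per unmasked token;
    labels: (N,) token ids, with -100 marking positions to ignore.
    """
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature rescaling of the soft-label gradient
    ce = F.cross_entropy(student_logits, labels, ignore_index=-100)
    return alpha * kd + (1.0 - alpha) * ce
```

In a training loop, the 12B teacher's forward pass would run in eval mode under `torch.no_grad()`, with only the student receiving gradients.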