Leco Li PRO
imnotkitty
AI & ML interests
None yet
Recent Activity
reacted to their post with 🚀 about 9 hours ago
The 2025 Chinese LLM Showdown: Western Models Still Dominate Top 4, but China Leads the Open-Source Arena.
🏆 The Champions: Claude-Opus-4.5, Gemini-3-Pro, GPT-5.2, and Gemini-3-Flash sweep the top four spots.
🚀 The Pursuers: Doubao and DeepSeek-V3.2 tie for first place among Chinese models; GLM-4.7, ERNIE-5.0, and Kimi secure their positions in the domestic top five.
🔥 The Biggest Highlight: The top three spots on the open-source leaderboard are held entirely by Team China (DeepSeek, GLM, Kimi), outperforming the best Western open-source models.
reacted to their post with 🔥 about 9 hours ago
👀 Just published a first look at Tencent HunyuanImage 3.0-Instruct!
Tested its multi-image fusion and single-reference consistency. The results on complex prompts are quite impressive.
What’s the most creative image task you’d give it?
👉 Read the full analysis: https://huggingface.co/blog/imnotkitty/tencent-hy-image-v30-i2i
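For reference, here is a minimal sketch of what a single-reference consistency test could look like, assuming the checkpoint loads through a standard diffusers image-to-image pipeline. The repo id below is illustrative, and the actual HunyuanImage 3.0-Instruct release may ship its own pipeline or loading code:

```python
# Hedged sketch: single-reference image-to-image test with diffusers.
# Assumes a diffusers-compatible checkpoint; the repo id is illustrative.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "tencent/HunyuanImage-3.0-Instruct",  # illustrative repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

# One reference image plus an edit instruction (single-reference consistency).
reference = load_image("reference.png")
result = pipe(
    prompt="Keep the subject identical, but place it in a rainy neon-lit street",
    image=reference,
    strength=0.6,        # how far the output may drift from the reference
    guidance_scale=5.0,
).images[0]
result.save("consistency_test.png")
```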
reacted to their post with 🔥 about 9 hours ago
📌 Same day, two releases.
Jan 27th just got interesting for open-source AI models.
✅ Kimi K2.5: how do you make models "think" across text and vision natively?
https://huggingface.co/moonshotai/Kimi-K2.5
✅ DeepSeek-OCR 2: how do you make models "see" more like humans, not scanners?
https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
One focuses on depth of reasoning, the other on precision of vision.
What's the key differentiator for a multimodal model in your view: raw power or computational elegance?
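For anyone who wants to poke at either release, here is a rough, hedged sketch of loading one of them, assuming both repos work with the standard transformers Auto classes plus trust_remote_code. The repo ids are copied from the post; everything else is an assumption, not a confirmed API:

```python
# Hedged sketch: loading one of the two releases from the Hub with transformers.
# Repo ids come from the post; trust_remote_code is assumed to be required
# since both models likely ship custom (multimodal) architectures.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "moonshotai/Kimi-K2.5"  # or "deepseek-ai/DeepSeek-OCR-2"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# Text-only smoke test; image inputs would go through each model's own
# processor / chat template, which the respective repos document.
prompt = "In one sentence: depth of reasoning vs. precision of vision?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```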