Released on February 5, 2026
## New model additions

### EXAONE-MoE

K-EXAONE is a large-scale multilingual language model developed by LG AI Research. Built on a Mixture-of-Experts architecture, K-EXAONE has 236 billion total parameters, 23 billion of which are active during inference. Evaluations across a range of benchmarks show that K-EXAONE excels at reasoning, agentic capabilities, general knowledge, multilingual understanding, and long-context processing.

* Add EXAONE-MoE implementations by @nuxlear

### PP-DocLayoutV3

PP-DocLayoutV3 is a unified, high-efficiency model for comprehensive document layout analysis. It addresses complex physical distortions, such as skewing, curving, and adverse lighting, by integrating instance segmentation and reading-order prediction into a single end-to-end framework.

* [Model] Add PP-DocLayoutV3 Model Support by @zhang-prog

### Youtu-LLM

Youtu-LLM is a new, small, yet powerful LLM: it contains only 1.96B parameters, supports 128k long context, and has native agentic abilities. On general eval
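To illustrate the Mixture-of-Experts figures above (236B total parameters, 23B active per token), here is a minimal sketch of top-k expert routing. The expert count, k, and router scores are purely illustrative and are not K-EXAONE's actual configuration.

```python
# Minimal top-k MoE routing sketch. All numbers here are hypothetical,
# chosen only to show why active parameters are a fraction of the total.

def top_k_experts(router_logits, k):
    """Return indices of the k highest-scoring experts for one token."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    return ranked[:k]

# Hypothetical layer: 64 experts, 4 routed per token.
num_experts, active_per_token = 64, 4
logits = [((i * 37) % 64) / 64 for i in range(num_experts)]  # dummy scores
chosen = top_k_experts(logits, active_per_token)

# Only the chosen experts' weights join this token's forward pass, so
# active parameters scale roughly with k / num_experts, not the total.
assert len(chosen) == active_per_token
```

The same idea explains the 23B/236B ratio: each token is processed by a small, router-selected subset of experts, so inference cost tracks the active-parameter count rather than the full model size.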