AI-ML
Ollama v0.23.2
Detailed Description
## What's Changed
* `ollama launch` no longer includes Claude Desktop, because the third-party integration is limited to Anthropic models. Use `ollama launch claude-desktop --restore` to restore Claude Desktop to its normal state.
* `/api/show` responses are now cached, improving median latency by ~6.7x, which speeds up loading for integrations like VS Code.
* Improved the backup workflow when managing launch integrations
* Cleaner image generation layout in the MLX runner

Full Changelog:
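The release notes don't describe how the `/api/show` cache works internally; as a rough illustration of why caching a read-mostly endpoint cuts median latency, here is a minimal in-memory memoization sketch (hypothetical names, not Ollama's implementation):

```python
import time
from functools import lru_cache

# Hypothetical stand-in for the /api/show handler; the real server reads
# model metadata (template, parameters, license) on each request.
def load_model_info(model: str) -> dict:
    time.sleep(0.05)  # simulate the expensive metadata lookup
    return {"model": model, "format": "gguf"}

# Caching the response means repeat lookups for the same model skip the
# expensive path entirely, which is what drives a large median-latency win
# for clients (like editor integrations) that call the endpoint often.
@lru_cache(maxsize=128)
def show(model: str) -> dict:
    return load_model_info(model)
```

The first call for a given model pays the full cost; every later call returns the cached response in microseconds.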
Related Releases
AI-ML
Ollama v0.23.1
## Gemma 4 MTP (Multi-token Processing) for the MLX runner
Gemma 4 MTP speculative decoding is now supported on Macs. This can give over a 2x speed increase for the Gemma 4 31B model on coding tasks.
```
ollama run gemma4:31b-coding-mtp-bf16
```
## What's Changed
* Update MLX and MLX-C wit
AI-ML
Ollama v0.23.0
## Claude Desktop
Claude Desktop is now supported with Ollama Launch. Claude Cowork and Claude Code are supported within the Claude Desktop app.
```
ollama launch claude-desktop
```
### Claude Cowork