AI-ML
Ollama v0.30.0
Summary
This version of Ollama changes the architecture to build directly on llama.cpp instead of on top of GGML, while remaining compatible with the GGUF file format. MLX is used to accelerate model inference on Apple Silicon. While this is in pre-release, we'd love feedback.
Detailed Description
This version of Ollama changes the architecture to build directly on llama.cpp instead of on top of GGML, while remaining compatible with the GGUF file format. MLX is used to accelerate model inference on Apple Silicon. While this is in pre-release, we'd love feedback on:

* Performance improvements or degradation
* Errors or crashes that did not previously occur
* Memory utilization improvements or degradation

Known issues:

* `laguna-xs.2` is not supported yet on this pre-release
* `llama3.2-vision` is not supported yet on this pre-release

Installing:

Mac/Linux

```
curl -fsSL | OLLAMA_VERSION=0.30.0-rc15 sh
```

Windows

```
$env:OLLAMA_VERSION="0.30.0-rc15"; irm | iex
```
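The install commands above pin a specific pre-release by setting a version environment variable to a tag like `0.30.0-rc15`. As an illustration of how such a tag decomposes into its numeric parts and pre-release suffix, here is a minimal sketch (the `parse_version` helper is hypothetical and not part of Ollama):

```python
import re

def parse_version(tag: str):
    """Split a tag like '0.30.0-rc15' into (major, minor, patch, suffix).

    The suffix (e.g. 'rc15') is None for stable builds like '0.24.0'.
    This is an illustrative helper, not part of the Ollama tooling.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-(\w+))?", tag)
    if m is None:
        raise ValueError(f"unrecognized version tag: {tag}")
    major, minor, patch = (int(m.group(i)) for i in (1, 2, 3))
    return major, minor, patch, m.group(4)

print(parse_version("0.30.0-rc15"))  # (0, 30, 0, 'rc15')
```

A pinned `-rcN` suffix distinguishes this build from the stable releases listed below, which carry no suffix.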
Related Releases
Ollama v0.24.0
## What's Changed

* mlx: add memory trace logging by @dhiltgen in https://github.com/ollama/ollama/pull/16131
* launch: codex app integration by @ParthSareen in https://github.com/ollama/ollama/pull/16120

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.23.4...v0.24.0-rc0
Ollama v0.23.4
## What's Changed

* `ollama launch opencode` now supports vision models with image inputs
* Fixed formatting of Claude tool results when using local image paths

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.23.3...v0.23.4