- Fixed issue where qwen3-coder would act in raw mode when using `/api/generate` or `ollama run qwen3-coder <prompt>`
- Fixed qwen3-embedding providing invalid results
- Fixed an issue occurring when `num_gpu` is set
- Fixed issue where `tool_index` with a value of 0 would not be sent to the model

Experimental support for Vulkan is now available when you build locally from source. This enables additional GPUs from AMD and Intel that are not currently supported by Ollama. To build locally, install the Vulkan SDK, set `VULKAN_SDK` in your environment, and then follow the developer instructions. In a future release, Vulkan support will be included in the binary release as well. Please file issues if you run into any problems.
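As a rough illustration of the steps above (not the official instructions), a Vulkan-enabled local build might look like the sketch below. The `VULKAN_SDK` path is a placeholder for wherever your SDK is installed, and the cmake/Go steps assume the standard developer build flow described in the repository's development docs; exact steps may vary by platform and release.

```shell
# Sketch only: assumes the Vulkan SDK is already installed and that the
# usual developer build (CMake configure/build, then Go) applies.

# Point the build at your Vulkan SDK installation (placeholder path).
export VULKAN_SDK=/path/to/vulkan-sdk

# Get the source and build the native components.
git clone https://github.com/ollama/ollama.git
cd ollama
cmake -B build
cmake --build build

# Run the locally built server.
go run . serve
```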
Full Changelog: https://github.com/ollama/ollama/compare/v0.12.5...v0.12.6