chore(deps): update dependency ollama/ollama to v0.14.3 - autoclosed #3436

Closed
renovate-bot wants to merge 1 commit from renovate/ollama-ollama-0.x into main

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| ollama/ollama | minor | 0.13.3 → 0.14.3 |

⚠️ Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

ollama/ollama (ollama/ollama)

v0.14.3

Compare Source

[Image: Ollama screenshot, 2026-01-20]
  • Z-Image Turbo: 6 billion parameter text-to-image model from Alibaba’s Tongyi Lab. It generates high-quality photorealistic images.
  • Flux.2 Klein: Black Forest Labs’ fastest image-generation models to date.

New models

  • GLM-4.7-Flash: As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
  • LFM2.5-1.2B-Thinking: LFM2.5 is a new family of hybrid models designed for on-device deployment.

What's Changed

  • Fixed issue where Ollama's macOS app would interrupt system shutdown
  • Fixed ollama create and ollama show commands for experimental models
  • The /api/generate API can now be used for image generation (see the sketch after this list)
  • Fixed minor issues in Nemotron-3-Nano tool parsing
  • Fixed issue where removing an image generation model would cause it to first load
  • Fixed issue where ollama rm would only stop the first model in the list if it were running
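
As a rough illustration of the /api/generate item above, here is a minimal sketch of generating an image over the API. It assumes a local Ollama server on the default port and the usual model/prompt/stream request body; the model name and prompt are placeholders, and since these notes don't document how the generated image is returned, the script just prints whatever JSON comes back.

```python
# Minimal sketch (assumptions noted above): call /api/generate with an
# experimental image model. Model name and prompt are placeholders.
import json
import urllib.request

payload = {
    "model": "x/z-image-turbo",  # experimental image model mentioned in these notes
    "prompt": "a photorealistic lighthouse at sunset",
    "stream": False,             # request a single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # The release notes don't specify how the generated image is returned,
    # so inspect the raw response rather than assuming a field name.
    print(json.loads(resp.read().decode("utf-8")))
```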

Full Changelog: https://github.com/ollama/ollama/compare/v0.14.2...v0.14.3

v0.14.2

Compare Source

New models

  • TranslateGemma: A new collection of open translation models built on Gemma 3, helping people communicate across 55 languages.

What's Changed

  • Shift + Enter (or Ctrl + j) will now enter a newline in Ollama's CLI
  • Improved the /v1/responses API to better conform to the OpenResponses specification (see the sketch below)
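
A hedged sketch of the /v1/responses endpoint mentioned above: it assumes a local Ollama server and an OpenResponses-style model/input request body; the model name is a placeholder and the exact response schema isn't covered by these notes.

```python
# Hedged sketch (assumptions noted above): call the OpenAI-compatible
# /v1/responses endpoint with a minimal model/input body.
import json
import urllib.request

payload = {
    "model": "llama3.2",  # placeholder model name
    "input": "Reply with a one-sentence greeting.",
}

req = urllib.request.Request(
    "http://localhost:11434/v1/responses",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # Print the raw JSON; the exact response schema isn't documented here.
    print(json.loads(resp.read().decode("utf-8")))
```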

New Contributors

  • @yuhongsun96 made their first contribution in #13135
  • @koaning made their first contribution in #13326

Full Changelog: https://github.com/ollama/ollama/compare/v0.14.1...v0.14.2

v0.14.1

Compare Source

Image generation models (experimental)

Experimental image generation models are available for macOS and Linux (CUDA) in Ollama:

Available models

  • Z-Image-Turbo

    ollama run x/z-image-turbo

Note: x is a username on ollama.com where experimental models are uploaded

More models coming soon:

  1. Qwen-Image-2512
  2. Qwen-Image-Edit-2511
  3. GLM-Image

What's Changed

  • Fixed macOS auto-update signature verification failure

New Contributors

  • @joshxfi made their first contribution in #13711
  • @maternion made their first contribution in #13709

Full Changelog: https://github.com/ollama/ollama/compare/v0.14.0...v0.14.1

v0.14.0

Compare Source

What's Changed

  • ollama run --experimental CLI will now open a new Ollama CLI that includes an agent loop and the bash tool
  • Anthropic API compatibility: support for the /v1/messages API (see the sketch after this list)
  • A new REQUIRES command for the Modelfile allows declaring which version of Ollama is required for the model
  • For older models, Ollama will avoid an integer underflow on low VRAM systems during memory estimation
  • More accurate VRAM measurements for AMD iGPUs
  • Ollama's app will now highlight Swift source code
  • An error is now returned when embeddings contain NaN or -Inf values
  • Ollama's Linux install bundle files now use zst compression
  • New experimental support for image generation models, powered by MLX
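
For the Anthropic-compatible /v1/messages item above, a minimal sketch under stated assumptions: a local Ollama server, and a request body mirroring Anthropic's Messages shape (model, max_tokens, messages); the model name is a placeholder.

```python
# Minimal sketch (assumptions noted above): call the Anthropic-compatible
# /v1/messages endpoint with a single user message.
import json
import urllib.request

payload = {
    "model": "llama3.2",   # placeholder model name
    "max_tokens": 256,     # Anthropic-style output cap
    "messages": [
        {"role": "user", "content": "Say hello via the Anthropic-compatible API."}
    ],
}

req = urllib.request.Request(
    "http://localhost:11434/v1/messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # Print the raw JSON; the response layout isn't documented in these notes.
    print(json.loads(resp.read().decode("utf-8")))
```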

New Contributors

  • @Vallabh-1504 made their first contribution in #13550
  • @majiayu000 made their first contribution in #13596
  • @harrykiselev made their first contribution in #13615

Full Changelog: https://github.com/ollama/ollama/compare/v0.13.5...v0.14.0-rc2

v0.13.5

Compare Source

New Models

  • FunctionGemma: a specialized version of Google's Gemma 3 270M model, fine-tuned explicitly for function calling.

What's Changed

  • bert architecture models now run on Ollama's engine
  • Added built-in renderer & tool parsing capabilities for DeepSeek-V3.1
  • Fixed issue where nested properties in tools may not have been rendered properly

New Contributors

  • @familom made their first contribution in #13220
  • @nathannewyen made their first contribution in #13469

Full Changelog: https://github.com/ollama/ollama/compare/v0.13.4...v0.13.5

v0.13.4

Compare Source

New Models

  • Nemotron 3 Nano: A new Standard for Efficient, Open, and Intelligent Agentic Models
  • Olmo 3 and Olmo 3.1: A series of Open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.

What's Changed

  • Enable Flash Attention automatically for models by default
  • Fixed handling of long contexts with Gemma 3 models
  • Fixed issue that would occur with Gemma 3 QAT models or other models imported with the Gemma 3 architecture

New Contributors

  • @familom made their first contribution in #13220

Full Changelog: https://github.com/ollama/ollama/compare/v0.13.3...v0.13.4-rc0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

renovate-bot added 1 commit 2026-01-23 21:08:51 +00:00
chore(deps): update dependency ollama/ollama to v0.14.3
All checks were successful
lint-test-helm / lint-helm (pull_request) Successful in 14s
317a1cce95
renovate-bot force-pushed renovate/ollama-ollama-0.x from 317a1cce95 to 9932edf9bc 2026-01-23 22:16:32 +00:00 Compare
renovate-bot force-pushed renovate/ollama-ollama-0.x from 9932edf9bc to 6dcfff9fc4 2026-01-23 22:50:36 +00:00 Compare
renovate-bot force-pushed renovate/ollama-ollama-0.x from 6dcfff9fc4 to 455db82003 2026-01-23 23:06:06 +00:00 Compare
renovate-bot changed title from chore(deps): update dependency ollama/ollama to v0.14.3 to chore(deps): update dependency ollama/ollama to v0.14.3 - autoclosed 2026-01-23 23:17:06 +00:00
renovate-bot closed this pull request 2026-01-23 23:17:08 +00:00
All checks were successful
lint-test-helm / lint-helm (pull_request) Successful in 2m17s
render-manifests-automerge / render-manifests-automerge (pull_request) Has been skipped
render-manifests-merge / render-manifests-merge (pull_request) Has been skipped

Pull request closed
