---
title: Ollama
description: Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
hero:
  tagline: Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
  image:
    file: https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/ollama.webp
  actions:
    - text: Source
      link: https://github.com/ollama/ollama
      icon: right-arrow
    - text: Deployment Chart
      link: https://gitea.alexlebens.dev/alexlebens/infrastructure/src/branch/main/clusters/cl01tl/helm/ollama
      icon: right-arrow
---
# Purpose
Local AI server for running large language models.
# Notes
[Open WebUI](https://github.com/open-webui/open-webui) is used as the frontend.
Configured primarily to run the Gemma models.
A Tailscale connection to the desktop GPU is used for larger model processing.
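As a quick way to verify the server is reachable, the sketch below queries Ollama's HTTP generate endpoint directly. It assumes the default listen port `11434` and uses `gemma3` as a placeholder model tag; substitute the address exposed by this deployment and whichever Gemma tag is actually pulled.

```python
import json
import urllib.request

# Assumed endpoint: Ollama's default listen address; adjust if the service
# is exposed differently (e.g. via the cluster ingress or over Tailscale).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "gemma3",                       # placeholder tag; use the model that is pulled
    "prompt": "Say hello in one sentence.",
    "stream": False,                         # return one JSON object instead of a stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# With stream disabled, the generated text is returned in the "response" field.
print(body["response"])
```

Open WebUI talks to the same API; only the base URL it is pointed at changes when requests are routed over Tailscale to the desktop GPU.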