Compare commits

...

513 Commits

Author SHA1 Message Date
72da712906 change env
Some checks failed
process-repository / process-repository (push) Failing after 13s
renovate / renovate (push) Successful in 36s
2025-07-15 23:05:37 -05:00
c7871ee4b6 change env
All checks were successful
renovate / renovate (push) Successful in 1m8s
2025-07-15 23:01:36 -05:00
3d6d0a1cfb change env
All checks were successful
renovate / renovate (push) Successful in 33s
2025-07-15 22:31:37 -05:00
b46e63218d enable dispatch
All checks were successful
renovate / renovate (push) Successful in 28s
2025-07-15 22:02:15 -05:00
d37c77f244 change paths
Some checks failed
renovate / renovate (push) Successful in 36s
process-repository / process-repository (push) Failing after 9s
2025-07-14 22:31:34 -05:00
3a1a432005 use single workflow script
Some checks failed
process-repository / process-repository (push) Failing after 18s
renovate / renovate (push) Successful in 43s
process-pull-requests / process-pull-requests (push) Failing after 13s
2025-07-13 23:44:36 -05:00
77a3e4a926 use tag ids
All checks were successful
process-pull-requests / process-pull-requests (push) Successful in 11s
process-issues / process-issues (push) Successful in 10s
renovate / renovate (push) Successful in 32s
2025-07-11 21:47:57 -05:00
b88454612b Merge pull request 'Update ghcr.io/squat/generic-device-plugin:latest Docker digest to 1f77944' (#109) from renovate/ghcr.io-squat-generic-device-plugin-latest into main
Some checks failed
release-charts-generic-device-plugin / release (push) Successful in 22s
process-pull-requests / process-pull-requests (push) Successful in 20s
renovate / renovate (push) Successful in 37s
process-issues / process-issues (push) Failing after 17s
Reviewed-on: #109
2025-07-11 01:44:17 +00:00
57e5184bee Update ghcr.io/squat/generic-device-plugin:latest Docker digest to 1f77944
All checks were successful
lint-and-test / lint-test (pull_request) Successful in 30s
2025-07-11 00:00:53 +00:00
a789214d01 Merge pull request 'Update cloudflare/cloudflared Docker tag to v2025.7.0' (#108) from renovate/cloudflare-cloudflared-2025.x into main
Some checks failed
release-charts-cloudflared / release (push) Successful in 26s
process-pull-requests / process-pull-requests (push) Successful in 12s
process-issues / process-issues (push) Failing after 12s
renovate / renovate (push) Successful in 41s
Reviewed-on: #108
2025-07-05 04:53:55 +00:00
cbe22fc5e4 Update cloudflare/cloudflared Docker tag to v2025.7.0
All checks were successful
lint-and-test / lint-test (pull_request) Successful in 36s
2025-07-04 00:03:54 +00:00
617fcc0ef8 update common chart
Some checks failed
release-charts-generic-device-plugin / release (push) Successful in 18s
process-pull-requests / process-pull-requests (push) Successful in 12s
process-issues / process-issues (push) Failing after 12s
renovate / renovate (push) Successful in 43s
2025-06-28 16:24:45 -05:00
b9727e4afc update workflows 2025-06-28 16:24:19 -05:00
e5c767b6c5 update common chart
All checks were successful
renovate / renovate (push) Successful in 50s
release-charts-cloudflared / release (push) Successful in 33s
2025-06-28 16:15:13 -05:00
f95dd80e3a Merge pull request 'Update ghcr.io/renovatebot/renovate Docker tag to v41' (#106) from renovate/ghcr.io-renovatebot-renovate-41.x into main
Some checks failed
process-pull-requests / process-pull-requests (push) Successful in 13s
process-issues / process-issues (push) Failing after 11s
renovate / renovate (push) Successful in 48s
Reviewed-on: #106
2025-06-20 04:49:47 +00:00
a56d7a435c Update ghcr.io/renovatebot/renovate Docker tag to v41
Some checks failed
lint-and-test / lint-test (pull_request) Failing after 2s
2025-06-20 04:32:59 +00:00
222a273671 add tags
Some checks failed
renovate / renovate (push) Successful in 27s
process-pull-requests / process-pull-requests (push) Successful in 7s
process-issues / process-issues (push) Failing after 7s
2025-06-18 22:48:05 -05:00
c4345f3e7b Merge pull request 'Update cloudflare/cloudflared Docker tag to v2025.6.1' (#105) from renovate/cloudflare-cloudflared-2025.x into main
All checks were successful
release-charts-cloudflared / release (push) Successful in 45s
process-pull-requests / process-pull-requests (push) Successful in 8s
renovate / renovate (push) Successful in 1m34s
process-issues / process-issues (push) Successful in 7s
Reviewed-on: #105
2025-06-18 05:18:28 +00:00
be5dee1fd8 Update cloudflare/cloudflared Docker tag to v2025.6.1
All checks were successful
lint-and-test / lint-test (pull_request) Successful in 42s
2025-06-18 00:01:27 +00:00
595f234afa fix workflows
All checks were successful
process-pull-requests / process-pull-requests (push) Successful in 6s
process-issues / process-issues (push) Successful in 6s
renovate / renovate (push) Successful in 1m11s
2025-06-12 14:45:29 -05:00
6214d8a397 update dependency
All checks were successful
renovate / renovate (push) Successful in 35s
release-charts-cloudflared / release (push) Successful in 17s
release-charts-generic-device-plugin / release (push) Successful in 17s
2025-06-12 14:43:29 -05:00
69ab6f82a0 fix workflwos
All checks were successful
renovate / renovate (push) Successful in 42s
2025-06-12 13:18:28 -05:00
376ea6ee88 bump version
All checks were successful
release-charts-cloudflared / release (push) Successful in 27s
renovate / renovate (push) Successful in 44s
2025-06-12 13:09:19 -05:00
1c9b2e93f4 update dependency
Some checks failed
release-charts-generic-device-plugin / release (push) Successful in 29s
release-charts-cloudflared / release (push) Successful in 29s
renovate / renovate (push) Has been cancelled
2025-06-12 13:08:51 -05:00
83ef3d23cb update dependency 2025-06-12 13:08:09 -05:00
8f2c262845 Merge pull request 'Update cloudflare/cloudflared Docker tag to v2025.6.0' (#103) from renovate/cloudflare-cloudflared-2025.x into main
All checks were successful
release-charts-cloudflared / release (push) Successful in 19s
renovate / renovate (push) Successful in 33s
Reviewed-on: #103
2025-06-12 18:07:04 +00:00
4f9ab170f4 update app version
All checks were successful
lint-and-test / lint-test (pull_request) Successful in 1m8s
2025-06-12 13:05:20 -05:00
ad5d06b065 Update cloudflare/cloudflared Docker tag to v2025.6.0
All checks were successful
lint-and-test / lint-test (pull_request) Successful in 1m44s
2025-06-12 00:00:49 +00:00
50cf277ecb remove workflow
All checks were successful
process-issues / process-issues (push) Successful in 7s
process-pull-requests / process-pull-requests (push) Successful in 7s
renovate / renovate (push) Successful in 42s
2025-06-10 16:50:55 -05:00
e4795f1041 add new workflows
All checks were successful
renovate / renovate (push) Successful in 27s
2025-06-10 16:46:38 -05:00
dc64cb498e always run on pr
Some checks failed
renovate / renovate (push) Successful in 27s
tag-old-issues / tag-old-issues (push) Failing after 1m9s
2025-06-09 13:35:20 -05:00
9646667d75 fix repo
All checks were successful
renovate / renovate (push) Successful in 43s
2025-06-09 12:57:22 -05:00
1b68fcabf5 limit repo
All checks were successful
renovate / renovate (push) Successful in 47s
2025-06-09 12:56:37 -05:00
d95b7ef6ac add workflow to tag old issues
All checks were successful
renovate / renovate (push) Successful in 1m21s
2025-06-09 12:32:25 -05:00
8f92b4b3ef downgrade priority
All checks were successful
renovate / renovate (push) Successful in 3m26s
2025-06-08 23:28:51 -05:00
2d04080009 fix topic
All checks were successful
renovate / renovate (push) Successful in 1m23s
2025-06-08 23:24:07 -05:00
b63140e74f change ntfy workflow
All checks were successful
renovate / renovate (push) Successful in 6m54s
2025-06-08 23:03:31 -05:00
e430d3fe32 fix url
All checks were successful
renovate / renovate (push) Successful in 2m14s
2025-06-08 19:01:14 -05:00
8e748b7084 change lint test
All checks were successful
renovate / renovate (push) Successful in 2m15s
2025-06-07 18:17:27 -05:00
f339e8698c fix argument
All checks were successful
renovate / renovate (push) Successful in 2m40s
2025-06-06 18:24:15 -05:00
fbc9293355 add option
All checks were successful
renovate / renovate (push) Successful in 1m49s
2025-06-06 18:08:44 -05:00
2371aeb612 add bumpversion
All checks were successful
renovate / renovate (push) Successful in 3m54s
2025-06-06 17:47:53 -05:00
799340aa3b change naming
All checks were successful
release-charts-gitea-actions / release (push) Successful in 17s
renovate / renovate (push) Successful in 1m55s
2025-06-06 14:58:59 -05:00
9da5f721c7 fix missing values
All checks were successful
release-charts-gitea-actions / release (push) Successful in 17s
renovate / renovate (push) Successful in 1m50s
2025-06-06 14:47:34 -05:00
aa919178a4 change name
All checks were successful
release-charts-gitea-actions / release (push) Successful in 15s
renovate / renovate (push) Successful in 1m46s
2025-06-06 14:31:50 -05:00
55e878d517 remove unused values
All checks were successful
release-charts-gitea-actions / release (push) Successful in 15s
renovate / renovate (push) Successful in 1m12s
2025-06-06 14:23:05 -05:00
3683209b23 release chart
All checks were successful
release-charts-gitea-actions / release (push) Successful in 33s
renovate / renovate (push) Successful in 2m18s
2025-06-06 14:14:54 -05:00
2be7e3789c add release workflow
All checks were successful
renovate / renovate (push) Successful in 1m5s
2025-06-06 14:07:50 -05:00
f5bb3e2403 add gitea actions
Some checks failed
renovate / renovate (push) Has been cancelled
2025-06-06 14:05:44 -05:00
0ef4b6ba3c upgrade chart
All checks were successful
release-charts-generic-device-plugin / release (push) Successful in 14s
renovate / renovate (push) Successful in 1m16s
2025-06-04 21:08:11 -05:00
7f46106a10 add renovate
All checks were successful
renovate / renovate (push) Successful in 3m31s
2025-06-04 21:03:47 -05:00
71dbdbf9df bump chart version
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 15s
2025-05-29 16:40:26 -05:00
1e17a769dc change default schedule recomend
Some checks failed
release-charts-cloudfbarman-cloudlared / release (push) Failing after 5s
release-charts-postgres-cluster / release (push) Successful in 15s
2025-05-28 14:45:19 -05:00
78024a129f fix sync issues
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 24s
2025-05-24 12:42:38 -05:00
5cca3b2717 add barman
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 17s
2025-05-24 12:38:46 -05:00
a70137cfbd fix serername
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 21s
2025-05-24 12:07:30 -05:00
dc4df55373 fix client mountg
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 35s
2025-05-24 12:02:30 -05:00
a3f42e13ce fix client mount
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 28s
2025-05-24 11:57:30 -05:00
a48262f115 upgrade chart
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 16s
2025-05-24 11:52:07 -05:00
bd458a3a3d fix service account
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 27s
2025-05-24 11:49:16 -05:00
3aa9113d24 fix service account
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 19s
2025-05-24 11:45:45 -05:00
1fe8881dfb update values
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 21s
2025-05-24 11:41:21 -05:00
fa6067e68b add workflow
All checks were successful
release-charts-cloudfbarman-cloudlared / release (push) Successful in 14s
2025-05-24 11:37:32 -05:00
8a50f22e31 add barman 2025-05-24 11:35:29 -05:00
deaa0c94d8 add default endpoint
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 44s
2025-05-24 03:16:01 -05:00
e251ff65ef add default endpoint
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 23s
2025-05-24 03:12:17 -05:00
245212e878 fix issues, no default backups
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 18s
2025-05-24 03:09:47 -05:00
a7150e1d20 fix boolean
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 39s
2025-05-24 02:15:47 -05:00
8d67cc9209 change values handling in backup
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 12s
2025-05-24 02:07:42 -05:00
e57f859564 change method
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 18s
2025-05-24 01:41:10 -05:00
e98973b467 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 17s
2025-05-24 01:35:57 -05:00
cb5c199d03 fix name helper
Some checks failed
release-charts-postgres-cluster / release (push) Failing after 18s
2025-05-24 01:19:55 -05:00
df4bb2acd7 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 19s
2025-05-24 01:14:01 -05:00
7f494fcc1e fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 25s
2025-05-24 01:12:11 -05:00
337aee6940 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 24s
2025-05-24 01:09:03 -05:00
74c2bca3ae fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 36s
2025-05-24 01:06:02 -05:00
e1a2ee71f8 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 21s
2025-05-24 01:03:23 -05:00
37478087d4 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 32s
2025-05-24 01:01:06 -05:00
9af2f7d52a fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 27s
2025-05-24 00:57:59 -05:00
ab89f723a7 fix name helper
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 24s
2025-05-24 00:54:42 -05:00
884cae31a3 update to use object store crd
Some checks failed
release-charts-postgres-cluster / release (push) Failing after 1m9s
2025-05-24 00:39:04 -05:00
9c2afe436d add ntfy action 2025-05-17 20:55:52 -05:00
e0b707fa32 upgrade common chart
All checks were successful
release-charts-cloudflared / release (push) Successful in 36s
2025-05-16 15:54:03 -05:00
2b02da90fd update image
All checks were successful
release-charts-cloudflared / release (push) Successful in 31s
2025-05-15 20:21:36 -05:00
225ffc6c7e update image
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 13s
2025-05-14 23:07:11 -05:00
fa470296b9 fix recovery method
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 12s
2025-05-14 13:29:36 -05:00
336a6f2815 change check
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 13s
2025-05-13 21:10:49 -05:00
406737ed6a fix cluster name
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 20s
2025-05-13 21:04:25 -05:00
ffcd5139ef change labels
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 47s
2025-05-13 20:58:06 -05:00
69a554bd9d change include
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 18s
2025-05-13 20:40:33 -05:00
2aacb4115a change include
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 29s
2025-05-13 20:37:30 -05:00
56d7b063bd change helpers
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 39s
2025-05-13 20:29:07 -05:00
1ca985edc7 rebase this chart on cnpg provided chart
All checks were successful
release-charts-postgres-cluster / release (push) Successful in 18s
2025-05-13 00:14:16 -05:00
47d7604aac change repo
All checks were successful
release-charts-cloudflared / release (push) Successful in 15s
2025-05-01 22:14:19 -05:00
ecf6e80a20 update image
Some checks failed
release-charts-cloudflared / release (push) Failing after 7s
2025-05-01 22:06:54 -05:00
f6bc5f42a5 change release
All checks were successful
release-charts-cloudflared / release (push) Successful in 18s
2025-04-11 15:52:31 -05:00
1b28dbf3db update image
All checks were successful
release-charts-cloudflared / release (push) Successful in 1m19s
2025-04-03 22:11:32 -05:00
0f2d18fc7a update repo config 2025-03-20 01:15:16 -05:00
0c093bd754 update workflows 2025-03-19 12:14:02 -05:00
0c8d26e3eb organize 2025-03-14 21:41:10 -05:00
82d93fc450 change config 2025-03-14 20:32:55 -05:00
2657f162c4 proper path 2025-03-14 16:07:39 -05:00
b7d53203da add github workflow 2025-03-14 16:05:53 -05:00
21a646dabd add name 2025-03-14 15:53:55 -05:00
0d15a1dadd change tag 2025-03-14 15:52:37 -05:00
a7fe403702 change env 2025-03-14 15:47:53 -05:00
34957e0c18 export env proper 2025-03-14 15:45:08 -05:00
a9286227f7 use workflow 2025-03-14 15:38:00 -05:00
3f6faacaa1 change dir 2025-03-14 15:34:09 -05:00
5817f674f4 remove github workflow 2025-03-14 15:33:08 -05:00
2786520504 extract metadata 2025-03-14 15:31:38 -05:00
c93f608874 change paths 2025-03-14 15:21:56 -05:00
4164f50bce update common chart 2025-03-14 15:21:04 -05:00
c060846f7b add plugin 2025-03-14 15:18:10 -05:00
673a8c686f use push 2025-03-14 15:15:39 -05:00
707cb159b9 change path 2025-03-14 15:12:39 -05:00
90a61573bc convert to use gitea docs 2025-03-14 15:06:19 -05:00
ad1fa6786a disable prov 2025-03-14 14:59:26 -05:00
28ed0e8735 fix path 2025-03-14 14:53:51 -05:00
0e3de3cca7 build helm depend 2025-03-14 14:37:31 -05:00
53f37bc75a update workflows 2025-03-14 14:34:07 -05:00
01d96d9a25 add path 2025-03-14 14:33:51 -05:00
76823dc414 update common
Some checks failed
Release Charts / release (push) Failing after 20s
2025-03-14 13:30:26 -05:00
f97b6ab657 change workflow 2025-03-14 13:23:53 -05:00
4bee2a675c update image
Some checks failed
Release Charts / release (push) Failing after 25s
2025-03-14 13:10:01 -05:00
0094b5611f add workflows 2025-03-14 12:26:23 -05:00
bb7fb1eadb disable workflows 2025-03-14 11:13:28 -05:00
99ed8cce53 change config 2025-03-13 23:02:05 -05:00
02bec682c2 update library chart 2025-03-05 17:56:08 -06:00
c549882df9 update image 2025-03-03 11:17:13 -06:00
e28f44b697 update image 2025-03-03 11:16:15 -06:00
78afcf24d3 update version 2025-02-26 13:57:44 -06:00
86e87dbbba add dep name 2025-02-26 13:55:58 -06:00
39134cbd95 use deb version 2025-02-26 13:54:41 -06:00
9f66bd588c remove days 2025-02-26 13:38:48 -06:00
81aac4790e update image 2025-02-17 20:19:32 -06:00
renovate[bot]
94b6b4b0fb Update helm/chart-releaser-action action to v1.7.0 (#76)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-17 20:17:54 -06:00
renovate[bot]
27edd0a1ef Update helm/chart-testing-action action to v2.7.0 (#77)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-02-17 20:17:48 -06:00
94184ea569 update chart 2025-02-17 20:17:36 -06:00
08473fc265 update image 2025-02-17 20:16:25 -06:00
81d3ecf237 adjust schedule 2025-01-30 21:08:38 -06:00
8392d67790 update chart 2025-01-17 17:23:14 -06:00
3f06bf148c update image 2025-01-17 17:22:00 -06:00
5259488c05 chagne resources 2025-01-08 17:39:10 -06:00
09c693d371 reduce resource request 2025-01-08 15:50:21 -06:00
ec6f44c6bc change resource 2025-01-08 15:33:59 -06:00
35f331e29a fix helm/prom bracket interaction 2025-01-08 15:20:28 -06:00
3b0481fcb1 add default rules 2025-01-07 14:22:25 -06:00
e2dfd70dc4 change default resources 2025-01-07 13:45:34 -06:00
ffc253ef7d add description of values 2024-12-30 17:10:54 -06:00
77dd85362e update dependency chart 2024-12-30 17:04:09 -06:00
d5bb83bf84 add description of values 2024-12-30 17:03:45 -06:00
11d3dd927b update dependency chart 2024-12-30 17:00:37 -06:00
1b67b5cbb6 add description of values 2024-12-30 16:59:49 -06:00
56fe199fb9 add precommit hooks 2024-12-30 16:55:01 -06:00
8ec7f590b2 upgrade base image to 17 2024-12-24 21:08:05 -06:00
d2444fb544 set switch for superuser 2024-12-22 17:29:30 -06:00
202a534e8e fix missing field 2024-12-21 23:48:11 -06:00
c36e4e371f reorganize values 2024-12-21 23:40:21 -06:00
1ac9444bb2 fix condition flow 2024-12-21 23:29:50 -06:00
275fcd8568 use cluster values 2024-12-21 23:26:40 -06:00
158d4ca676 change method 2024-12-21 23:22:34 -06:00
32e232d8e2 force hardcoded value for testing 2024-12-21 23:08:17 -06:00
93d2f916fb use value for name 2024-12-21 22:53:59 -06:00
b1a6a2fd39 remove condition 2024-12-21 22:46:17 -06:00
d3307d4f70 use different function 2024-12-21 22:39:52 -06:00
1b7018d3bd fix database naming 2024-12-21 22:31:00 -06:00
b75721ae1d add option to specifiy database name for replica 2024-12-21 22:20:09 -06:00
renovate[bot]
e0e4f6ee8a Update renovate/renovate Docker tag to v39 (#71)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-21 19:55:23 -06:00
renovate[bot]
7dd80d4528 Migrate config .github/renovate.json (#72)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-12-21 19:55:16 -06:00
24af841f19 update workflows 2024-12-21 18:11:39 -06:00
16211d4c62 remove schedule 2024-12-21 18:11:29 -06:00
513c46c957 change to midnight daily 2024-12-20 19:33:25 -06:00
3fad4e4ff0 update image 2024-12-20 19:25:40 -06:00
1f867e0276 update image 2024-12-20 19:25:03 -06:00
601790ab7a change backup schedule 2024-12-19 14:50:00 -06:00
16ebdda6a4 update image 2024-12-19 13:59:37 -06:00
dbf8f14512 update image 2024-12-19 13:58:37 -06:00
22dcd7a14c update image 2024-12-16 10:31:56 -06:00
8862d97c27 change retention policy 2024-12-12 11:12:58 -06:00
1f4cd543c0 bump chart version 2024-11-23 22:40:06 -06:00
4aac272e98 update image 2024-11-23 22:39:06 -06:00
b8602fb919 update image to 16.6 2024-11-23 22:38:36 -06:00
fb34897269 update image 2024-10-19 00:58:50 -05:00
ec27eff4da add priority class name and tolerations 2024-10-13 12:39:03 -05:00
2b31df483e listen on all addresses 2024-10-12 23:35:08 -05:00
53191f1d68 add generic device plugin 2024-10-12 23:18:07 -05:00
172526fb79 update common chart 2024-10-11 19:03:23 -05:00
5d5aad265a fix settings for tensorchord type 2024-09-28 16:43:45 -05:00
84af71da49 add tag for postgres version 2024-09-28 02:07:28 -05:00
ab3ca49103 add tensorchord type 2024-09-28 02:05:34 -05:00
8b2342d1c2 bump chart version 2024-09-27 21:29:54 -05:00
9107020db2 update chart and image 2024-09-27 21:28:05 -05:00
3ecef5f8d1 add options for tagging 2024-09-27 21:27:01 -05:00
renovate[bot]
e5b1b733fe Update cloudflare/cloudflared Docker tag to v2024.8.3 (#63)
* Update cloudflare/cloudflared Docker tag to v2024.8.3

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-08-24 01:30:19 -05:00
843e37e233 update postresql image 2024-08-19 16:42:54 -05:00
ee944a6b83 update image 2024-08-19 16:41:19 -05:00
renovate[bot]
5fe95ea7ad Update renovate/renovate Docker tag to v38 (#62)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-19 16:40:33 -05:00
6a33a670e1 update common chart 2024-08-19 16:40:16 -05:00
27cdfd742e remove mysql-cluster 2024-08-19 15:31:01 -05:00
9f68b30a31 change condition handling 2024-07-08 12:09:29 -05:00
668d50dfdb add conditional check for postinit 2024-07-04 22:52:02 -05:00
93a232947e increment chart 2024-07-04 22:45:41 -05:00
667236239d fix backup fields 2024-07-04 22:45:18 -05:00
875f0c143c fix backup fields 2024-07-04 22:41:31 -05:00
670b6e600c add conditional check for values 2024-07-01 18:08:23 -05:00
6f5b5ffcb4 change value inseration 2024-07-01 18:08:23 -05:00
renovate[bot]
295a7296bc Update cloudflare/cloudflared Docker tag to v2024.6.1 (#60)
* Update cloudflare/cloudflared Docker tag to v2024.6.1

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-06-28 17:01:04 -05:00
f1b4020287 change flow control 2024-06-22 18:26:19 -05:00
969357a664 change null handling 2024-06-22 18:22:25 -05:00
5685190e43 remove field not declared in schema 2024-06-22 18:18:03 -05:00
5e88f116fc disable rules by default 2024-06-22 17:58:43 -05:00
f99ebfaa44 change initdb keys 2024-06-14 21:37:00 -05:00
64e3612762 fix init keys 2024-06-14 21:30:54 -05:00
a6821995ca fix post init location 2024-06-14 21:23:48 -05:00
4291c3d18c add options for postgresql init 2024-06-14 21:17:45 -05:00
renovate[bot]
3f1fc33123 Update cloudflare/cloudflared Docker tag to v2024.6.0 (#59)
* Update cloudflare/cloudflared Docker tag to v2024.6.0

* bump chart version

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-06-13 11:01:03 -05:00
fad13607e6 bump chart version 2024-06-13 10:58:31 -05:00
a1811097c0 add resources to values 2024-06-13 10:58:01 -05:00
6b850205ad move image to values 2024-06-13 10:56:49 -05:00
d075a47f03 remove hookshot 2024-06-13 10:53:52 -05:00
0a437d983d remove discord 2024-06-13 10:53:44 -05:00
7058201439 remove whatsapp 2024-06-13 10:53:29 -05:00
42cd8834b9 remove outline 2024-06-01 18:13:49 -05:00
2cda957b4c remove taiga 2024-06-01 18:13:42 -05:00
238d01c5e4 remove kublet cert 2024-05-30 11:59:29 -05:00
9f0fae9fdf remove qbittorrent 2024-05-30 11:59:13 -05:00
d2f062e3db remove unpackerr 2024-05-30 11:58:57 -05:00
a1c9367b6d remove penpot 2024-05-30 11:58:45 -05:00
9857d61093 change command 2024-05-28 15:28:00 -05:00
cfe7ebea99 force http2 connection 2024-05-28 15:09:36 -05:00
aface2b57d add cloudflared 2024-05-28 14:03:55 -05:00
8158d1689c remove cops 2024-05-28 13:12:05 -05:00
276921cf8a remove calibre server 2024-05-28 13:11:55 -05:00
e420e092c9 remove tdarr 2024-05-28 10:50:06 -05:00
e20049fc8c remove home-assistant 2024-05-27 22:00:05 -05:00
37ba06acc7 remove tubearchvist plugin 2024-05-27 21:27:28 -05:00
02228e31cc remove tubearchivist 2024-05-27 21:27:18 -05:00
6708443275 remove libation 2024-05-27 20:41:01 -05:00
987cedb98a remove homepage 2024-05-27 20:40:52 -05:00
7f0fd5d5c7 remove freshrss 2024-05-27 20:40:44 -05:00
d381bdee39 change renovate config 2024-05-27 20:40:35 -05:00
ed4a43cd31 add archive folder to ignore 2024-05-27 20:40:27 -05:00
1b01ed0ba2 remove registration secret 2024-05-26 15:43:20 -05:00
58151e21aa remove registration secret 2024-05-26 15:43:14 -05:00
3f2615097f remove registration 2024-05-26 15:31:12 -05:00
a8bbc84740 remove registration 2024-05-26 15:28:21 -05:00
a8b3615f2f change conf 2024-05-24 21:50:27 -05:00
590b095a32 change image version 2024-05-24 21:00:48 -05:00
5d2cdc9648 update dependencies 2024-05-20 12:13:43 -05:00
99c106bd63 update dependencies 2024-05-20 12:13:33 -05:00
e6938fe645 bump version 2024-05-20 12:12:00 -05:00
7f5d870579 update dependencies 2024-05-18 14:40:42 -05:00
6cf2db87f4 update dependencies 2024-05-18 14:40:13 -05:00
537d9bd125 update dependencies 2024-05-18 14:39:55 -05:00
9627287f30 update base image 2024-05-17 12:08:15 -05:00
dd724b5b32 update base image 2024-05-17 12:06:04 -05:00
cd91a16c75 pass destinationPath through to values 2024-05-16 17:19:41 -05:00
69900d3931 update image version 2024-05-16 13:57:33 -05:00
f80cec8c82 change renovate config 2024-05-16 13:51:53 -05:00
f3d629fe00 add namespace to authentik proxy 2024-05-16 13:44:28 -05:00
4d3574ffa8 add namespace to authentik proxy 2024-05-16 13:44:20 -05:00
f98268fd25 add namespace to authentik proxy 2024-05-16 13:44:09 -05:00
7514ea022e bump chart version 2024-05-16 13:15:13 -05:00
a65a0dbcec change timezone 2024-05-16 13:01:35 -05:00
6bc5aea01f update dependencies 2024-05-16 12:48:42 -05:00
80940910a9 update dependencies 2024-05-16 12:48:19 -05:00
6895b078b5 update image version 2024-05-16 12:47:41 -05:00
27e70a1786 update image version 2024-05-16 12:46:44 -05:00
de21d07a5d update image version 2024-05-16 12:45:49 -05:00
58cc48724b update image version 2024-05-16 12:45:15 -05:00
8a357574e9 update dependencies 2024-05-16 12:44:35 -05:00
220e9e011b update image version 2024-05-16 12:42:57 -05:00
9483523eb8 update dependencies 2024-05-16 12:42:04 -05:00
ca205a8802 update dependencies 2024-05-16 12:41:41 -05:00
36267ada6f update middleware api 2024-05-16 12:35:39 -05:00
153b7a1ad2 update middleware api 2024-05-16 12:35:27 -05:00
9b30408661 update middleware api 2024-05-16 12:35:04 -05:00
947120d73c fix backup schedule 2024-04-26 14:35:54 -06:00
a62e24142c add mysql cluster 2024-04-26 14:05:21 -06:00
03c825e816 change s3 path 2024-04-26 10:00:10 -06:00
38c2be01f9 remove kyoo 2024-04-25 12:45:25 -06:00
renovate[bot]
5ac88f9aa8 Update homeassistant/home-assistant Docker tag to v2024.4.4 (#44)
* Update homeassistant/home-assistant Docker tag to v2024.4.4

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-23 16:43:29 -06:00
renovate[bot]
3c3f1bdb76 Update Helm release redis to v19.1.3 (#43)
* Update Helm release redis to v19.1.3

* update chart versions

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-23 15:57:32 -06:00
renovate[bot]
718acdc607 Update Helm release rabbitmq to v14.0.2 (#42)
* Update Helm release rabbitmq to v14.0.2

* update chart

* remove tailing whitespace

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-23 03:34:32 -06:00
renovate[bot]
71a5d81c09 Update bbilly1/tubearchivist-jf Docker tag to v0.2.0 (#41)
* Update bbilly1/tubearchivist-jf Docker tag to v0.2.0

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-23 03:32:57 -06:00
renovate[bot]
e2d4c395e5 Update Helm release elasticsearch to v21 (#40)
* Update Helm release elasticsearch to v21

* update elastic search chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-22 06:43:28 -06:00
fd611813b7 add annotations to deployment 2024-04-21 06:40:46 -06:00
ab5da15b10 remove lazy-librarian 2024-04-21 04:59:41 -06:00
e584566dde fix app version 2024-04-21 04:03:32 -06:00
f06aa3a175 add lazy-librarian 2024-04-21 03:59:25 -06:00
9abeba8f9d add penpot 2024-04-19 21:34:56 -06:00
1f498323a4 change postgres settings to import only password from secret 2024-04-19 05:41:38 -06:00
646e3a2c36 grant read to unlogged users 2024-04-19 05:32:28 -06:00
197ca6ef81 add quotes around default vhost of / 2024-04-19 05:22:14 -06:00
b8780a7339 add default vhost 2024-04-19 05:05:10 -06:00
b90968ea85 fix scanner image name 2024-04-19 05:01:23 -06:00
d3275f8067 create switch if various api keys are provided 2024-04-19 04:58:46 -06:00
649f362824 remove extra configuration from rabbitmq 2024-04-19 04:54:03 -06:00
732761d73b fix default vhost 2024-04-19 04:50:49 -06:00
0e7627cb7d fix oidc values path 2024-04-19 04:41:22 -06:00
d81c246b35 add kyoo 2024-04-19 04:08:55 -06:00
renovate[bot]
b97dd1f892 Update Helm release redis to v19.1.2 (#39)
* Update Helm release redis to v19.1.2

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-18 22:02:37 -06:00
0b8374753d change default cpu limit 2024-04-18 05:49:15 -06:00
cb29afdcb2 fix source server naming 2024-04-18 05:35:06 -06:00
4f366535c3 change default cpu limit 2024-04-18 04:35:42 -06:00
f32ef77551 add additional options for recovery 2024-04-18 03:43:52 -06:00
d02f649164 remove default option in bootstrap helper 2024-04-18 03:27:00 -06:00
3b50ca2bfe fix comparision operator position 2024-04-18 01:51:06 -06:00
17796a1183 increment chart version 2024-04-18 01:48:03 -06:00
512b1d4243 set default value for comparision 2024-04-18 01:47:46 -06:00
a2b0cdd5b6 fix ordering of comparision operator 2024-04-18 01:39:33 -06:00
e79af169b9 calculate length of array separately 2024-04-18 01:35:19 -06:00
661f9342b9 fix length measurement of database 2024-04-18 01:17:16 -06:00
9d1244c7a1 remove patch from image tag 2024-04-18 01:07:36 -06:00
0dc50bf88f change default cluster name to start with release 2024-04-17 20:01:47 -06:00
75accbbf87 use semver function to pull major version into cluster name 2024-04-17 20:00:06 -06:00
19fbd95a79 change templating for cluster naming 2024-04-17 19:45:08 -06:00
d73c42fd42 change default values 2024-04-17 19:15:54 -06:00
renovate[bot]
6399a8ca97 Update Helm release rabbitmq to v14 (#34)
* Update Helm release rabbitmq to v14

* update chart

* align comments for readability

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-17 19:13:57 -06:00
renovate[bot]
580c7da73a Update Helm release redis to v19.1.1 (#18)
* Update Helm release redis to v19.1.1

* update charts

* fix indentation

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-17 19:09:36 -06:00
renovate[bot]
11d47799f1 Update dock.mau.dev/mautrix/whatsapp Docker tag to v0.10.7 (#36)
* Update dock.mau.dev/mautrix/whatsapp Docker tag to v0.10.7

* update helm chart

* fix indentation

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-17 19:05:30 -06:00
renovate[bot]
7d825da72d Update linuxserver/code-server Docker tag to v4.23.1 (#35)
* Update linuxserver/code-server Docker tag to v4.23.1

* update helm chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-17 19:05:16 -06:00
renovate[bot]
adf49292bd Update halfshot/matrix-hookshot Docker tag to v5.3.0 (#38)
* Update halfshot/matrix-hookshot Docker tag to v5.3.0

* update chart

* fix linting errors

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: alexlebens <alexanderlebens@gmail.com>
2024-04-17 19:03:21 -06:00
renovate[bot]
63e69df14a Update ghcr.io/gethomepage/homepage Docker tag to v0.8.12 (#37)
* Update ghcr.io/gethomepage/homepage Docker tag to v0.8.12

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-17 18:55:36 -06:00
7bd8a4525a if oidc is enabled add an ingress path to the backend 2024-04-17 04:42:51 -06:00
a860789056 add env to front deployment about oidc enablement 2024-04-15 03:31:49 -06:00
58f89640a8 fix naming of changed rabbitmq charts 2024-04-15 02:47:45 -06:00
132e086d6d change rabbitmq chart naming to generate proper dns and app names 2024-04-15 02:44:10 -06:00
617505ee99 fix length of app port 2024-04-13 23:37:58 -06:00
34a21702ab fix http service value 2024-04-13 23:32:48 -06:00
15d3253af9 fix events app port to service and port 2024-04-13 23:28:03 -06:00
90970ef172 fix events health endpoint 2024-04-13 23:19:59 -06:00
0d6f789ffd increment chart version 2024-04-13 23:14:21 -06:00
f968776cd0 fix trello importer switch for async container 2024-04-13 23:13:44 -06:00
0b2beb08b7 fix indentation of events deployment 2024-04-13 23:11:44 -06:00
8fae31a679 properly enable/disable trello importer 2024-04-13 23:07:20 -06:00
f67ac05610 fix indentation 2024-04-13 23:05:03 -06:00
7803519d04 add major version value 2024-04-13 22:58:01 -06:00
55e63c2c72 fix minor formatting and remove uneeded values 2024-04-13 22:31:07 -06:00
6e083293bb fix minor formatting and remove uneeded values 2024-04-13 22:29:33 -06:00
60e427826c add taiga 2024-04-13 22:17:45 -06:00
f905b4ccfe change s3 env keys 2024-04-13 14:58:20 -06:00
487786455c change default credential secret name 2024-04-13 03:28:27 -06:00
585d39657a change how the cluster name is generated and used 2024-04-13 03:24:58 -06:00
e5e2812ed5 remove chart as functionality is now included in postgres-cluster chart 2024-04-13 02:46:38 -06:00
506218210e add replica import type 2024-04-13 02:41:46 -06:00
a7a08ef9f3 change chart to align with cnpg's chart 2024-04-13 02:06:02 -06:00
0fe94afd2a revert to redis image 2024-04-12 20:11:10 -06:00
renovate[bot]
73262aa60a Update homeassistant/home-assistant Docker tag to v2024.4.3 (#31)
* Update homeassistant/home-assistant Docker tag to v2024.4.3

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-12 17:42:37 -06:00
a322553210 update readme 2024-04-11 19:37:45 -06:00
09aae9e79d change default cluster size to 3 2024-04-11 19:35:05 -06:00
c72c25a74d upgrade default postgres version to 16.2 2024-04-11 19:33:20 -06:00
9c93b1dc4a bump chart version 2024-04-11 19:31:56 -06:00
cfd426f657 bump chart version 2024-04-11 19:31:46 -06:00
93f4991a05 remove default as storage class 2024-04-11 19:30:58 -06:00
ce0f3c7b07 remove default as storage class 2024-04-11 19:30:50 -06:00
58c5443de1 add postgres-cluster-upgrade chart 2024-04-11 19:05:18 -06:00
b3acbf3cbc update redis chart to 19 2024-04-11 18:04:23 -06:00
3270a3102b change redis image to use valkey 2024-04-11 18:03:44 -06:00
acc9710c72 update redis chart to 19 2024-04-11 18:03:23 -06:00
756ef9b0c6 upgrade elasticsearch chart version 2024-04-11 17:44:53 -06:00
renovate[bot]
8baec6fd41 Update bbilly1/tubearchivist Docker tag to v0.4.7 (#30)
* Update bbilly1/tubearchivist Docker tag to v0.4.7

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-11 17:38:23 -06:00
c1ab4afc46 add s3 accelerate env 2024-04-09 22:35:16 -06:00
bdcd63284a add url protocol to s3 endpoint 2024-04-09 20:16:43 -06:00
renovate[bot]
e8a951405d Update linuxserver/code-server Docker tag to v4.23.0 (#29)
* Update linuxserver/code-server Docker tag to v4.23.0

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-08 18:19:18 -06:00
renovate[bot]
93caa67bad Update ghcr.io/gethomepage/homepage Docker tag to v0.8.11 (#28)
* Update ghcr.io/gethomepage/homepage Docker tag to v0.8.11

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-08 11:30:31 -06:00
renovate[bot]
0dfaebdb7f Update homeassistant/home-assistant Docker tag to v2024.4.2 (#27)
* Update homeassistant/home-assistant Docker tag to v2024.4.2

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-08 11:28:09 -06:00
renovate[bot]
2f721343aa Update homeassistant/home-assistant Docker tag to v2024.4.1 (#26)
* Update homeassistant/home-assistant Docker tag to v2024.4.1

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-06 15:41:04 -06:00
270b62be53 add tubearchivist to jellyfin 2024-04-04 23:22:14 -06:00
0984e40cc8 remove workflow path 2024-04-04 14:53:35 -06:00
4e26a7c727 remove commands from bridges 2024-04-04 14:47:35 -06:00
17d146a444 add archive folder 2024-04-04 13:47:48 -06:00
323955129b add mautrix-whatsapp 2024-04-04 13:43:13 -06:00
d4eaeb7c21 add mautrix-discord 2024-04-04 13:36:39 -06:00
725e83af07 add port and host to database env 2024-04-04 11:48:22 -06:00
renovate[bot]
d58fbbd819 Update homeassistant/home-assistant Docker tag to v2024.4.0 (#25)
* Update homeassistant/home-assistant Docker tag to v2024.4.0

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-04-03 20:15:46 -06:00
bab4c95580 update image version to latest 2024-04-01 09:28:19 -06:00
536b133850 add widgets 2024-03-30 16:47:42 -06:00
ead44d21f7 fix service account name 2024-03-30 14:48:51 -06:00
ff7fb92c19 create tempalte name for passFile 2024-03-30 14:43:13 -06:00
46effc5599 fix secret field schema 2024-03-30 14:32:22 -06:00
0f7a0d658f fix naming 2024-03-30 14:27:29 -06:00
08b0782645 add service monitor 2024-03-30 14:22:55 -06:00
9f7f83a40a change webhook path 2024-03-30 14:17:21 -06:00
b3f9c93fcb change chart name 2024-03-30 13:30:46 -06:00
b6bcae462f add matrix hookshot 2024-03-30 13:30:11 -06:00
renovate[bot]
70cbd7b60d Update Helm release tdarr-exporter to v1.1.1 (#20)
* Update Helm release tdarr-exporter to v1.1.1

* bump chart version

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-25 10:16:22 -06:00
renovate[bot]
ba065b36b2 Update ghcr.io/qdm12/gluetun Docker tag to v3.38.0 (#22)
* Update ghcr.io/qdm12/gluetun Docker tag to v3.38.0

* bump chart version

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-25 10:15:48 -06:00
renovate[bot]
cfc4d78b9f Update Helm release redis to v18.19.4 (#16)
* Update Helm release redis to v18.19.4

* bump chart version

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-25 10:12:55 -06:00
renovate[bot]
34e96804f4 Update ghcr.io/gethomepage/homepage Docker tag to v0.8.10 (#21)
* Update ghcr.io/gethomepage/homepage Docker tag to v0.8.10

* update chart

* remove whitespace

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-25 10:12:28 -06:00
renovate[bot]
3a8354635b Update Helm release tdarr-exporter to v1.1.0 (#17)
* Update Helm release tdarr-exporter to v1.1.0

* increment chart version

* remove trailing whitespace

* remove trailing whitespace

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-22 15:50:34 -06:00
renovate[bot]
fcba2d6011 Update homeassistant/home-assistant Docker tag to v2024.3.3 (#19)
* Update homeassistant/home-assistant Docker tag to v2024.3.3

* update chart version

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-22 15:42:11 -06:00
renovate[bot]
8db4555032 Update linuxserver/code-server Docker tag to v4.22.1 (#15)
* Update linuxserver/code-server Docker tag to v4.22.1

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-18 01:30:05 -06:00
renovate[bot]
f22b33deba Update homeassistant/home-assistant Docker tag to v2024.3.1 (#14)
* Update homeassistant/home-assistant Docker tag to v2024.3.1

* update chart

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-18 01:28:21 -06:00
f73b754d9c fix values 2024-03-17 01:32:21 -06:00
f2e3dba5e2 remove probes 2024-03-16 09:56:06 -06:00
e89bd04a8d fix env merging 2024-03-16 09:52:14 -06:00
6f2550cf79 change env type 2024-03-16 09:41:49 -06:00
0c94180823 hardcode the exporter port 2024-03-16 09:36:56 -06:00
f59d77f8bc fix metrics switch 2024-03-16 09:34:09 -06:00
57983912f5 remove node api and probes 2024-03-16 09:29:07 -06:00
8a6cfef4c5 add server IP env 2024-03-16 09:21:32 -06:00
7c9a06dcee add existing secret name for gluetun 2024-03-16 09:12:46 -06:00
580f9efa06 add qbittorrent 2024-03-16 09:04:25 -06:00
c0b41a6d6c add tdarr 2024-03-16 08:16:38 -06:00
4efdc15832 add unpackerr 2024-03-16 07:25:27 -06:00
2dc9f33109 update chart details 2024-03-16 07:15:39 -06:00
d0255ca5d1 remove specific values 2024-03-16 05:44:05 -06:00
790ad5b440 update renovate config 2024-03-16 05:43:49 -06:00
9539635918 change repo 2024-03-16 04:38:43 -06:00
7c61825d5f enable enviroment variables 2024-03-16 04:11:46 -06:00
c2446ab6e2 add freshrss 2024-03-16 04:11:17 -06:00
120fbe05e6 fix mount location 2024-03-16 01:42:55 -06:00
e686771ce3 fix names 2024-03-16 01:34:46 -06:00
a5bd0b724a enable proxy auth for ha 2024-03-15 23:04:47 -06:00
35c7223d40 change tls secret name 2024-03-15 22:37:38 -06:00
32bda525a1 change proxy auth to code server 2024-03-15 22:36:45 -06:00
42231a40f4 add tubearchivist-to-jellyfin 2024-03-15 19:12:12 -06:00
76c6016a9e change redis image 2024-03-15 02:23:23 -06:00
d8e6ac1d7b fix env value 2024-03-15 02:19:03 -06:00
03d0cab454 update elasticsearch chart version 2024-03-15 02:16:26 -06:00
b149fbd85e add tubearchivist 2024-03-15 02:00:46 -06:00
97528e845d fix redis chart version 2024-03-15 01:57:19 -06:00
f04f777ec2 add cops 2024-03-14 23:58:52 -06:00
renovate[bot]
688d6498b0 Update homeassistant/home-assistant Docker tag to v2024.3.0 (#11)
* Update homeassistant/home-assistant Docker tag to v2024.3.0

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Lebens <alexanderlebens@gmail.com>
2024-03-14 17:11:41 -06:00
7d8c554354 change ingress and port name 2024-03-14 08:59:18 -06:00
b31dab5a46 inject db secrets into deployment 2024-03-14 08:52:39 -06:00
c485eb9682 change depcreciated env value 2024-03-14 08:51:37 -06:00
627f2ca6b6 enable configmap for s3 2024-03-14 08:46:52 -06:00
2b9ea0bcdb fix bootstrap switch 2024-03-14 06:32:41 -06:00
ba0c6fe7d2 fix values paths 2024-03-14 01:30:16 -06:00
6c11bf51b2 change cluster role names 2024-03-14 00:18:14 -06:00
8ffe5fd408 update renovate config 2024-03-13 06:05:23 -06:00
773ff53432 update postgresql cluster 2024-03-13 06:00:36 -06:00
81ab282822 update outline 2024-03-13 06:00:19 -06:00
8fb910383d update kubelet-serving-cert-approver 2024-03-13 05:54:53 -06:00
1fe5c07c36 update homepage 2024-03-13 05:49:35 -06:00
eadbf37ce5 update home-assistant 2024-03-13 05:46:15 -06:00
f7f210a905 update calibre-server 2024-03-13 05:40:30 -06:00
a6d3eaf404 add outline 2024-03-13 04:43:14 -06:00
72f5ebc567 increase chart version 2024-03-13 04:38:00 -06:00
e52c5dc8c8 add readme 2024-03-13 01:14:31 -06:00
f08ae85e5c fix indentation 2024-03-13 01:11:57 -06:00
4988c82be2 fix chart data 2024-03-13 01:08:29 -06:00
f4c15191dc add libation 2024-03-13 01:07:01 -06:00
678ce1aec5 split recovery and backup values 2024-03-12 23:23:34 -06:00
99e958bd6f change default tag 2024-03-11 22:52:50 -06:00
879ca58606 change env value 2024-03-11 22:49:55 -06:00
f9df889a0a add kubelet-serving-cert-approver chart 2024-03-11 22:36:23 -06:00
28c909317d Merge pull request #9 from alexlebens/renovate/linuxserver-code-server-4.x
Update linuxserver/code-server Docker tag to v4.22.0
2024-03-07 11:25:02 -07:00
97e58e4113 bump chart version 2024-03-07 18:24:28 +00:00
renovate[bot]
796b9e6865 Update linuxserver/code-server Docker tag to v4.22.0 2024-03-07 04:43:50 +00:00
541cc18889 add code server 2024-03-06 21:43:16 -07:00
64986858b1 remove label 2024-03-06 15:24:55 -07:00
7dfb883a8f Merge pull request #8 from alexlebens/renovate/azure-setup-helm-4.x
Update azure/setup-helm action to v4
2024-02-28 20:14:14 -03:00
renovate[bot]
9abc2a1f98 Update azure/setup-helm action to v4 2024-02-28 22:31:54 +00:00
8b615f4780 increase chart version 2024-02-28 12:41:02 -03:00
1f3a4d3042 rename values 2024-02-28 12:38:17 -03:00
7c4601835c Merge pull request #7 from alexlebens/renovate/homeassistant-home-assistant-2024.x
Update homeassistant/home-assistant Docker tag to v2024.2.5
2024-02-28 12:35:40 -03:00
renovate[bot]
401871daa1 Update homeassistant/home-assistant Docker tag to v2024.2.5 2024-02-28 15:33:26 +00:00
b53ba2b073 remove kind testing 2024-02-28 12:24:22 -03:00
3191e4ed53 revert to prior change 2024-02-28 12:14:34 -03:00
72ea1faa67 test method to update chart.yaml 2024-02-28 12:11:08 -03:00
751a1d4143 move bumpVersion to rule 2024-02-28 11:57:39 -03:00
81bd94a1db split imageName into imageRepo and imageTag 2024-02-28 11:42:33 -03:00
e49b1482a1 update renovate configuration 2024-02-28 11:42:12 -03:00
ba4273041d match renovate config with net-infra 2024-02-27 17:33:05 -03:00
d45a5f6084 remove core label 2024-02-27 17:17:09 -03:00
e3627d3531 change renovate config 2024-02-27 16:55:42 -03:00
f12bb5a879 increase app version 2024-02-26 21:46:12 -03:00
f4c2938d95 move renovate file location 2024-02-26 16:54:38 -03:00
7a8c6e7b3c bump chart version 2024-02-23 21:12:32 -03:00
c0ca3a909c increase app version 2024-02-23 21:05:57 -03:00
792e4c018c increase app version 2024-02-23 17:13:44 -03:00
e51e4e34dc add config and books volumes 2024-02-22 23:02:38 -03:00
e429bc51f7 remove provisioned config 2024-02-22 22:01:57 -03:00
6adb00b442 add default value for claim name 2024-02-22 17:57:12 -03:00
9a5bc849bc fix recovery naming 2024-02-22 17:49:00 -03:00
9ef96af4a5 add calibre server 2024-02-22 16:47:51 -03:00
66a5099f75 update home assistant version 2024-02-21 09:47:51 -03:00
f2e1dabf24 Merge pull request #6 from alexlebens/renovate/helm-kind-action-1.x
Update helm/kind-action action to v1.9.0
2024-02-15 11:51:00 -07:00
39b46177ea Merge pull request #5 from alexlebens/renovate/actions-setup-python-5.x
Update actions/setup-python action to v5
2024-02-15 11:50:54 -07:00
c69d61a07d Merge pull request #4 from alexlebens/renovate/actions-checkout-4.x
Update actions/checkout action to v4
2024-02-15 11:50:48 -07:00
1236a200cd update app version 2024-02-15 11:47:38 -07:00
renovate[bot]
24845fb336 Update helm/kind-action action to v1.9.0 2024-02-15 18:47:16 +00:00
renovate[bot]
a398abdf63 Update actions/setup-python action to v5 2024-02-15 18:46:51 +00:00
renovate[bot]
5bbd6db883 Update actions/checkout action to v4 2024-02-15 18:46:47 +00:00
9e2d2a7503 Merge pull request #2 from alexlebens/renovate/helm-chart-testing-action-2.x
Update helm/chart-testing-action action to v2.6.1
2024-02-15 11:46:28 -07:00
renovate[bot]
ea662406ed Update helm/chart-testing-action action to v2.6.1 2024-02-15 18:44:47 +00:00
06661efd7e update renovate config 2024-02-15 11:44:21 -07:00
ec95fd84f9 remove schedule 2024-02-15 11:30:46 -07:00
86d7e9f156 update renovate config 2024-02-15 11:23:07 -07:00
5a3cb20dcb fix typo in prometheus rule 2024-02-13 07:34:20 -07:00
1cb675e7c3 fix service name 2024-02-13 07:06:52 -07:00
438ceef98b enable switch code server in deployment 2024-02-13 06:57:51 -07:00
0be01806dd fix middleware 2024-02-13 06:41:13 -07:00
639f7a4031 change to use ingress routes 2024-02-13 06:39:52 -07:00
ba3e6551e2 fix ingress class name 2024-02-13 06:15:56 -07:00
d12db5479a fix typo 2024-02-13 06:06:50 -07:00
e44c961258 add home-assistant 2024-02-13 06:01:07 -07:00
0999f6272f fix image repo 2024-02-12 21:05:07 -07:00
7bfb8f5920 bump default resources 2024-02-12 20:55:47 -07:00
eb79c0ba68 add image name value 2024-02-12 20:06:10 -07:00
466b67581f raise default memory request 2024-02-12 19:33:50 -07:00
031b1dec3a fix linting 2024-02-10 11:34:47 -07:00
41282e79e8 add postgres-cluster 2024-02-10 11:12:38 -07:00
ffcaf51b66 add envFrom 2024-02-10 04:08:58 -07:00
30d69f695c change readme 2024-02-10 03:32:06 -07:00
c5feb14abc change names 2024-02-10 03:17:12 -07:00
5665d7a99f add lint 2024-02-10 03:13:00 -07:00
5158f9f66c update release workflow 2024-02-10 03:11:14 -07:00
e9bed237bf add example workflow 2024-02-10 03:00:58 -07:00
69 changed files with 3017 additions and 385 deletions


@@ -0,0 +1,39 @@
name: lint-and-test
on:
  pull_request:
jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          version: latest
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          check-latest: true
      - name: Set up Chart Testing
        uses: helm/chart-testing-action@v2.7.0
      - name: Run Chart Testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed --target-branch ${{ gitea.event.repository.default_branch }})
          if [[ -n "$changed" ]]; then
            echo "changed=true" >> $GITHUB_OUTPUT
          fi
      - name: Run Chart Testing (lint)
        if: steps.list-changed.outputs.changed == 'true'
        run: ct lint --validate-maintainers=false --target-branch ${{ gitea.event.repository.default_branch }}
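
For reference, the lint job above can be approximated locally before opening a pull request — a minimal sketch, assuming the chart-testing CLI (ct) and Helm are installed and main is the target branch:

# List charts that changed relative to main (the same check the list-changed step performs)
changed=$(ct list-changed --target-branch main)

# Lint only when something changed, mirroring the workflow's conditional lint step
if [[ -n "$changed" ]]; then
  ct lint --validate-maintainers=false --target-branch main
fi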


@@ -0,0 +1,40 @@
name: process-repository
on:
  schedule:
    - cron: '@daily'
  workflow_dispatch:
jobs:
  process-repository:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Python Script
        uses: actions/checkout@v4
        with:
          repository: alexlebens/workflow-scripts
          ref: main
          token: ${{ secrets.BOT_TOKEN }}
          path: workflow-scripts
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - name: Install dependencies
        run: pip install requests immutabledict
      - name: Run Script
        env:
          INSTANCE_URL: ${{ vars.INSTANCE_URL }}
          REPOSITORY: ${{ gitea.repository }}
          TOKEN: ${{ secrets.BOT_TOKEN }}
          LOG_LEVEL: DEBUG
          ISSUE_STALE_DAYS: 3
          ISSUE_STALE_TAG: stale
          ISSUE_EXCLUDE_TAG: Renovate
          PULL_REQUEST_STALE_DAYS: 3
          PULL_REQUEST_STALE_TAG: stale
        run: python ./workflow-scripts/process-repository.py
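
The process-repository.py script checked out above lives in alexlebens/workflow-scripts and is not part of this diff, so its behavior is not shown here. As a rough, hypothetical illustration of the kind of stale-issue pass such a script could make against the Gitea API, using the same environment interface (INSTANCE_URL, REPOSITORY, TOKEN, ISSUE_STALE_DAYS) and assuming GNU date and jq are available:

# Cutoff timestamp: issues with no update in the last ISSUE_STALE_DAYS days
cutoff=$(date -u -d "${ISSUE_STALE_DAYS} days ago" +%Y-%m-%dT%H:%M:%SZ)

# List open issues not updated since the cutoff
curl -s -H "Authorization: token ${TOKEN}" \
  "${INSTANCE_URL}/api/v1/repos/${REPOSITORY}/issues?state=open&type=issues&before=${cutoff}" \
  | jq -r '.[].number'

# Each returned issue number could then be tagged via the labels endpoint, e.g.
# POST ${INSTANCE_URL}/api/v1/repos/${REPOSITORY}/issues/{index}/labels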


@@ -0,0 +1,85 @@
name: release-charts-cloudflared
on:
  push:
    branches:
      - main
    paths:
      - "charts/cloudflared/**"
  workflow_dispatch:
env:
  WORKFLOW_DIR: "charts/cloudflared"
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          token: ${{ secrets.GITEA_TOKEN }}
          version: latest
      - name: Package Helm Chart
        run: |
          cd $WORKFLOW_DIR
          helm dependency build
          echo "PACKAGE_PATH=$(helm package . | awk '{print $NF}')" >> $GITEA_ENV
      - name: Publish Helm Chart to Harbor
        run: |
          helm registry login ${{ vars.REGISTRY_HOST }} -u ${{ vars.REGISTRY_USER }} -p ${{ secrets.REGISTRY_SECRET }}
          helm push ${{ env.PACKAGE_PATH }} oci://${{ vars.REGISTRY_HOST }}/helm-charts
      - name: Publish Helm Chart to Gitea
        run: |
          helm plugin install https://github.com/chartmuseum/helm-push
          helm repo add --username ${{ gitea.actor }} --password ${{ secrets.REPOSITORY_TOKEN }} helm-charts https://${{ vars.REPOSITORY_HOST }}/api/packages/alexlebens/helm
          helm cm-push ${{ env.PACKAGE_PATH }} helm-charts
      - name: Extract Chart Metadata
        run: |
          cd $WORKFLOW_DIR
          echo "CHART_VERSION=$(yq '.version' Chart.yaml)" >> $GITEA_ENV
          echo "CHART_NAME=$(yq '.name' Chart.yaml)" >> $GITEA_ENV
      - name: Release Helm Chart
        uses: akkuman/gitea-release-action@v1
        with:
          name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
          tag_name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
          files: |-
            ${{ env.PACKAGE_PATH }}
      - name: ntfy Success
        uses: niniyas/ntfy-action@master
        if: success()
        with:
          url: '${{ secrets.NTFY_URL }}'
          topic: '${{ secrets.NTFY_TOPIC }}'
          title: 'Gitea Action'
          priority: 3
          headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
          tags: action,successfully,completed
          details: 'Helm Chart for cloudflared release workflow has successfully completed!'
          icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
      - name: ntfy Failed
        uses: niniyas/ntfy-action@master
        if: failure()
        with:
          url: '${{ secrets.NTFY_URL }}'
          topic: '${{ secrets.NTFY_TOPIC }}'
          title: 'Gitea Action'
          priority: 4
          headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
          tags: action,failed
          details: 'Helm Chart for cloudflared release workflow has failed!'
          icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
          actions: '[{"action": "view", "label": "Open Gitea", "url": "https://gitea.alexlebens.dev/alexlebens/site-profile/actions?workflow=release-image.yml", "clear": true}]'
          image: true
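
The packaging and publish steps above translate directly into manual Helm commands — a minimal sketch, with registry.example.com standing in as a placeholder for the REGISTRY_HOST/REPOSITORY_HOST variables used by the workflow:

cd charts/cloudflared
helm dependency build

# helm package prints the path of the generated .tgz; capture it as the workflow does
PACKAGE_PATH=$(helm package . | awk '{print $NF}')

# Push to an OCI registry (Harbor in the workflow)
helm registry login registry.example.com -u <user> -p <password>
helm push "$PACKAGE_PATH" oci://registry.example.com/helm-charts

# Or push to Gitea's Helm package registry via the chartmuseum push plugin
helm plugin install https://github.com/chartmuseum/helm-push
helm repo add helm-charts https://registry.example.com/api/packages/alexlebens/helm
helm cm-push "$PACKAGE_PATH" helm-charts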


@@ -0,0 +1,85 @@
name: release-charts-generic-device-plugin
on:
  push:
    branches:
      - main
    paths:
      - "charts/generic-device-plugin/**"
  workflow_dispatch:
env:
  WORKFLOW_DIR: "charts/generic-device-plugin"
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          token: ${{ secrets.GITEA_TOKEN }}
          version: latest
      - name: Package Helm Chart
        run: |
          cd $WORKFLOW_DIR
          helm dependency build
          echo "PACKAGE_PATH=$(helm package . | awk '{print $NF}')" >> $GITEA_ENV
      - name: Publish Helm Chart to Harbor
        run: |
          helm registry login ${{ vars.REGISTRY_HOST }} -u ${{ vars.REGISTRY_USER }} -p ${{ secrets.REGISTRY_SECRET }}
          helm push ${{ env.PACKAGE_PATH }} oci://${{ vars.REGISTRY_HOST }}/helm-charts
      - name: Publish Helm Chart to Gitea
        run: |
          helm plugin install https://github.com/chartmuseum/helm-push
          helm repo add --username ${{ gitea.actor }} --password ${{ secrets.REPOSITORY_TOKEN }} helm-charts https://${{ vars.REPOSITORY_HOST }}/api/packages/alexlebens/helm
          helm cm-push ${{ env.PACKAGE_PATH }} helm-charts
      - name: Extract Chart Metadata
        run: |
          cd $WORKFLOW_DIR
          echo "CHART_VERSION=$(yq '.version' Chart.yaml)" >> $GITEA_ENV
          echo "CHART_NAME=$(yq '.name' Chart.yaml)" >> $GITEA_ENV
      - name: Release Helm Chart
        uses: akkuman/gitea-release-action@v1
        with:
          name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
          tag_name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
          files: |-
            ${{ env.PACKAGE_PATH }}
      - name: ntfy Success
        uses: niniyas/ntfy-action@master
        if: success()
        with:
          url: '${{ secrets.NTFY_URL }}'
          topic: '${{ secrets.NTFY_TOPIC }}'
          title: 'Gitea Action'
          priority: 3
          headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
          tags: action,successfully,completed
          details: 'Helm Chart for generic-device-plugin release workflow has successfully completed!'
          icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
      - name: ntfy Failed
        uses: niniyas/ntfy-action@master
        if: failure()
        with:
          url: '${{ secrets.NTFY_URL }}'
          topic: '${{ secrets.NTFY_TOPIC }}'
          title: 'Gitea Action'
          priority: 4
          headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
          tags: action,failed
          details: 'Helm Chart for generic-device-plugin release workflow has failed!'
          icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
          actions: '[{"action": "view", "label": "Open Gitea", "url": "https://gitea.alexlebens.dev/alexlebens/site-profile/actions?workflow=release-image.yml", "clear": true}]'
          image: true

View File

@@ -0,0 +1,85 @@
name: release-charts-gitea-actions
on:
push:
branches:
- main
paths:
- "charts/gitea-actions/**"
workflow_dispatch:
env:
WORKFLOW_DIR: "charts/gitea-actions"
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Helm
uses: azure/setup-helm@v4
with:
token: ${{ secrets.GITEA_TOKEN }}
version: latest
- name: Package Helm Chart
run: |
cd $WORKFLOW_DIR
helm dependency build
echo "PACKAGE_PATH=$(helm package . | awk '{print $NF}')" >> $GITEA_ENV
- name: Publish Helm Chart to Harbor
run: |
helm registry login ${{ vars.REGISTRY_HOST }} -u ${{ vars.REGISTRY_USER }} -p ${{ secrets.REGISTRY_SECRET }}
helm push ${{ env.PACKAGE_PATH }} oci://${{ vars.REGISTRY_HOST }}/helm-charts
- name: Publish Helm Chart to Gitea
run: |
helm plugin install https://github.com/chartmuseum/helm-push
helm repo add --username ${{ gitea.actor }} --password ${{ secrets.REPOSITORY_TOKEN }} helm-charts https://${{ vars.REPOSITORY_HOST }}/api/packages/alexlebens/helm
helm cm-push ${{ env.PACKAGE_PATH }} helm-charts
- name: Extract Chart Metadata
run: |
cd $WORKFLOW_DIR
echo "CHART_VERSION=$(yq '.version' Chart.yaml)" >> $GITEA_ENV
echo "CHART_NAME=$(yq '.name' Chart.yaml)" >> $GITEA_ENV
- name: Release Helm Chart
uses: akkuman/gitea-release-action@v1
with:
name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
tag_name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
files: |-
${{ env.PACKAGE_PATH }}
- name: ntfy Success
uses: niniyas/ntfy-action@master
if: success()
with:
url: '${{ secrets.NTFY_URL }}'
topic: '${{ secrets.NTFY_TOPIC }}'
title: 'Gitea Action'
priority: 3
headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
tags: action,successfully,completed
details: 'Helm Chart for gitea-actions release workflow has successfully completed!'
icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
- name: ntfy Failed
uses: niniyas/ntfy-action@master
if: failure()
with:
url: '${{ secrets.NTFY_URL }}'
topic: '${{ secrets.NTFY_TOPIC }}'
title: 'Gitea Action'
priority: 4
headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
tags: action,failed
details: 'Helm Chart for gitea-actions release workflow has failed!'
icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
actions: '[{"action": "view", "label": "Open Gitea", "url": "https://gitea.alexlebens.dev/alexlebens/site-profile/actions?workflow=release-image.yml", "clear": true}]'
image: true

View File

@@ -0,0 +1,85 @@
name: release-charts-postgres-cluster
on:
push:
branches:
- main
paths:
- "charts/postgres-cluster/**"
workflow_dispatch:
env:
WORKFLOW_DIR: "charts/postgres-cluster"
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Helm
uses: azure/setup-helm@v4
with:
token: ${{ secrets.GITEA_TOKEN }}
version: latest
- name: Package Helm Chart
run: |
cd $WORKFLOW_DIR
helm dependency build
echo "PACKAGE_PATH=$(helm package . | awk '{print $NF}')" >> $GITEA_ENV
- name: Publish Helm Chart to Harbor
run: |
helm registry login ${{ vars.REGISTRY_HOST }} -u ${{ vars.REGISTRY_USER }} -p ${{ secrets.REGISTRY_SECRET }}
helm push ${{ env.PACKAGE_PATH }} oci://${{ vars.REGISTRY_HOST }}/helm-charts
- name: Publish Helm Chart to Gitea
run: |
helm plugin install https://github.com/chartmuseum/helm-push
helm repo add --username ${{ gitea.actor }} --password ${{ secrets.REPOSITORY_TOKEN }} helm-charts https://${{ vars.REPOSITORY_HOST }}/api/packages/alexlebens/helm
helm cm-push ${{ env.PACKAGE_PATH }} helm-charts
- name: Extract Chart Metadata
run: |
cd $WORKFLOW_DIR
echo "CHART_VERSION=$(yq '.version' Chart.yaml)" >> $GITEA_ENV
echo "CHART_NAME=$(yq '.name' Chart.yaml)" >> $GITEA_ENV
- name: Release Helm Chart
uses: akkuman/gitea-release-action@v1
with:
name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
tag_name: ${{ env.CHART_NAME }}-${{ env.CHART_VERSION }}
files: |-
${{ env.PACKAGE_PATH }}
- name: ntfy Success
uses: niniyas/ntfy-action@master
if: success()
with:
url: '${{ secrets.NTFY_URL }}'
topic: '${{ secrets.NTFY_TOPIC }}'
title: 'Gitea Action'
priority: 3
headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
tags: action,successfully,completed
details: 'Helm Chart for postgres-cluster release workflow has successfully completed!'
icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
- name: ntfy Failed
uses: niniyas/ntfy-action@master
if: failure()
with:
url: '${{ secrets.NTFY_URL }}'
topic: '${{ secrets.NTFY_TOPIC }}'
title: 'Gitea Action'
priority: 4
headers: '{"Authorization": "Bearer ${{ secrets.NTFY_CRED }}"}'
tags: action,failed
details: 'Helm Chart for postgres-cluster release workflow has failed!'
icon: 'https://cdn.jsdelivr.net/gh/selfhst/icons/png/gitea.png'
actions: '[{"action": "view", "label": "Open Gitea", "url": "https://gitea.alexlebens.dev/alexlebens/site-profile/actions?workflow=release-image.yml", "clear": true}]'
image: true

View File

@@ -0,0 +1,32 @@
name: renovate
on:
schedule:
- cron: "@daily"
push:
branches:
- main
workflow_dispatch:
jobs:
renovate:
runs-on: ubuntu-latest
container: ghcr.io/renovatebot/renovate:41
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Renovate
run: renovate
env:
RENOVATE_PLATFORM: gitea
RENOVATE_ENDPOINT: ${{ vars.INSTANCE_URL }}
RENOVATE_REPOSITORIES: alexlebens/helm-charts
RENOVATE_GIT_AUTHOR: Renovate Bot <renovate-bot@alexlebens.net>
LOG_LEVEL: info
RENOVATE_TOKEN: ${{ secrets.RENOVATE_TOKEN }}
RENOVATE_GIT_PRIVATE_KEY: ${{ secrets.RENOVATE_GIT_PRIVATE_KEY }}
RENOVATE_GITHUB_COM_TOKEN: ${{ secrets.RENOVATE_GITHUB_COM_TOKEN }}
RENOVATE_REDIS_URL: ${{ vars.RENOVATE_REDIS_URL }}
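
The same Renovate run can be reproduced outside the workflow by passing the equivalent environment to the container image; a rough sketch with placeholder credentials:

```sh
# Hypothetical self-hosted run of the same Renovate configuration.
docker run --rm \
  -e RENOVATE_PLATFORM=gitea \
  -e RENOVATE_ENDPOINT="https://gitea.example.com" \
  -e RENOVATE_REPOSITORIES="alexlebens/helm-charts" \
  -e RENOVATE_GIT_AUTHOR="Renovate Bot <renovate-bot@alexlebens.net>" \
  -e RENOVATE_TOKEN="<gitea-token>" \
  -e LOG_LEVEL=info \
  ghcr.io/renovatebot/renovate:41
```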

29
.github/workflows/release-charts.yml vendored Normal file
View File

@@ -0,0 +1,29 @@
name: release-charts
on:
push:
branches:
- main
paths:
- "charts/**"
jobs:
release:
permissions:
contents: write
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.7.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

5
.gitignore vendored
View File

@@ -1,3 +1,6 @@
# Archived
charts/**/archive
# Compiled Helm chart dependencies
charts/**/Chart.lock
charts/**/charts/
@@ -6,4 +9,4 @@ charts/**/charts/
__snapshot__/
# Docs
_site/

19
.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,19 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-added-large-files
- id: check-yaml
exclude: 'charts/'
args:
- --multi
- repo: https://github.com/norwoodj/helm-docs
rev: v1.14.2
hooks:
- id: helm-docs
args:
- --chart-search-root=charts
- --template-files=./_templates.gotmpl
- --template-files=README.md.gotmpl
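
For contributors, these hooks run locally with the standard pre-commit CLI; a minimal sketch assuming Python and pip are available:

```sh
pip install pre-commit
pre-commit install              # run the hooks on every git commit
pre-commit run --all-files      # or run them once across the whole repository
```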

View File

@@ -1,9 +1,21 @@
# Welcome
Welcome to the documentation for the [alexlebens's Helm charts](https://github.com/alexlebens/helm-charts).
## Getting started
[Helm](https://helm.sh) must be installed to use the charts in this repository.
Refer to Helm's [documentation](https://helm.sh/docs/) to get started.
## Installation
```sh
helm repo add alexlebens http://alexlebens.github.io/helm-charts/
```
You can then run `helm search repo alexlebens` to search the charts.
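
For example, to find and install one of the published charts (cloudflared is shown here; chart-specific values, such as its tunnel-token secret, still need to be supplied):

```sh
helm search repo alexlebens
helm install cloudflared alexlebens/cloudflared \
  --namespace cloudflared --create-namespace
```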
## License
This project is licensed under the terms of the Apache 2.0 License.

View File

@@ -0,0 +1,18 @@
apiVersion: v2
name: cloudflared
version: 1.18.0
description: Cloudflared Tunnel
keywords:
- cloudflare
- tunnel
sources:
- https://github.com/cloudflare/cloudflared
- https://github.com/bjw-s-labs/helm-charts/tree/main/charts/library/common
maintainers:
- name: alexlebens
dependencies:
- name: common
repository: https://bjw-s-labs.github.io/helm-charts/
version: 4.1.2
icon: https://avatars.githubusercontent.com/u/314135?s=48&v=4
appVersion: "2025.6.0"

View File

@@ -0,0 +1,35 @@
# cloudflared
![Version: 1.17.3](https://img.shields.io/badge/Version-1.17.3-informational?style=flat-square) ![AppVersion: 2025.6.0](https://img.shields.io/badge/AppVersion-2025.6.0-informational?style=flat-square)
Cloudflared Tunnel
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| alexlebens | | |
## Source Code
* <https://github.com/cloudflare/cloudflared>
* <https://github.com/bjw-s-labs/helm-charts/tree/main/charts/library/common>
## Requirements
| Repository | Name | Version |
|------------|------|---------|
| https://bjw-s-labs.github.io/helm-charts/ | common | 4.1.2 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| existingSecretKey | string | `"cf-tunnel-token"` | Name of key that contains the token in the existingSecret |
| existingSecretName | string | `"cloudflared-secret"` | Name of existing secret that contains Cloudflare token |
| image | object | `{"pullPolicy":"IfNotPresent","repository":"cloudflare/cloudflared","tag":"2025.6.1"}` | Default image |
| name | string | `"cloudflared"` | Name override of release |
| resources | object | `{"requests":{"cpu":"10m","memory":"128Mi"}}` | Default resources |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)

View File

@@ -0,0 +1,41 @@
{{- include "bjw-s.common.loader.init" . }}
{{- define "cloudflared.hardcodedValues" -}}
{{ if not .Values.global.nameOverride }}
global:
nameOverride: {{ .Values.name }}
{{ end }}
controllers:
main:
type: deployment
strategy: Recreate
containers:
main:
image:
repository: {{ .Values.image.repository }}
tag: {{ .Values.image.tag }}
pullPolicy: {{ .Values.image.pullPolicy }}
args:
- tunnel
- --protocol
- http2
- --no-autoupdate
- run
- --token
- $(CF_MANAGED_TUNNEL_TOKEN)
env:
- name: CF_MANAGED_TUNNEL_TOKEN
valueFrom:
secretKeyRef:
name: {{ .Values.existingSecretName }}
key: {{ .Values.existingSecretKey }}
resources:
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 10 }}
{{ end }}
{{- end -}}
{{- $_ := mergeOverwrite .Values (include "cloudflared.hardcodedValues" . | fromYaml) -}}
{{/* Render the templates */}}
{{ include "bjw-s.common.loader.generate" . }}

View File

@@ -0,0 +1,20 @@
# -- Name override of release
name: cloudflared
# -- Name of existing secret that contains Cloudflare token
existingSecretName: cloudflared-secret
# -- Name of key that contains the token in the existingSecret
existingSecretKey: cf-tunnel-token
# -- Default image
image:
repository: cloudflare/cloudflared
tag: "2025.7.0"
pullPolicy: IfNotPresent
# -- Default resources
resources:
requests:
cpu: 10m
memory: 128Mi
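
Taken together, these defaults expect a pre-existing Secret holding the Cloudflare tunnel token; a minimal install sketch using the default secret name and key from above (the token value is a placeholder):

```sh
kubectl create namespace cloudflared
kubectl -n cloudflared create secret generic cloudflared-secret \
  --from-literal=cf-tunnel-token="<tunnel-token>"
helm install cloudflared alexlebens/cloudflared -n cloudflared
```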

View File

@@ -0,0 +1,18 @@
apiVersion: v2
name: generic-device-plugin
version: 0.4.0
description: Generic Device Plugin
keywords:
- generic-device-plugin
- device
- plugin
sources:
- https://github.com/squat/generic-device-plugin
- https://github.com/bjw-s/helm-charts/tree/main/charts/library/common
maintainers:
- name: alexlebens
dependencies:
- name: common
repository: https://bjw-s-labs.github.io/helm-charts/
version: 4.1.2
appVersion: 0.2.0

View File

@@ -0,0 +1,37 @@
# generic-device-plugin
![Version: 0.3.2](https://img.shields.io/badge/Version-0.3.2-informational?style=flat-square) ![AppVersion: 0.2.0](https://img.shields.io/badge/AppVersion-0.2.0-informational?style=flat-square)
Generic Device Plugin
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| alexlebens | | |
## Source Code
* <https://github.com/squat/generic-device-plugin>
* <https://github.com/bjw-s/helm-charts/tree/main/charts/library/common>
## Requirements
| Repository | Name | Version |
|------------|------|---------|
| https://bjw-s-labs.github.io/helm-charts/ | common | 4.1.2 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| config | object | `{"data":"devices:\n - name: serial\n groups:\n - paths:\n - path: /dev/ttyUSB*\n - paths:\n - path: /dev/ttyACM*\n - paths:\n - path: /dev/tty.usb*\n - paths:\n - path: /dev/cu.*\n - paths:\n - path: /dev/cuaU*\n - paths:\n - path: /dev/rfcomm*\n - name: video\n groups:\n - paths:\n - path: /dev/video0\n - name: fuse\n groups:\n - count: 10\n paths:\n - path: /dev/fuse\n - name: audio\n groups:\n - count: 10\n paths:\n - path: /dev/snd\n - name: capture\n groups:\n - paths:\n - path: /dev/snd/controlC0\n - path: /dev/snd/pcmC0D0c\n - paths:\n - path: /dev/snd/controlC1\n mountPath: /dev/snd/controlC0\n - path: /dev/snd/pcmC1D0c\n mountPath: /dev/snd/pcmC0D0c\n - paths:\n - path: /dev/snd/controlC2\n mountPath: /dev/snd/controlC0\n - path: /dev/snd/pcmC2D0c\n mountPath: /dev/snd/pcmC0D0c\n - paths:\n - path: /dev/snd/controlC3\n mountPath: /dev/snd/controlC0\n - path: /dev/snd/pcmC3D0c\n mountPath: /dev/snd/pcmC0D0c\n","enabled":true}` | Config map |
| config.data | string | See [values.yaml](./values.yaml) | generic-device-plugin config file [[ref]](https://github.com/squat/generic-device-plugin#usage) |
| deviceDomain | string | `"squat.ai"` | Domain used by devices for identification |
| image | object | `{"pullPolicy":"Always","repository":"ghcr.io/squat/generic-device-plugin","tag":"latest@sha256:d7d0951df7f11479185fd9fba1c1cb4d9c8f3232d38a5468d6fe80074f2b45d5"}` | Default image |
| name | string | `"generic-device-plugin"` | Name override of release |
| resources | object | `{"limit":{"cpu":"100m","memory":"20Mi"},"requests":{"cpu":"50m","memory":"10Mi"}}` | Default resources |
| service | object | `{"listenPort":8080}` | Service port |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)

View File

@@ -0,0 +1,82 @@
{{ include "bjw-s.common.loader.init" . }}
{{ define "genericDevicePlugin.hardcodedValues" }}
{{ if not .Values.global.nameOverride }}
global:
nameOverride: {{ .Values.name }}
{{ end }}
controllers:
main:
type: daemonset
pod:
priorityClassName: system-node-critical
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
containers:
main:
image:
repository: {{ .Values.image.repository }}
tag: {{ .Values.image.tag }}
pullPolicy: {{ .Values.image.pullPolicy }}
args:
- --config=/config/config.yaml
env:
- name: LISTEN
value: :{{ .Values.service.listenPort }}
- name: PLUGIN_DIRECTORY
value: /var/lib/kubelet/device-plugins
- name: DOMAIN
value: {{ .Values.deviceDomain }}
probes:
liveness:
type: HTTP
path: /health
readiness:
type: HTTP
path: /health
startup:
type: HTTP
path: /health
securityContext:
privileged: True
configMaps:
config:
enabled: {{ .Values.config.enabled }}
data:
config.yaml: {{ toYaml .Values.config.data | nindent 8 }}
service:
main:
controller: main
ports:
http:
port: {{ .Values.service.listenPort }}
persistence:
config:
enabled: true
type: configMap
name: {{ .Values.name }}-config
device-plugins:
enabled: true
type: hostPath
hostPath: /var/lib/kubelet/device-plugins
dev:
enabled: true
type: hostPath
hostPath: /dev
serviceMonitor:
main:
serviceName: generic-device-plugin
endpoints:
- port: http
scheme: http
path: /metrics
interval: 30s
scrapeTimeout: 10s
{{ end }}
{{ $_ := mergeOverwrite .Values (include "genericDevicePlugin.hardcodedValues" . | fromYaml) }}
{{/* Render the templates */}}
{{ include "bjw-s.common.loader.generate" . }}

View File

@@ -0,0 +1,80 @@
# -- Name override of release
name: generic-device-plugin
# -- Default image
image:
repository: ghcr.io/squat/generic-device-plugin
tag: latest@sha256:1f779444c72c7bf06b082c44698d6268a8e642ebd9488a35c84a603087940e64
pullPolicy: Always
# -- Domain used by devices for identification
deviceDomain: squat.ai
# -- Service port
service:
listenPort: 8080
# -- Default resources
resources:
limit:
cpu: 100m
memory: 20Mi
requests:
cpu: 50m
memory: 10Mi
# -- Config map
config:
enabled: true
# -- generic-device-plugin config file [[ref]](https://github.com/squat/generic-device-plugin#usage)
# @default -- See [values.yaml](./values.yaml)
data: |
devices:
- name: serial
groups:
- paths:
- path: /dev/ttyUSB*
- paths:
- path: /dev/ttyACM*
- paths:
- path: /dev/tty.usb*
- paths:
- path: /dev/cu.*
- paths:
- path: /dev/cuaU*
- paths:
- path: /dev/rfcomm*
- name: video
groups:
- paths:
- path: /dev/video0
- name: fuse
groups:
- count: 10
paths:
- path: /dev/fuse
- name: audio
groups:
- count: 10
paths:
- path: /dev/snd
- name: capture
groups:
- paths:
- path: /dev/snd/controlC0
- path: /dev/snd/pcmC0D0c
- paths:
- path: /dev/snd/controlC1
mountPath: /dev/snd/controlC0
- path: /dev/snd/pcmC1D0c
mountPath: /dev/snd/pcmC0D0c
- paths:
- path: /dev/snd/controlC2
mountPath: /dev/snd/controlC0
- path: /dev/snd/pcmC2D0c
mountPath: /dev/snd/pcmC0D0c
- paths:
- path: /dev/snd/controlC3
mountPath: /dev/snd/controlC0
- path: /dev/snd/pcmC3D0c
mountPath: /dev/snd/pcmC0D0c
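
Once deployed, each device group above should be advertised on nodes as an extended resource named `<deviceDomain>/<group name>` (e.g. `squat.ai/serial`, per the generic-device-plugin documentation); a quick way to verify, with `<node-name>` as a placeholder:

```sh
# List allocatable resources on a node and filter for the plugin's device domain.
kubectl get node <node-name> -o jsonpath='{.status.allocatable}' | tr ',' '\n' | grep squat.ai
```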

View File

@@ -0,0 +1,15 @@
apiVersion: v2
name: gitea-actions
version: 0.2.1
description: Gitea Actions
keywords:
- cicd
- runner
- actions
sources:
- https://gitea.com/gitea/helm-actions
- https://gitea.com/gitea/act
maintainers:
- name: alexlebens
icon: https://avatars.githubusercontent.com/u/100373852?s=48&v=4
appVersion: 0.2.11

View File

@@ -0,0 +1,18 @@
MIT License
Copyright (c) 2025 gitea
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@@ -0,0 +1,54 @@
# gitea-actions
![Version: 0.2.1](https://img.shields.io/badge/Version-0.2.1-informational?style=flat-square) ![AppVersion: 0.2.11](https://img.shields.io/badge/AppVersion-0.2.11-informational?style=flat-square)
Gitea Actions
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| alexlebens | | |
## Source Code
* <https://gitea.com/gitea/helm-actions>
* <https://gitea.com/gitea/act>
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| enabled | bool | `true` | |
| existingSecret | string | `""` | |
| existingSecretKey | string | `""` | |
| giteaRootURL | string | `""` | |
| global.fullnameOverride | string | `""` | |
| global.imageRegistry | string | `""` | |
| global.nameOverride | string | `""` | |
| global.storageClass | string | `""` | |
| init.image.repository | string | `"busybox"` | |
| init.image.tag | string | `"1.37.0"` | |
| statefulset.actRunner.config | string | `"log:\n level: debug\ncache:\n enabled: false\n"` | |
| statefulset.actRunner.extraVolumeMounts | list | `[]` | |
| statefulset.actRunner.pullPolicy | string | `"IfNotPresent"` | |
| statefulset.actRunner.repository | string | `"gitea/act_runner"` | |
| statefulset.actRunner.tag | string | `"0.2.11"` | |
| statefulset.affinity | object | `{}` | |
| statefulset.annotations | object | `{}` | |
| statefulset.dind.extraEnvs | list | `[]` | |
| statefulset.dind.extraVolumeMounts | list | `[]` | |
| statefulset.dind.pullPolicy | string | `"IfNotPresent"` | |
| statefulset.dind.repository | string | `"docker"` | |
| statefulset.dind.tag | string | `"25.0.2-dind"` | |
| statefulset.extraVolumes | list | `[]` | |
| statefulset.labels | object | `{}` | |
| statefulset.nodeSelector | object | `{}` | |
| statefulset.persistence.size | string | `"1Gi"` | |
| statefulset.persistence.storageClass | string | `""` | |
| statefulset.replicas | int | `1` | |
| statefulset.resources | object | `{}` | |
| statefulset.tolerations | list | `[]` | |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)

View File

@@ -0,0 +1,102 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "gitea.actions.name" -}}
{{- default .Chart.Name .Values.global.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gitea.actions.fullname" -}}
{{- if .Values.global.fullnameOverride -}}
{{- .Values.global.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.global.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gitea.actions.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Storage Class
*/}}
{{- define "gitea.actions.persistence.storageClass" -}}
{{- $storageClass := (tpl ( default "" .Values.statefulset.persistence.storageClass) .) | default (tpl ( default "" .Values.global.storageClass) .) }}
{{- if $storageClass }}
storageClassName: {{ $storageClass | quote }}
{{- end }}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "gitea.actions.labels" -}}
helm.sh/chart: {{ include "gitea.actions.chart" . }}
app: {{ include "gitea.actions.name" . }}
{{ include "gitea.actions.selectorLabels" . }}
app.kubernetes.io/version: {{ .Values.statefulset.actRunner.tag | default .Chart.AppVersion | quote }}
version: {{ .Values.statefulset.actRunner.tag | default .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{- define "gitea.actions.labels.actRunner" -}}
helm.sh/chart: {{ include "gitea.actions.chart" . }}
app: {{ include "gitea.actions.name" . }}-act-runner
{{ include "gitea.actions.selectorLabels.actRunner" . }}
app.kubernetes.io/version: {{ .Values.statefulset.actRunner.tag | default .Chart.AppVersion | quote }}
version: {{ .Values.statefulset.actRunner.tag | default .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "gitea.actions.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gitea.actions.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "gitea.actions.selectorLabels.actRunner" -}}
app.kubernetes.io/name: {{ include "gitea.actions.name" . }}-act-runner
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "gitea.actions.local_root_url" -}}
{{- .Values.giteaRootURL -}}
{{- end -}}
{{/*
Parse the http url to hostname + port separated by space for the nc command
*/}}
{{- define "gitea.actions.nc" -}}
{{- $url := include "gitea.actions.local_root_url" . | urlParse -}}
{{- $host := get $url "host" -}}
{{- $scheme := get $url "scheme" -}}
{{- $port := "80" -}}
{{- if contains ":" $host -}}
{{- $hostAndPort := regexSplit ":" $host 2 -}}
{{- $host = index $hostAndPort 0 -}}
{{- $port = index $hostAndPort 1 -}}
{{- else if eq $scheme "https" -}}
{{- $port = "443" -}}
{{- else if eq $scheme "http" -}}
{{- $port = "80" -}}
{{- end -}}
{{- printf "%s %s" $host $port -}}
{{- end -}}
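
To illustrate, with `giteaRootURL: https://gitea.example.com` (a placeholder), `gitea.actions.nc` resolves to `gitea.example.com 443`, so the `nc` wait that consumes this helper in the StatefulSet's init container renders roughly as:

```sh
# Rendered wait loop: block until the Gitea instance is reachable.
while ! nc -z gitea.example.com 443; do
  sleep 5
done
```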

View File

@@ -0,0 +1,15 @@
{{- if .Values.enabled }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "gitea.actions.fullname" . }}-act-runner-config
namespace: {{ .Values.namespace | default .Release.Namespace }}
labels:
{{- include "gitea.actions.labels" . | nindent 4 }}
data:
config.yaml: |
{{- with .Values.statefulset.actRunner.config -}}
{{ . | nindent 4}}
{{- end -}}
{{- end }}

View File

@@ -0,0 +1,127 @@
{{- if .Values.enabled }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
{{- include "gitea.actions.labels.actRunner" . | nindent 4 }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- with .Values.statefulset.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "gitea.actions.fullname" . }}-act-runner
namespace: {{ .Values.namespace | default .Release.Namespace }}
spec:
replicas: {{ .Values.statefulset.replicas }}
selector:
matchLabels:
{{- include "gitea.actions.selectorLabels.actRunner" . | nindent 6 }}
template:
metadata:
labels:
{{- include "gitea.actions.labels.actRunner" . | nindent 8 }}
{{- with .Values.statefulset.labels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
initContainers:
- name: init-gitea
image: "{{ .Values.init.image.repository }}:{{ .Values.init.image.tag }}"
command:
- sh
- -c
- |
while ! nc -z {{ include "gitea.actions.nc" . }}; do
sleep 5
done
containers:
- name: act-runner
image: "{{ .Values.statefulset.actRunner.repository }}:{{ .Values.statefulset.actRunner.tag }}"
imagePullPolicy: {{ .Values.statefulset.actRunner.pullPolicy }}
workingDir: /data
env:
- name: DOCKER_HOST
value: tcp://127.0.0.1:2376
- name: DOCKER_TLS_VERIFY
value: "1"
- name: DOCKER_CERT_PATH
value: /certs/server
- name: GITEA_RUNNER_REGISTRATION_TOKEN
valueFrom:
secretKeyRef:
name: "{{ .Values.existingSecret | default "gitea-actions-token" }}"
key: "{{ .Values.existingSecretKey | default "token" }}"
- name: GITEA_INSTANCE_URL
value: {{ include "gitea.actions.local_root_url" . }}
- name: CONFIG_FILE
value: /actrunner/config.yaml
resources:
{{- toYaml .Values.statefulset.resources | nindent 12 }}
volumeMounts:
- mountPath: /actrunner/config.yaml
name: act-runner-config
subPath: config.yaml
- mountPath: /certs/server
name: docker-certs
- mountPath: /data
name: data-act-runner
{{- with .Values.statefulset.actRunner.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
- name: dind
image: "{{ .Values.statefulset.dind.repository }}:{{ .Values.statefulset.dind.tag }}"
imagePullPolicy: {{ .Values.statefulset.dind.pullPolicy }}
env:
- name: DOCKER_HOST
value: tcp://127.0.0.1:2376
- name: DOCKER_TLS_VERIFY
value: "1"
- name: DOCKER_CERT_PATH
value: /certs/server
{{- if .Values.statefulset.dind.extraEnvs }}
{{- toYaml .Values.statefulset.dind.extraEnvs | nindent 12 }}
{{- end }}
securityContext:
privileged: true
resources:
{{- toYaml .Values.statefulset.resources | nindent 12 }}
volumeMounts:
- mountPath: /certs/server
name: docker-certs
{{- with .Values.statefulset.dind.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- range $key, $value := .Values.statefulset.nodeSelector }}
nodeSelector:
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- with .Values.statefulset.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.statefulset.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: act-runner-config
configMap:
name: {{ include "gitea.actions.fullname" . }}-act-runner-config
- name: docker-certs
emptyDir: {}
{{- with .Values.statefulset.extraVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
volumeClaimTemplates:
- metadata:
name: data-act-runner
spec:
accessModes: [ "ReadWriteOnce" ]
{{- include "gitea.actions.persistence.storageClass" . | nindent 8 }}
resources:
requests:
storage: {{ .Values.statefulset.persistence.size }}
{{- end }}

View File

@@ -0,0 +1,102 @@
# Configure Gitea Actions
# - must enable persistence if the job is enabled
## @section Gitea Actions
#
## @param enabled Create an act runner StatefulSet.
## @param init.image.repository The image used for the init containers
## @param init.image.tag The image tag used for the init containers
## @param statefulset.annotations Act runner annotations
## @param statefulset.labels Act runner labels
## @param statefulset.resources Act runner resources
## @param statefulset.nodeSelector NodeSelector for the statefulset
## @param statefulset.tolerations Tolerations for the statefulset
## @param statefulset.affinity Affinity for the statefulset
## @param statefulset.extraVolumes Extra volumes for the statefulset
## @param statefulset.actRunner.repository The Gitea act runner image
## @param statefulset.actRunner.tag The Gitea act runner tag
## @param statefulset.actRunner.pullPolicy The Gitea act runner pullPolicy
## @param statefulset.actRunner.extraVolumeMounts Allows mounting extra volumes in the act runner container
## @param statefulset.actRunner.config [default: Too complex. See values.yaml] Act runner custom configuration. See [Act Runner documentation](https://docs.gitea.com/usage/actions/act-runner#configuration) for details.
## @param statefulset.dind.repository The Docker-in-Docker image
## @param statefulset.dind.tag The Docker-in-Docker image tag
## @param statefulset.dind.pullPolicy The Docker-in-Docker pullPolicy
## @param statefulset.dind.extraVolumeMounts Allows mounting extra volumes in the Docker-in-Docker container
## @param statefulset.dind.extraEnvs Allows adding custom environment variables, such as `DOCKER_IPTABLES_LEGACY`
## @param statefulset.persistence.size Size for persistence to store act runner data
## @param provisioning.enabled Create a job that will create and save the token in a Kubernetes Secret
## @param provisioning.annotations Job's annotations
## @param provisioning.labels Job's labels
## @param provisioning.resources Job's resources
## @param provisioning.nodeSelector NodeSelector for the job
## @param provisioning.tolerations Tolerations for the job
## @param provisioning.affinity Affinity for the job
## @param provisioning.ttlSecondsAfterFinished ttl for the job after finished in order to allow helm to properly recognize that the job completed
## @param provisioning.publish.repository The image that can create the secret via kubectl
## @param provisioning.publish.tag The publish image tag that can create the secret
## @param provisioning.publish.pullPolicy The publish image pullPolicy that can create the secret
## @param existingSecret Secret that contains the token
## @param existingSecretKey Secret key
## @param giteaRootURL URL the act_runner registers and connect with
enabled: true
statefulset:
replicas: 1
annotations: {}
labels: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
extraVolumes: []
actRunner:
repository: gitea/act_runner
tag: 0.2.11
pullPolicy: IfNotPresent
extraVolumeMounts: []
# See full example here: https://gitea.com/gitea/act_runner/src/branch/main/internal/pkg/config/config.example.yaml
config: |
log:
level: debug
cache:
enabled: false
dind:
repository: docker
tag: 25.0.2-dind
pullPolicy: IfNotPresent
extraVolumeMounts: []
# If the container keeps crashing in your environment, you might have to add the `DOCKER_IPTABLES_LEGACY` environment variable.
# See https://github.com/docker-library/docker/issues/463#issuecomment-1881909456
extraEnvs:
[]
# - name: "DOCKER_IPTABLES_LEGACY"
# value: "1"
persistence:
storageClass: ""
size: 1Gi
init:
image:
repository: busybox
tag: "1.37.0"
## Specify an existing token secret
##
existingSecret: ""
existingSecretKey: ""
## Specify the root URL of the Gitea instance
giteaRootURL: ""
## @section Global
#
## @param global.imageRegistry global image registry override
## @param global.storageClass global storage class override
global:
imageRegistry: ""
storageClass: ""
nameOverride: ""
fullnameOverride: ""
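
Putting the documented parameters together, a minimal install needs only the Gitea root URL and a registration-token Secret (`gitea-actions-token` with key `token` is the fallback the StatefulSet uses when `existingSecret` is left empty); a sketch with placeholder values:

```sh
kubectl create secret generic gitea-actions-token \
  --from-literal=token="<runner-registration-token>"
helm install gitea-actions alexlebens/gitea-actions \
  --set giteaRootURL="https://gitea.example.com"
```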

View File

@@ -1,12 +0,0 @@
apiVersion: v2
name: homepage
version: 0.0.1
description: Chart for benphelps homepage
keywords:
- dashboard
sources:
- https://github.com/gethomepage/homepage
maintainers:
- name: alexlebens
icon: https://github.com/benphelps/homepage/blob/de584eae8f12a0d257e554e9511ef19bd2a1232c/public/mstile-150x150.png
appVersion: 0.8.7

View File

@@ -1,18 +0,0 @@
## Introduction
[Homepage](https://github.com/benphelps/homepage)
A modern (fully static, fast), secure (fully proxied), highly customizable application dashboard with integrations for more than 25 services and translations for over 15 languages. Easily configured via YAML files (or discovery via docker labels).
This chart bootstraps a [Homepage](https://github.com/benphelps/homepage) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes
- Helm
- Traefik v2 / IngressRoute
- Authentik / Auth
## Parameters
See the [values files](values.yaml).

View File

@@ -1,20 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: homepage
subjects:
- kind: ServiceAccount
name: homepage
namespace: {{ .Release.Namespace }}

View File

@@ -1,52 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- nodes
verbs:
- get
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- apiGroups:
- traefik.containo.us
- traefik.io
resources:
- ingressroutes
verbs:
- get
- list
- apiGroups:
- metrics.k8s.io
resources:
- nodes
- pods
verbs:
- get
- list
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions/status
verbs:
- get

View File

@@ -1,37 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: homepage-config
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
data:
bookmarks.yaml: {{- if .Values.config.bookmarks }} |
{{- .Values.config.bookmarks | toYaml | nindent 4}}
{{- else }} ""
{{- end }}
docker.yaml: {{- if .Values.config.docker }} |
{{- .Values.config.docker | toYaml | nindent 4 }}
{{- else }} ""
{{- end }}
kubernetes.yaml: {{- if .Values.config.kubernetes }} |
{{- .Values.config.kubernetes | toYaml | nindent 4 }}
{{- else }} ""
{{- end }}
services.yaml: {{- if .Values.config.services }} |
{{- .Values.config.services | toYaml | nindent 4 }}
{{- else }} ""
{{- end }}
settings.yaml: {{- if .Values.config.settings }} |
{{- .Values.config.settings | toYaml | nindent 4 }}
{{- else }} ""
{{- end }}
widgets.yaml: {{- if .Values.config.widgets }} |
{{- .Values.config.widgets | toYaml | nindent 4 }}
{{- else }} ""
{{- end }}

View File

@@ -1,92 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
spec:
revisionHistoryLimit: 3
replicas: {{ .Values.deployment.replicas }}
strategy:
type: {{ .Values.deployment.strategy }}
selector:
matchLabels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: homepage
automountServiceAccountToken: true
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.deployment.image.repository }}:{{ .Values.deployment.image.tag }}"
imagePullPolicy: {{ .Values.deployment.image.imagePullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.http.port }}
protocol: TCP
env:
{{- range $k,$v := .Values.deployment.env }}
- name: {{ $k }}
value: {{ $v | quote }}
{{- end }}
volumeMounts:
- name: homepage-config
subPath: bookmarks.yaml
mountPath: /app/config/bookmarks.yaml
- name: homepage-config
subPath: docker.yaml
mountPath: /app/config/docker.yaml
- name: homepage-config
subPath: kubernetes.yaml
mountPath: /app/config/kubernetes.yaml
- name: homepage-config
subPath: services.yaml
mountPath: /app/config/services.yaml
- name: homepage-config
subPath: settings.yaml
mountPath: /app/config/settings.yaml
- name: homepage-config
subPath: widgets.yaml
mountPath: /app/config/widgets.yaml
- name: logs
mountPath: /app/config/logs
resources:
{{- toYaml .Values.deployment.resources | nindent 12 }}
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 10
tcpSocket:
port: {{ .Values.service.http.port }}
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 0
periodSeconds: 10
tcpSocket:
port: {{ .Values.service.http.port }}
timeoutSeconds: 1
startupProbe:
failureThreshold: 30
initialDelaySeconds: 0
periodSeconds: 5
tcpSocket:
port: {{ .Values.service.http.port }}
timeoutSeconds: 1
volumes:
- name: homepage-config
configMap:
name: homepage-config
- name: logs
emptyDir: {}

View File

@@ -1,33 +0,0 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: "Host(`{{ .Values.ingressRoute.host }}`)"
middlewares:
- name: authentik
namespace: {{ .Release.Namespace }}
priority: 10
services:
- kind: Service
name: homepage
port: {{ .Values.service.http.port }}
- kind: Rule
match: "Host(`{{ .Values.ingressRoute.host }}`) && PathPrefix(`/outpost.goauthentik.io/`)"
priority: 15
services:
- kind: Service
name: {{ .Values.ingressRoute.authentik.outpost }}
port: {{ .Values.ingressRoute.authentik.port }}

View File

@@ -1,28 +0,0 @@
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: authentik
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: auth
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
spec:
forwardAuth:
address: "http://{{ .Values.ingressRoute.authentik.outpost }}.authentik:{{ .Values.ingressRoute.authentik.port }}/outpost.goauthentik.io/auth/traefik"
trustForwardHeader: true
authResponseHeaders:
- X-authentik-username
- X-authentik-groups
- X-authentik-email
- X-authentik-name
- X-authentik-uid
- X-authentik-jwt
- X-authentik-meta-jwks
- X-authentik-meta-outpost
- X-authentik-meta-provider
- X-authentik-meta-app
- X-authentik-meta-version

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: "{{ .Release.Name }}-sa-token"
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
annotations:
kubernetes.io/service-account.name: homepage

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
secrets:
- name: "{{ .Release.Name }}-sa-token"

View File

@@ -1,22 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: homepage
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: web
app.kubernetes.io/part-of: homepage
app.kubernetes.io/managed-by: helm
spec:
type: ClusterIP
ports:
- port: {{ .Values.service.http.port }}
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: homepage
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,31 +0,0 @@
deployment:
replicas: 1
strategy: Recreate
image:
repository: ghcr.io/benphelps/homepage
tag: v0.8.7
imagePullPolicy: IfNotPresent
env:
resources:
requests:
memory: 50Mi
cpu: 10m
limits:
memory: 200Mi
cpu: 500m
service:
http:
port: 3000
ingressRoute:
host: homepage.alexlebens.net
authentik:
outpost: authentik-proxy-outpost
port: 9000
config:
bookmarks:
services:
widgets:
kubernetes:
mode: cluster
docker:
settings:

View File

@@ -0,0 +1,14 @@
apiVersion: v2
name: postgres-cluster
version: 6.4.4
description: Cloudnative-pg Cluster
keywords:
- database
- postgres
sources:
- https://github.com/cloudnative-pg/cloudnative-pg
- https://github.com/cloudnative-pg/charts/tree/main/charts/cluster
maintainers:
- name: alexlebens
icon: https://avatars.githubusercontent.com/u/100373852?s=48&v=4
appVersion: v1.26.0

View File

@@ -0,0 +1,115 @@
# postgres-cluster
![Version: 6.4.4](https://img.shields.io/badge/Version-6.4.4-informational?style=flat-square) ![AppVersion: v1.26.0](https://img.shields.io/badge/AppVersion-v1.26.0-informational?style=flat-square)
Cloudnative-pg Cluster
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| alexlebens | | |
## Source Code
* <https://github.com/cloudnative-pg/cloudnative-pg>
* <https://github.com/cloudnative-pg/charts/tree/main/charts/cluster>
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| backup | object | `{"enabled":false,"method":"objectStore","objectStore":[],"scheduledBackups":[]}` | Backup settings |
| backup.enabled | bool | `false` | You need to configure backups manually, so backups are disabled by default. |
| backup.method | string | `"objectStore"` | Method to create backups, options currently are only objectStore |
| backup.objectStore | list | `[]` | Options for object store backups |
| backup.scheduledBackups | list | `[]` | List of scheduled backups |
| cluster | object | `{"additionalLabels":{},"affinity":{"enablePodAntiAffinity":true,"topologyKey":"kubernetes.io/hostname"},"annotations":{},"certificates":{},"enablePDB":true,"enableSuperuserAccess":false,"image":{"repository":"ghcr.io/cloudnative-pg/postgresql","tag":"17.5-1-bullseye"},"imagePullPolicy":"IfNotPresent","imagePullSecrets":[],"initdb":{},"instances":3,"logLevel":"info","monitoring":{"customQueries":[],"customQueriesSecret":[],"disableDefaultQueries":false,"enabled":false,"podMonitor":{"enabled":true,"metricRelabelings":[],"relabelings":[]},"prometheusRule":{"enabled":false,"excludeRules":[]}},"postgresGID":-1,"postgresUID":-1,"postgresql":{"ldap":{},"parameters":{"hot_standby_feedback":"on","max_slot_wal_keep_size":"2000MB","shared_buffers":"128MB"},"pg_hba":[],"pg_ident":[],"shared_preload_libraries":[],"synchronous":{}},"primaryUpdateMethod":"switchover","primaryUpdateStrategy":"unsupervised","priorityClassName":"","resources":{"limits":{"hugepages-2Mi":"256Mi"},"requests":{"cpu":"100m","memory":"256Mi"}},"roles":[],"serviceAccountTemplate":{},"services":{},"storage":{"size":"10Gi","storageClass":""},"superuserSecret":"","walStorage":{"enabled":true,"size":"2Gi","storageClass":""}}` | Cluster settings |
| cluster.affinity | object | `{"enablePodAntiAffinity":true,"topologyKey":"kubernetes.io/hostname"}` | Affinity/Anti-affinity rules for Pods. See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-AffinityConfiguration |
| cluster.certificates | object | `{}` | The configuration for the CA and related certificates. See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-CertificatesConfiguration |
| cluster.enablePDB | bool | `true` | Allow to disable PDB, mainly useful for upgrade of single-instance clusters or development purposes See: https://cloudnative-pg.io/documentation/current/kubernetes_upgrade/#pod-disruption-budgets |
| cluster.enableSuperuserAccess | bool | `false` | When this option is enabled, the operator will use the SuperuserSecret to update the postgres user password. If the secret is not present, the operator will automatically create one. When this option is disabled, the operator will ignore the SuperuserSecret content, delete it when automatically created, and then blank the password of the postgres user by setting it to NULL. |
| cluster.image | object | `{"repository":"ghcr.io/cloudnative-pg/postgresql","tag":"17.5-1-bullseye"}` | Default image |
| cluster.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| cluster.imagePullSecrets | list | `[]` | The list of pull secrets to be used to pull the images. See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-LocalObjectReference |
| cluster.initdb | object | `{}` | Bootstrap is the configuration of the bootstrap process when initdb is used. See: https://cloudnative-pg.io/documentation/current/bootstrap/ See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-bootstrapinitdb |
| cluster.logLevel | string | `"info"` | The instances' log level, one of the following values: error, warning, info (default), debug, trace |
| cluster.monitoring | object | `{"customQueries":[],"customQueriesSecret":[],"disableDefaultQueries":false,"enabled":false,"podMonitor":{"enabled":true,"metricRelabelings":[],"relabelings":[]},"prometheusRule":{"enabled":false,"excludeRules":[]}}` | Enable default monitoring and alert rules |
| cluster.monitoring.customQueries | list | `[]` | Custom Prometheus metrics Will be stored in the ConfigMap |
| cluster.monitoring.customQueriesSecret | list | `[]` | The list of secrets containing the custom queries |
| cluster.monitoring.disableDefaultQueries | bool | `false` | Whether the default queries should be injected. Set it to true if you don't want to inject default queries into the cluster. |
| cluster.monitoring.enabled | bool | `false` | Whether to enable monitoring |
| cluster.monitoring.podMonitor.enabled | bool | `true` | Whether to enable the PodMonitor |
| cluster.monitoring.podMonitor.metricRelabelings | list | `[]` | The list of metric relabelings for the PodMonitor. Applied to samples before ingestion. |
| cluster.monitoring.podMonitor.relabelings | list | `[]` | The list of relabelings for the PodMonitor. Applied to samples before scraping. |
| cluster.monitoring.prometheusRule.enabled | bool | `false` | Whether to enable the PrometheusRule automated alerts |
| cluster.monitoring.prometheusRule.excludeRules | list | `[]` | Exclude specified rules |
| cluster.postgresUID | int | `-1` | The UID and GID of the postgres user inside the image, defaults to 26 |
| cluster.postgresql | object | `{"ldap":{},"parameters":{"hot_standby_feedback":"on","max_slot_wal_keep_size":"2000MB","shared_buffers":"128MB"},"pg_hba":[],"pg_ident":[],"shared_preload_libraries":[],"synchronous":{}}` | Parameters to be set for the database itself See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-PostgresConfiguration |
| cluster.postgresql.ldap | object | `{}` | PostgreSQL LDAP configuration (see https://cloudnative-pg.io/documentation/current/postgresql_conf/#ldap-configuration) |
| cluster.postgresql.parameters | object | `{"hot_standby_feedback":"on","max_slot_wal_keep_size":"2000MB","shared_buffers":"128MB"}` | PostgreSQL configuration options (postgresql.conf) |
| cluster.postgresql.pg_hba | list | `[]` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
| cluster.postgresql.pg_ident | list | `[]` | PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file) |
| cluster.postgresql.shared_preload_libraries | list | `[]` | Lists of shared preload libraries to add to the default ones |
| cluster.postgresql.synchronous | object | `{}` | Quorum-based Synchronous Replication |
| cluster.primaryUpdateMethod | string | `"switchover"` | Method to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated. It can be switchover (default) or restart. |
| cluster.primaryUpdateStrategy | string | `"unsupervised"` | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (unsupervised - default) or manual (supervised) |
| cluster.resources | object | `{"limits":{"hugepages-2Mi":"256Mi"},"requests":{"cpu":"100m","memory":"256Mi"}}` | Resources requirements of every generated Pod. Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information. We strongly advise you use the same setting for limits and requests so that your cluster pods are given a Guaranteed QoS. See: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/ |
| cluster.roles | list | `[]` | This feature enables declarative management of existing roles, as well as the creation of new roles if they are not already present in the database. See: https://cloudnative-pg.io/documentation/current/declarative_role_management/ |
| cluster.serviceAccountTemplate | object | `{}` | Configure the metadata of the generated service account |
| cluster.services | object | `{}` | Customization of service definitions. Please refer to https://cloudnative-pg.io/documentation/current/service_management/ |
| cluster.storage | object | `{"size":"10Gi","storageClass":""}` | Default storage size |
| mode | string | `"standalone"` | Cluster mode of operation. Available modes: * `standalone` - Default mode. Creates new or updates an existing CNPG cluster. * `recovery` - Same as standalone but creates a cluster from a backup, object store or via pg_basebackup |
| nameOverride | string | `""` | Override the name of the cluster |
| namespaceOverride | string | `""` | Override the namespace of the chart |
| poolers | list | `[]` | List of PgBouncer poolers |
| recovery | object | `{"backup":{"backupName":"","database":"app","owner":"","pitrTarget":{"time":""}},"import":{"databases":[],"pgDumpExtraOptions":[],"pgRestoreExtraOptions":[],"postImportApplicationSQL":[],"roles":[],"schemaOnly":false,"source":{"database":"app","host":"","passwordSecret":{"create":false,"key":"password","name":"","value":""},"port":5432,"sslCertSecret":{"key":"","name":""},"sslKeySecret":{"key":"","name":""},"sslMode":"verify-full","sslRootCertSecret":{"key":"","name":""},"username":"app"},"type":"microservice"},"method":"backup","objectStore":{"clusterName":"","data":{"compression":"snappy","encryption":"","jobs":1},"database":"app","destinationPath":"","endpointCA":{"create":false,"key":"","name":""},"endpointCredentials":"","endpointURL":"https://nyc3.digitaloceanspaces.com","index":1,"name":"recovery","owner":"","pitrTarget":{"time":""},"wal":{"compression":"snappy","encryption":"","maxParallel":1}},"pgBaseBackup":{"database":"app","owner":"","secret":"","source":{"database":"app","host":"","passwordSecret":{"create":false,"key":"password","name":"","value":""},"port":5432,"sslCertSecret":{"key":"","name":""},"sslKeySecret":{"key":"","name":""},"sslMode":"verify-full","sslRootCertSecret":{"key":"","name":""},"username":""}}}` | Recovery settings when booting cluster from external cluster |
| recovery.backup.backupName | string | `""` | Name of the backup to recover from. |
| recovery.backup.database | string | `"app"` | Name of the database used by the application. Default: `app`. |
| recovery.backup.owner | string | `""` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. |
| recovery.backup.pitrTarget | object | `{"time":""}` | Point in time recovery target. Specify one of the following: |
| recovery.backup.pitrTarget.time | string | `""` | Time in RFC3339 format |
| recovery.import.databases | list | `[]` | Databases to import |
| recovery.import.pgDumpExtraOptions | list | `[]` | List of custom options to pass to the `pg_dump` command. IMPORTANT: Use these options with caution and at your own risk, as the operator does not validate their content. Be aware that certain options may conflict with the operator's intended functionality or design. |
| recovery.import.pgRestoreExtraOptions | list | `[]` | List of custom options to pass to the `pg_restore` command. IMPORTANT: Use these options with caution and at your own risk, as the operator does not validate their content. Be aware that certain options may conflict with the operator's intended functionality or design. |
| recovery.import.postImportApplicationSQL | list | `[]` | List of SQL queries to be executed as a superuser in the application database right after is imported. To be used with extreme care. Only available in microservice type. |
| recovery.import.roles | list | `[]` | Roles to import |
| recovery.import.schemaOnly | bool | `false` | When set to true, only the pre-data and post-data sections of pg_restore are invoked, avoiding data import. |
| recovery.import.source | object | `{"database":"app","host":"","passwordSecret":{"create":false,"key":"password","name":"","value":""},"port":5432,"sslCertSecret":{"key":"","name":""},"sslKeySecret":{"key":"","name":""},"sslMode":"verify-full","sslRootCertSecret":{"key":"","name":""},"username":"app"}` | Configuration for the source database |
| recovery.import.source.passwordSecret.create | bool | `false` | Whether to create a secret for the password |
| recovery.import.source.passwordSecret.key | string | `"password"` | The key in the secret containing the password |
| recovery.import.source.passwordSecret.name | string | `""` | Name of the secret containing the password |
| recovery.import.source.passwordSecret.value | string | `""` | The password value to use when creating the secret |
| recovery.import.type | string | `"microservice"` | One of `microservice` or `monolith.` See: https://cloudnative-pg.io/documentation/current/database_import/#how-it-works |
| recovery.method | string | `"backup"` | Available recovery methods: * `backup` - Recovers a CNPG cluster from a CNPG backup (PITR supported). Needs to be on the same cluster in the same namespace. * `objectStore` - Recovers a CNPG cluster from a barman object store (PITR supported). * `pgBaseBackup` - Recovers a CNPG cluster via a streaming replication protocol. Useful if you want to migrate databases to CloudNativePG, even from outside Kubernetes. * `import` - Import one or more databases from an existing Postgres cluster. |
| recovery.objectStore.clusterName | string | `""` | Override the name of the backup cluster, defaults to "cluster.name" |
| recovery.objectStore.data.compression | string | `"snappy"` | Data compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`. |
| recovery.objectStore.data.encryption | string | `""` | Whether to instruct the storage provider to encrypt data files. One of `` (use the storage container default), `AES256` or `aws:kms`. |
| recovery.objectStore.data.jobs | int | `1` | Number of data files to be archived or restored in parallel. |
| recovery.objectStore.database | string | `"app"` | Name of the database used by the application. Default: `app`. |
| recovery.objectStore.destinationPath | string | `""` | Overrides the provider specific default path. Defaults to: S3: s3://<bucket><path> Azure: https://<storageAccount>.<serviceName>.core.windows.net/<containerName><path> Google: gs://<bucket><path> |
| recovery.objectStore.endpointCA | object | `{"create":false,"key":"","name":""}` | Specifies a CA bundle to validate a privately signed certificate. |
| recovery.objectStore.endpointCA.create | bool | `false` | Creates a secret with the given value if true, otherwise uses an existing secret. |
| recovery.objectStore.endpointCredentials | string | `""` | Specifies secret that contains S3 credentials, should contain the keys ACCESS_KEY_ID and ACCESS_SECRET_KEY |
| recovery.objectStore.endpointURL | string | `"https://nyc3.digitaloceanspaces.com"` | Overrides the provider specific default endpoint. Defaults to: S3: https://s3.<region>.amazonaws.com. Leave empty if using the default S3 endpoint |
| recovery.objectStore.index | int | `1` | Generate external cluster name, uses: {{ .Release.Name }}-postgresql-<major version>-backup-index-{{ index }} |
| recovery.objectStore.name | string | `"recovery"` | Object store backup name |
| recovery.objectStore.owner | string | `""` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. |
| recovery.objectStore.pitrTarget | object | `{"time":""}` | Point in time recovery target. Specify one of the following: |
| recovery.objectStore.pitrTarget.time | string | `""` | Time in RFC3339 format |
| recovery.objectStore.wal | object | `{"compression":"snappy","encryption":"","maxParallel":1}` | WAL file settings (compression, encryption, parallelism) |
| recovery.objectStore.wal.compression | string | `"snappy"` | WAL compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`. |
| recovery.objectStore.wal.encryption | string | `""` | Whether to instruct the storage provider to encrypt WAL files. One of `` (use the storage container default), `AES256` or `aws:kms`. |
| recovery.objectStore.wal.maxParallel | int | `1` | Number of WAL files to be archived or restored in parallel. |
| recovery.pgBaseBackup.database | string | `"app"` | Name of the database used by the application. Default: `app`. |
| recovery.pgBaseBackup.owner | string | `""` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. |
| recovery.pgBaseBackup.secret | string | `""` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch |
| recovery.pgBaseBackup.source | object | `{"database":"app","host":"","passwordSecret":{"create":false,"key":"password","name":"","value":""},"port":5432,"sslCertSecret":{"key":"","name":""},"sslKeySecret":{"key":"","name":""},"sslMode":"verify-full","sslRootCertSecret":{"key":"","name":""},"username":""}` | Configuration for the source database |
| recovery.pgBaseBackup.source.passwordSecret.create | bool | `false` | Whether to create a secret for the password |
| recovery.pgBaseBackup.source.passwordSecret.key | string | `"password"` | The key in the secret containing the password |
| recovery.pgBaseBackup.source.passwordSecret.name | string | `""` | Name of the secret containing the password |
| recovery.pgBaseBackup.source.passwordSecret.value | string | `""` | The password value to use when creating the secret |
| type | string | `"postgresql"` | Type of the CNPG database. Available types: * `postgresql` * `tensorchord` |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2)
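For orientation, here is a minimal sketch of a values override that boots a cluster from an existing object store backup using the recovery settings documented above. The bucket, endpoint and credentials secret are placeholders, not chart defaults:

```yaml
mode: recovery
recovery:
  method: objectStore
  objectStore:
    name: recovery
    # Hypothetical bucket and endpoint, shown only for illustration
    destinationPath: "s3://my-bucket/backups"
    endpointURL: "https://s3.us-east-1.amazonaws.com"
    # Existing secret containing ACCESS_KEY_ID and ACCESS_SECRET_KEY
    endpointCredentials: "my-s3-credentials"
    pitrTarget:
      time: ""  # leave empty to recover to the latest available point
```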

View File

@@ -0,0 +1,16 @@
{{- $alert := "CNPGClusterBackendsWaitingWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has a backend waiting for longer than 5 minutes.
description: |-
Pod {{`{{`}} $labels.pod {{`}}`}}
has been waiting for longer than 5 minutes
expr: |
cnpg_backends_waiting_total > 300
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,16 @@
{{- $alert := "CNPGClusterDatabaseDeadlockConflictsWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has over 10 deadlock conflicts.
description: |-
There are over 10 deadlock conflicts in
{{`{{`}} $labels.pod {{`}}`}}
expr: |
cnpg_pg_stat_database_deadlocks > 10
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,26 @@
{{- $alert := "CNPGClusterHACritical" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has no standby replicas!
description: |-
CloudNativePG Cluster "{{ .labels.job }}" has no ready standby replicas. Your cluster at a severe
risk of data loss and downtime if the primary instance fails.
The primary instance is still online and able to serve queries, although connections to the `-ro` endpoint
will fail. The `-r` endpoint os operating at reduced capacity and all traffic is being served by the main.
This can happen during a normal fail-over or automated minor version upgrades in a cluster with 2 or less
instances. The replaced instance may need some time to catch-up with the cluster primary instance.
This alarm will be always trigger if your cluster is configured to run with only 1 instance. In this
case you may want to silence it.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterHACritical.md
expr: |
max by (job) (cnpg_pg_replication_streaming_replicas{namespace="{{ .namespace }}"} - cnpg_pg_replication_is_wal_receiver_up{namespace="{{ .namespace }}"}) < 1
for: 5m
labels:
severity: critical
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,24 @@
{{- $alert := "CNPGClusterHAWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has fewer than 2 standby replicas.
description: |-
CloudNativePG Cluster "{{ .labels.job }}" has only {{ .value }} standby replicas, putting
your cluster at risk if another instance fails. The cluster is still able to operate normally, although
the `-ro` and `-r` endpoints operate at reduced capacity.
This can happen during a normal fail-over or automated minor version upgrades. The replaced instance may
need some time to catch up with the cluster primary instance.
This alarm will be constantly triggered if your cluster is configured to run with fewer than 3 instances.
In this case you may want to silence it.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterHAWarning.md
expr: |
max by (job) (cnpg_pg_replication_streaming_replicas{namespace="{{ .namespace }}"} - cnpg_pg_replication_is_wal_receiver_up{namespace="{{ .namespace }}"}) < 2
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,17 @@
{{- $alert := "CNPGClusterHighConnectionsCritical" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Instance is using over 95% of the maximum number of connections!
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" instance {{ .labels.pod }} is using {{ .value }}% of
the maximum number of connections.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterHighConnectionsCritical.md
expr: |
sum by (pod) (cnpg_backends_total{namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"}) / max by (pod) (cnpg_pg_settings_setting{name="max_connections", namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"}) * 100 > 95
for: 5m
labels:
severity: critical
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,17 @@
{{- $alert := "CNPGClusterHighConnectionsWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Instance is approaching the maximum number of connections.
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" instance {{ .labels.pod }} is using {{ .value }}% of
the maximum number of connections.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterHighConnectionsWarning.md
expr: |
sum by (pod) (cnpg_backends_total{namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"}) / max by (pod) (cnpg_pg_settings_setting{name="max_connections", namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"}) * 100 > 80
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{- $alert := "CNPGClusterHighReplicationLag" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster high replication lag
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" is experiencing a high replication lag of
{{ .value }}ms.
High replication lag indicates network issues, busy instances, slow queries or suboptimal configuration.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterHighReplicationLag.md
expr: |
max(cnpg_pg_replication_lag{namespace="{{ .namespace }}",pod=~"{{ .podSelector }}"}) * 1000 > 1000
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{- $alert := "CNPGClusterInstancesOnSameNode" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster instances are located on the same node.
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" has {{ .value }}
instances on the same node {{ .labels.node }}.
A failure or scheduled downtime of a single node will lead to a potential service disruption and/or data loss.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterInstancesOnSameNode.md
expr: |
count by (node) (kube_pod_info{namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"}) > 1
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,15 @@
{{- $alert := "CNPGClusterLastFailedArchiveTimeWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster last archiving attempt failed.
description: |-
Archiving failed for {{`{{`}} $labels.pod {{`}}`}}
expr: |
(cnpg_pg_stat_archiver_last_failed_time - cnpg_pg_stat_archiver_last_archived_time) > 1
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,16 @@
{{- $alert := "CNPGClusterLongRunningTransactionWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster query is taking longer than 5 minutes.
description: |-
CloudNativePG Cluster Pod {{`{{`}} $labels.pod {{`}}`}}
is taking more than 5 minutes (300 seconds) for a query.
expr: |-
cnpg_backends_max_tx_duration_seconds > 300
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,24 @@
{{- $alert := "CNPGClusterLowDiskSpaceCritical" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Instance is running out of disk space!
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" is running extremely low on disk space. Check attached PVCs!
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterLowDiskSpaceCritical.md
expr: |
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) > 0.9 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) > 0.9 OR
max(sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
/
sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
*
on(namespace, persistentvolumeclaim) group_left(volume)
kube_pod_spec_volumes_persistentvolumeclaims_info{pod=~"{{ .podSelector }}"}
) > 0.9
for: 5m
labels:
severity: critical
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,24 @@
{{- $alert := "CNPGClusterLowDiskSpaceWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Instance is running out of disk space.
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" is running low on disk space. Check attached PVCs.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterLowDiskSpaceWarning.md
expr: |
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}"})) > 0.7 OR
max(max by(persistentvolumeclaim) (1 - kubelet_volume_stats_available_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"} / kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-wal"})) > 0.7 OR
max(sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
/
sum by (namespace,persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace="{{ .namespace }}", persistentvolumeclaim=~"{{ .podSelector }}-tbs.*"})
*
on(namespace, persistentvolumeclaim) group_left(volume)
kube_pod_spec_volumes_persistentvolumeclaims_info{pod=~"{{ .podSelector }}"}
) > 0.7
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{- $alert := "CNPGClusterOffline" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has no running instances!
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" has no ready instances.
Having an offline cluster means your applications will not be able to access the database, leading to
potential service disruption and/or data loss.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
expr: |
(count(cnpg_collector_up{namespace="{{ .namespace }}",pod=~"{{ .podSelector }}"}) OR on() vector(0)) == 0
for: 5m
labels:
severity: critical
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,16 @@
{{- $alert := "CNPGClusterPGDatabaseXidAgeWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has a high number of transactions from the frozen XID to the current one.
description: |-
Over 300,000,000 transactions from frozen xid
on pod {{`{{`}} $labels.pod {{`}}`}}
expr: |
cnpg_pg_database_xid_age > 300000000
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,15 @@
{{- $alert := "CNPGClusterPGReplicationWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster standby is lagging behind the primary.
description: |-
Standby is lagging behind by over 300 seconds (5 minutes)
expr: |
cnpg_pg_replication_lag > 300
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,16 @@
{{- $alert := "CNPGClusterReplicaFailingReplicationWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has a replica that is failing to replicate.
description: |-
Replica {{`{{`}} $labels.pod {{`}}`}}
is failing to replicate
expr: |
cnpg_pg_replication_in_recovery > cnpg_pg_replication_is_wal_receiver_up
for: 1m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,18 @@
{{- $alert := "CNPGClusterZoneSpreadWarning" -}}
{{- if not (has $alert .excludeRules) -}}
alert: {{ $alert }}
annotations:
summary: CNPG Cluster has instances in the same availability zone.
description: |-
CloudNativePG Cluster "{{ .namespace }}/{{ .cluster }}" has instances in the same availability zone.
A disaster in one availability zone will lead to a potential service disruption and/or data loss.
runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterZoneSpreadWarning.md
expr: |
{{ .Values.cluster.instances }} > count(count by (label_topology_kubernetes_io_zone) (kube_pod_info{namespace="{{ .namespace }}", pod=~"{{ .podSelector }}"} * on(node,instance) group_left(label_topology_kubernetes_io_zone) kube_node_labels)) < 3
for: 5m
labels:
severity: warning
namespace: {{ .namespace }}
cnpg_cluster: {{ .cluster }}
{{- end -}}

View File

@@ -0,0 +1,148 @@
{{- define "cluster.bootstrap" -}}
{{- if eq .Values.mode "standalone" }}
bootstrap:
initdb:
{{- with .Values.cluster.initdb }}
{{- with (omit . "postInitApplicationSQL" "owner" "import") }}
{{- . | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if .Values.cluster.initdb.owner }}
owner: {{ tpl .Values.cluster.initdb.owner . }}
{{- end }}
{{- if eq .Values.type "tensorchord" }}
dataChecksums: true
{{- end }}
{{- if or (eq .Values.type "tensorchord") (.Values.cluster.initdb.postInitApplicationSQL) }}
postInitApplicationSQL:
{{- if eq .Values.type "tensorchord" }}
- ALTER SYSTEM SET search_path TO "$user", public, vectors;
- SET search_path TO "$user", public, vectors;
- CREATE EXTENSION IF NOT EXISTS "vectors";
- CREATE EXTENSION IF NOT EXISTS "cube";
- CREATE EXTENSION IF NOT EXISTS "earthdistance";
- ALTER SCHEMA vectors OWNER TO "app";
- GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA vectors TO "app";
- GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "app";
{{- end }}
{{- with .Values.cluster.initdb }}
{{- range .postInitApplicationSQL }}
{{- printf "- %s" . | nindent 6 }}
{{- end -}}
{{- end }}
{{- end }}
{{- else if eq .Values.mode "recovery" -}}
bootstrap:
{{- if eq .Values.recovery.method "pgBaseBackup" }}
pg_basebackup:
source: pgBaseBackupSource
{{ with .Values.recovery.pgBaseBackup.database }}
database: {{ . }}
{{- end }}
{{ with .Values.recovery.pgBaseBackup.owner }}
owner: {{ . }}
{{- end }}
{{ with .Values.recovery.pgBaseBackup.secret }}
secret:
{{- toYaml . | nindent 6 }}
{{- end }}
externalClusters:
{{- include "cluster.externalSourceCluster" (list "pgBaseBackupSource" .Values.recovery.pgBaseBackup.source) | nindent 2 }}
{{- else if eq .Values.recovery.method "import" }}
initdb:
{{- with .Values.cluster.initdb }}
{{- with (omit . "owner" "import" "postInitApplicationSQL") }}
{{- . | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if .Values.cluster.initdb.owner }}
owner: {{ tpl .Values.cluster.initdb.owner . }}
{{- end }}
import:
source:
externalCluster: importSource
type: {{ .Values.recovery.import.type }}
databases:
{{- if and (gt (len .Values.recovery.import.databases) 1) (eq .Values.recovery.import.type "microservice") }}
{{ fail "Too many databases in import type of microservice!" }}
{{- else}}
{{- with .Values.recovery.import.databases }}
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if eq .Values.recovery.import.type "monolith" }}
roles:
{{- with .Values.recovery.import.roles }}
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if and (.Values.recovery.import.postImportApplicationSQL) (eq .Values.recovery.import.type "microservice") }}
postImportApplicationSQL:
{{- with .Values.recovery.import.postImportApplicationSQL }}
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}
schemaOnly: {{ .Values.recovery.import.schemaOnly }}
{{ with .Values.recovery.import.pgDumpExtraOptions }}
pgDumpExtraOptions:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{ with .Values.recovery.import.pgRestoreExtraOptions }}
pgRestoreExtraOptions:
{{- . | toYaml | nindent 8 }}
{{- end }}
externalClusters:
{{- include "cluster.externalSourceCluster" (list "importSource" .Values.recovery.import.source) | nindent 2 }}
{{- else if eq .Values.recovery.method "backup" }}
recovery:
{{- with .Values.recovery.backup.pitrTarget.time }}
recoveryTarget:
targetTime: {{ . }}
{{- end }}
{{ with .Values.recovery.backup.database }}
database: {{ . }}
{{- end }}
{{ with .Values.recovery.backup.owner }}
owner: {{ . }}
{{- end }}
backup:
name: {{ .Values.recovery.backup.backupName }}
{{- else if eq .Values.recovery.method "objectStore" }}
recovery:
{{- with .Values.recovery.objectStore.pitrTarget.time }}
recoveryTarget:
targetTime: {{ . }}
{{- end }}
{{ with .Values.recovery.objectStore.database }}
database: {{ . }}
{{- end }}
{{ with .Values.recovery.objectStore.owner }}
owner: {{ . }}
{{- end }}
source: {{ include "cluster.recoveryServerName" . }}
externalClusters:
- name: {{ include "cluster.recoveryServerName" . }}
plugin:
name: barman-cloud.cloudnative-pg.io
enabled: true
isWALArchiver: false
parameters:
barmanObjectName: "{{ include "cluster.name" . }}-{{ .Values.recovery.objectStore.name }}"
serverName: {{ include "cluster.recoveryServerName" . }}
{{- else }}
{{ fail "Invalid recovery mode!" }}
{{- end }}
{{- else }}
{{ fail "Invalid cluster mode!" }}
{{- end }}
{{- end }}
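As a rough illustration of the `backup` branch above: with the chart's default recovery values plus a hypothetical backup name, the rendered bootstrap section should look roughly like this:

```yaml
bootstrap:
  recovery:
    database: app
    backup:
      name: nightly-backup  # .Values.recovery.backup.backupName (hypothetical)
```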

View File

@@ -0,0 +1,12 @@
{{- define "cluster.color-error" }}
{{- printf "\033[0;31m%s\033[0m" . -}}
{{- end }}
{{- define "cluster.color-ok" }}
{{- printf "\033[0;32m%s\033[0m" . -}}
{{- end }}
{{- define "cluster.color-warning" }}
{{- printf "\033[0;33m%s\033[0m" . -}}
{{- end }}
{{- define "cluster.color-info" }}
{{- printf "\033[0;34m%s\033[0m" . -}}
{{- end }}

View File

@@ -0,0 +1,33 @@
{{- define "cluster.externalSourceCluster" -}}
{{- $name := first . -}}
{{- $config := last . -}}
- name: {{ $name }}
connectionParameters:
host: {{ $config.host | quote }}
port: {{ $config.port | quote }}
user: {{ $config.username | quote }}
{{- with $config.database }}
dbname: {{ . | quote }}
{{- end }}
sslmode: {{ $config.sslMode | quote }}
{{- if $config.passwordSecret.name }}
password:
name: {{ $config.passwordSecret.name }}
key: {{ $config.passwordSecret.key }}
{{- end }}
{{- if $config.sslKeySecret.name }}
sslKey:
name: {{ $config.sslKeySecret.name }}
key: {{ $config.sslKeySecret.key }}
{{- end }}
{{- if $config.sslCertSecret.name }}
sslCert:
name: {{ $config.sslCertSecret.name }}
key: {{ $config.sslCertSecret.key }}
{{- end }}
{{- if $config.sslRootCertSecret.name }}
sslRootCert:
name: {{ $config.sslRootCertSecret.name }}
key: {{ $config.sslRootCertSecret.key }}
{{- end }}
{{- end }}
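A sketch of what this helper renders for the `pgBaseBackupSource` entry, assuming a hypothetical source host, a username of `app`, and an existing password secret configured in `recovery.pgBaseBackup.source`:

```yaml
externalClusters:
  - name: pgBaseBackupSource
    connectionParameters:
      host: "db.example.com"  # hypothetical source host
      port: "5432"
      user: "app"
      dbname: "app"
      sslmode: "verify-full"
    password:
      name: source-db-credentials  # hypothetical existing secret
      key: password
```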

View File

@@ -0,0 +1,103 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "cluster.name" -}}
{{- if .Values.nameOverride }}
{{- .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-postgresql-%s" .Release.Name ((semver .Values.cluster.image.tag).Major | toString) | trunc 63 | trimSuffix "-" -}}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "cluster.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "cluster.labels" -}}
helm.sh/chart: {{ include "cluster.chart" $ }}
{{ include "cluster.selectorLabels" $ }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.cluster.additionalLabels }}
{{ toYaml . }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "cluster.selectorLabels" -}}
app.kubernetes.io/name: {{ include "cluster.name" $ }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/part-of: {{ .Release.Name }}
{{- end }}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "cluster.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Postgres UID
*/}}
{{- define "cluster.postgresUID" -}}
{{- if ge (int .Values.cluster.postgresUID) 0 -}}
{{- .Values.cluster.postgresUID }}
{{- else -}}
{{- 26 -}}
{{- end -}}
{{- end -}}
{{/*
Postgres GID
*/}}
{{- define "cluster.postgresGID" -}}
{{- if ge (int .Values.cluster.postgresGID) 0 -}}
{{- .Values.cluster.postgresGID }}
{{- else -}}
{{- 26 -}}
{{- end -}}
{{- end -}}
{{/*
Generate recovery server name
*/}}
{{- define "cluster.recoveryServerName" -}}
{{- if .Values.recovery.recoveryServerName -}}
{{- .Values.recovery.recoveryServerName -}}
{{- else -}}
{{- printf "%s-backup-%s" (include "cluster.name" .) (toString .Values.recovery.objectStore.index) | trunc 63 | trimSuffix "-" -}}
{{- end }}
{{- end }}
{{/*
Generate name for recovery object store credentials
*/}}
{{- define "cluster.recoveryCredentials" -}}
{{- if .Values.recovery.endpointCredentials -}}
{{- .Values.recovery.endpointCredentials -}}
{{- else -}}
{{- printf "%s-backup-secret" (include "cluster.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end }}
{{- end }}
{{/*
Generate name for backup object store credentials
*/}}
{{- define "cluster.backupCredentials" -}}
{{- printf "%s-backup-secret" (include "cluster.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end }}
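To make the naming scheme concrete, here is a sketch of the names these helpers should produce for a release called `pg`, with `nameOverride` left empty and the default image tag `17.5-1-bullseye` (semver major 17):

```yaml
# cluster.name                -> pg-postgresql-17
# Cluster resource name       -> pg-postgresql-17-cluster
# cluster.recoveryServerName  -> pg-postgresql-17-backup-1   (recovery.objectStore.index: 1)
# cluster.backupCredentials   -> pg-postgresql-17-backup-secret
```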

View File

@@ -0,0 +1,158 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: {{ include "cluster.name" . }}-cluster
namespace: {{ include "cluster.namespace" . }}
{{- with .Values.cluster.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "cluster.labels" . | nindent 4 }}
spec:
instances: {{ .Values.cluster.instances }}
imageName: "{{ .Values.cluster.image.repository }}:{{ .Values.cluster.image.tag }}"
imagePullPolicy: {{ .Values.cluster.imagePullPolicy }}
{{- with .Values.cluster.imagePullSecrets }}
imagePullSecrets:
{{- . | toYaml | nindent 4 }}
{{- end }}
postgresUID: {{ include "cluster.postgresUID" . }}
postgresGID: {{ include "cluster.postgresGID" . }}
{{ if or (eq .Values.backup.method "objectStore") (eq .Values.recovery.method "objectStore") }}
plugins:
{{ end }}
{{- range $objectStore := .Values.backup.objectStore }}
- name: barman-cloud.cloudnative-pg.io
enabled: true
isWALArchiver: {{ $objectStore.isWALArchiver | default true }}
parameters:
barmanObjectName: "{{ include "cluster.name" $ }}-{{ $objectStore.name }}-backup"
{{- if $objectStore.clusterName }}
serverName: "{{ $objectStore.clusterName }}-backup-{{ $objectStore.index }}"
{{- else }}
serverName: "{{ include "cluster.name" $ }}-backup-{{ $objectStore.index }}"
{{- end }}
{{- end }}
{{ if eq .Values.recovery.method "objectStore" }}
- name: barman-cloud.cloudnative-pg.io
enabled: true
isWALArchiver: false
parameters:
barmanObjectName: "{{ include "cluster.name" . }}-{{ .Values.recovery.objectStore.name }}"
serverName: {{ include "cluster.recoveryServerName" . }}
{{ end }}
storage:
size: {{ .Values.cluster.storage.size }}
{{- if not (empty .Values.cluster.storage.storageClass) }}
storageClass: {{ .Values.cluster.storage.storageClass }}
{{- end }}
{{- if .Values.cluster.walStorage.enabled }}
walStorage:
size: {{ .Values.cluster.walStorage.size }}
{{- if not (empty .Values.cluster.walStorage.storageClass) }}
storageClass: {{ .Values.cluster.walStorage.storageClass }}
{{- end }}
{{- end }}
{{- with .Values.cluster.resources }}
resources:
{{- toYaml . | nindent 4 }}
{{ end }}
{{- with .Values.cluster.affinity }}
affinity:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.cluster.priorityClassName }}
priorityClassName: {{ .Values.cluster.priorityClassName }}
{{- end }}
primaryUpdateMethod: {{ .Values.cluster.primaryUpdateMethod }}
primaryUpdateStrategy: {{ .Values.cluster.primaryUpdateStrategy }}
logLevel: {{ .Values.cluster.logLevel }}
{{- with .Values.cluster.certificates }}
certificates:
{{- toYaml . | nindent 4 }}
{{ end }}
enableSuperuserAccess: {{ .Values.cluster.enableSuperuserAccess }}
{{- with .Values.cluster.superuserSecret }}
superuserSecret:
name: {{ . }}
{{ end }}
enablePDB: {{ .Values.cluster.enablePDB }}
postgresql:
{{- if or (eq .Values.type "tensorchord") (not (empty .Values.cluster.postgresql.shared_preload_libraries)) }}
shared_preload_libraries:
{{- if eq .Values.type "tensorchord" }}
- vectors.so
{{- end }}
{{- with .Values.cluster.postgresql.shared_preload_libraries }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
{{- with .Values.cluster.postgresql.pg_hba }}
pg_hba:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.cluster.postgresql.pg_ident }}
pg_ident:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.cluster.postgresql.ldap }}
ldap:
{{- toYaml . | nindent 6 }}
{{- end}}
{{- with .Values.cluster.postgresql.synchronous }}
synchronous:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- with .Values.cluster.postgresql.parameters }}
parameters:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if not (and (empty .Values.cluster.roles) (empty .Values.cluster.services)) }}
managed:
{{- with .Values.cluster.services }}
services:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- with .Values.cluster.roles }}
roles:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- end }}
{{- with .Values.cluster.serviceAccountTemplate }}
serviceAccountTemplate:
{{- toYaml . | nindent 4 }}
{{- end }}
monitoring:
enablePodMonitor: {{ and .Values.cluster.monitoring.enabled .Values.cluster.monitoring.podMonitor.enabled }}
disableDefaultQueries: {{ .Values.cluster.monitoring.disableDefaultQueries }}
{{- if not (empty .Values.cluster.monitoring.customQueries) }}
customQueriesConfigMap:
- name: {{ include "cluster.name" . }}-monitoring
key: custom-queries
{{- end }}
{{- if not (empty .Values.cluster.monitoring.customQueriesSecret) }}
{{- with .Values.cluster.monitoring.customQueriesSecret }}
customQueriesSecret:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- end }}
{{- if not (empty .Values.cluster.monitoring.podMonitor.relabelings) }}
{{- with .Values.cluster.monitoring.podMonitor.relabelings }}
podMonitorRelabelings:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- end }}
{{- if not (empty .Values.cluster.monitoring.podMonitor.metricRelabelings) }}
{{- with .Values.cluster.monitoring.podMonitor.metricRelabelings }}
podMonitorMetricRelabelings:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- end }}
{{ include "cluster.bootstrap" . | nindent 2 }}

View File

@@ -0,0 +1,18 @@
{{- if not (empty .Values.cluster.monitoring.customQueries) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "cluster.name" $ }}-monitoring
namespace: {{ include "cluster.namespace" $ }}
labels:
cnpg.io/reload: ""
{{- include "cluster.labels" $ | nindent 4 }}
data:
custom-queries: |
{{- range .Values.cluster.monitoring.customQueries }}
{{ .name }}:
query: {{ .query | quote }}
metrics:
{{- .metrics | toYaml | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,93 @@
{{ if and (.Values.backup.enabled) (eq .Values.backup.method "objectStore") }}
{{ $context := . -}}
{{ range .Values.backup.objectStore -}}
---
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
name: "{{ include "cluster.name" $context }}-{{ .name }}-backup"
namespace: {{ include "cluster.namespace" $context }}
labels:
{{- include "cluster.labels" $context | nindent 4 }}
spec:
retentionPolicy: {{ .retentionPolicy | default "30d" }}
configuration:
destinationPath: {{ .destinationPath | required "Destination path is required" }}
endpointURL: {{ .endpointURL | default "https://nyc3.digitaloceanspaces.com" }}
{{- if .endpointCA }}
endpointCA:
name: {{ .endpointCA.name }}
key: {{ .endpointCA.key }}
{{- end }}
{{- if .wal }}
wal:
compression: {{ .wal.compression | default "snappy" }}
{{ with .wal.encryption }}
encryption: {{ . }}
{{ end }}
maxParallel: {{ .wal.maxParallel | default "1" }}
{{- end }}
{{- if .data }}
data:
compression: {{ .data.compression | default "snappy" }}
{{- with .data.encryption }}
encryption: {{ . }}
{{- end }}
jobs: {{ .data.jobs | default 1 }}
{{- end }}
s3Credentials:
accessKeyId:
{{- if .endpointCredentials }}
name: {{ .endpointCredentials }}
{{- else }}
name: {{ include "cluster.backupCredentials" $context }}
{{- end }}
key: ACCESS_KEY_ID
secretAccessKey:
{{- if .endpointCredentials }}
name: {{ .endpointCredentials }}
{{- else }}
name: {{ include "cluster.backupCredentials" $context }}
{{- end }}
key: ACCESS_SECRET_KEY
{{ end -}}
{{ end }}
{{ if eq .Values.recovery.method "objectStore" }}
---
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
name: "{{ include "cluster.name" . }}-{{ .Values.recovery.objectStore.name }}"
namespace: {{ include "cluster.namespace" . }}
labels:
{{- include "cluster.labels" . | nindent 4 }}
spec:
configuration:
destinationPath: {{ .Values.recovery.objectStore.destinationPath }}
endpointURL: {{ .Values.recovery.objectStore.endpointURL }}
{{- if .Values.recovery.objectStore.endpointCA.name }}
endpointCA:
name: {{ .Values.recovery.objectStore.endpointCA.name }}
key: {{ .Values.recovery.objectStore.endpointCA.key }}
{{- end }}
wal:
compression: {{ .Values.recovery.objectStore.wal.compression }}
{{- with .Values.recovery.objectStore.wal.encryption}}
encryption: {{ . }}
{{- end }}
maxParallel: {{ .Values.recovery.objectStore.wal.maxParallel }}
data:
compression: {{ .Values.recovery.objectStore.data.compression }}
{{- with .Values.recovery.objectStore.data.encryption }}
encryption: {{ . }}
{{- end }}
jobs: {{ .Values.recovery.objectStore.data.jobs }}
s3Credentials:
accessKeyId:
name: {{ include "cluster.recoveryCredentials" . }}
key: ACCESS_KEY_ID
secretAccessKey:
name: {{ include "cluster.recoveryCredentials" . }}
key: ACCESS_SECRET_KEY
{{ end }}

View File

@@ -0,0 +1,51 @@
{{- range .Values.poolers }}
---
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
name: {{ include "cluster.name" $ }}-pooler-{{ .name }}
namespace: {{ include "cluster.namespace" $ }}
labels:
{{- include "cluster.labels" $ | nindent 4 }}
spec:
cluster:
name: {{ include "cluster.name" $ }}
instances: {{ .instances }}
type: {{ default "rw" .type }}
pgbouncer:
poolMode: {{ default "session" .poolMode }}
{{- with .authQuerySecret }}
authQuerySecret:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .authQuery }}
authQuery:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .parameters }}
parameters:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .pg_hba }}
pg_hba:
{{- toYaml . | nindent 6 }}
{{- end }}
{{ with .monitoring }}
monitoring:
{{- if not (empty .podMonitor) }}
enablePodMonitor: {{ and .enabled .podMonitor.enabled }}
{{- with .podMonitor.relabelings }}
podMonitorRelabelings:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- with .podMonitor.metricRelabelings }}
podMonitorMetricRelabelings:
{{- toYaml . | nindent 6 }}
{{ end }}
{{- end }}
{{- end }}
{{- with .template }}
template:
{{- . | toYaml | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,27 @@
{{- if and .Values.cluster.monitoring.enabled .Values.cluster.monitoring.prometheusRule.enabled -}}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ include "cluster.name" $ }}-alert-rules
namespace: {{ include "cluster.namespace" $ }}
labels:
{{- include "cluster.labels" $ | nindent 4 }}
spec:
groups:
- name: cloudnative-pg/{{ include "cluster.name" . }}
rules:
{{- $dict := dict "excludeRules" .Values.cluster.monitoring.prometheusRule.excludeRules -}}
{{- $_ := set $dict "value" "{{`{{`}} $value {{`}}`}}" -}}
{{- $_ := set $dict "namespace" .Release.Namespace -}}
{{- $_ := set $dict "cluster" (printf "%s-cluster" (include "cluster.name" .) ) -}}
{{- $_ := set $dict "labels" (dict "job" "{{`{{`}} $labels.job {{`}}`}}" "node" "{{`{{`}} $labels.node {{`}}`}}" "pod" "{{`{{`}} $labels.pod {{`}}`}}") -}}
{{- $_ := set $dict "podSelector" (printf "%s-cluster-([1-9][0-9]*)$" (include "cluster.name" .) ) -}}
{{- $_ := set $dict "Values" .Values -}}
{{- $_ := set $dict "Template" .Template -}}
{{- range $path, $_ := .Files.Glob "prometheus_rules/**.yaml" }}
{{- $tpl := tpl ($.Files.Get $path) $dict | nindent 10 | trim -}}
{{- with $tpl }}
- {{ $tpl }}
{{- end -}}
{{- end }}
{{ end }}
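Since every rule template above is wrapped in a `has $alert .excludeRules` check, individual alerts can be switched off by name from the values file. A small sketch, using alert names taken from the templates in this change:

```yaml
cluster:
  monitoring:
    enabled: true
    prometheusRule:
      enabled: true
      excludeRules:
        # e.g. alerts that are expected to fire on a single-node lab cluster
        - CNPGClusterInstancesOnSameNode
        - CNPGClusterZoneSpreadWarning
```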

View File

@@ -0,0 +1,25 @@
{{ if .Values.backup.enabled }}
{{ $context := . -}}
{{ range .Values.backup.scheduledBackups -}}
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
name: "{{ include "cluster.name" $context }}-{{ .name }}-scheduled-backup"
namespace: {{ include "cluster.namespace" $context }}
labels:
{{- include "cluster.labels" $context | nindent 4 }}
spec:
immediate: {{ .immediate | default true }}
suspend: {{ .suspend | default false }}
schedule: {{ .schedule | quote | required "Schedule is required" }}
backupOwnerReference: {{ .backupOwnerReference | default "self" }}
cluster:
name: {{ include "cluster.name" $context }}-cluster
method: plugin
pluginConfiguration:
name: {{ .plugin | default "barman-cloud.cloudnative-pg.io" }}
parameters:
barmanObjectName: "{{ include "cluster.name" $context }}-{{ .backupName }}-backup"
{{ end -}}
{{ end }}

View File

@@ -0,0 +1,558 @@
# -- Override the name of the cluster
nameOverride: ""
# -- Override the namespace of the chart
namespaceOverride: ""
# -- Type of the CNPG database. Available types:
# * `postgresql`
# * `tensorchord`
type: postgresql
# -- Cluster mode of operation. Available modes:
# * `standalone` - Default mode. Creates new or updates an existing CNPG cluster.
# * `recovery` - Same as standalone but creates a cluster from a backup, object store or via pg_basebackup
mode: standalone
# -- Cluster settings
cluster:
instances: 3
# -- Default image
image:
repository: ghcr.io/cloudnative-pg/postgresql
tag: "17.5-1-bullseye"
# -- Image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent. Cannot be updated.
# More info: https://kubernetes.io/docs/concepts/containers/images#updating-images
imagePullPolicy: IfNotPresent
# -- The list of pull secrets to be used to pull the images.
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-LocalObjectReference
imagePullSecrets: []
# -- Default storage size
storage:
size: 10Gi
storageClass: ""
walStorage:
enabled: true
size: 2Gi
storageClass: ""
# -- The UID and GID of the postgres user inside the image, defaults to 26
postgresUID: -1
postgresGID: -1
# -- Customization of service definitions. Please refer to https://cloudnative-pg.io/documentation/current/service_management/
services: {}
# -- Resources requirements of every generated Pod.
# Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information.
# We strongly advise you use the same setting for limits and requests so that your cluster pods are given a Guaranteed QoS.
# See: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
hugepages-2Mi: 256Mi
priorityClassName: ""
# -- Method to follow to upgrade the primary server during a rolling update procedure, after all replicas have been
# successfully updated. It can be switchover (default) or restart.
primaryUpdateMethod: switchover
# -- Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been
# successfully updated: it can be automated (unsupervised - default) or manual (supervised)
primaryUpdateStrategy: unsupervised
# -- The instances' log level, one of the following values: error, warning, info (default), debug, trace
logLevel: "info"
# -- Affinity/Anti-affinity rules for Pods.
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-AffinityConfiguration
affinity:
enablePodAntiAffinity: true
topologyKey: kubernetes.io/hostname
# -- The configuration for the CA and related certificates.
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-CertificatesConfiguration
certificates: {}
# -- When this option is enabled, the operator will use the SuperuserSecret to update the postgres user password.
# If the secret is not present, the operator will automatically create one.
# When this option is disabled, the operator will ignore the SuperuserSecret content, delete it when automatically created,
# and then blank the password of the postgres user by setting it to NULL.
enableSuperuserAccess: false
superuserSecret: ""
# -- Allows disabling the PodDisruptionBudget, mainly useful for upgrades of single-instance clusters or for development purposes
# See: https://cloudnative-pg.io/documentation/current/kubernetes_upgrade/#pod-disruption-budgets
enablePDB: true
# -- This feature enables declarative management of existing roles, as well as the creation of new roles if they are not
# already present in the database.
# See: https://cloudnative-pg.io/documentation/current/declarative_role_management/
roles: []
# - name: dante
# ensure: present
# comment: Dante Alighieri
# login: true
# superuser: false
# inRoles:
# - pg_monitor
# - pg_signal_backend
# -- Enable default monitoring and alert rules
monitoring:
# -- Whether to enable monitoring
enabled: false
podMonitor:
# -- Whether to enable the PodMonitor
enabled: true
# -- The list of relabelings for the PodMonitor.
# Applied to samples before scraping.
relabelings: []
# -- The list of metric relabelings for the PodMonitor.
# Applied to samples before ingestion.
metricRelabelings: []
prometheusRule:
# -- Whether to enable the PrometheusRule automated alerts
enabled: false
# -- Exclude specified rules
excludeRules: []
# -- Whether to disable the default queries.
# Set it to true if you don't want to inject default queries into the cluster.
disableDefaultQueries: false
# -- Custom Prometheus metrics
# Will be stored in the ConfigMap
customQueries: []
# - name: "pg_cache_hit_ratio"
# query: "SELECT current_database() as datname, sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio FROM pg_statio_user_tables;"
# metrics:
# - datname:
# usage: "LABEL"
# description: "Name of the database"
# - ratio:
# usage: GAUGE
# description: "Cache hit ratio"
# -- The list of secrets containing the custom queries
customQueriesSecret: []
# - name: custom-queries-secret
# key: custom-queries
# -- Parameters to be set for the database itself
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-PostgresConfiguration
postgresql:
# -- PostgreSQL configuration options (postgresql.conf)
parameters:
shared_buffers: 128MB
max_slot_wal_keep_size: 2000MB
hot_standby_feedback: "on"
# -- Quorum-based Synchronous Replication
synchronous: {}
# method: any
# number: 1
# -- PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file)
pg_hba: []
# - host all all 10.244.0.0/16 md5
# -- PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file)
pg_ident: []
# - mymap /^(.*)@mydomain\.com$ \1
# -- Lists of shared preload libraries to add to the default ones
shared_preload_libraries: []
# - pgaudit
# -- PostgreSQL LDAP configuration (see https://cloudnative-pg.io/documentation/current/postgresql_conf/#ldap-configuration)
ldap: {}
# https://cloudnative-pg.io/documentation/current/postgresql_conf/#ldap-configuration
# server: 'openldap.default.svc.cluster.local'
# bindSearchAuth:
# baseDN: 'ou=org,dc=example,dc=com'
# bindDN: 'cn=admin,dc=example,dc=com'
# bindPassword:
# name: 'ldapBindPassword'
# key: 'data'
# searchAttribute: 'uid'
# -- Bootstrap is the configuration of the bootstrap process when initdb is used.
# See: https://cloudnative-pg.io/documentation/current/bootstrap/
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-bootstrapinitdb
initdb: {}
# database: app
# owner: "" # Defaults to the database name
# secret:
# name: "" # Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch
# options: []
# encoding: UTF8
# postInitSQL:
# - CREATE EXTENSION IF NOT EXISTS vector;
# postInitApplicationSQL: []
# postInitTemplateSQL: []
# -- Configure the metadata of the generated service account
serviceAccountTemplate: {}
additionalLabels: {}
annotations: {}
# -- Recovery settings when booting cluster from external cluster
recovery:
# -- Available recovery methods:
# * `backup` - Recovers a CNPG cluster from a CNPG backup (PITR supported). The backup needs to be on the same Kubernetes cluster and in the same namespace.
# * `objectStore` - Recovers a CNPG cluster from a barman object store (PITR supported).
# * `pgBaseBackup` - Recovers a CNPG cluster via the streaming replication protocol. Useful if you want to
# migrate databases to CloudNativePG, even from outside Kubernetes.
# * `import` - Import one or more databases from an existing Postgres cluster.
method: backup
# See https://cloudnative-pg.io/documentation/current/recovery/#recovery-from-a-backup-object
backup:
# -- Point in time recovery target. Specify one of the following:
pitrTarget:
# -- Time in RFC3339 format
time: ""
# -- Name of the database used by the application. Default: `app`.
database: app
# -- Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key.
owner: ""
# -- Name of the backup to recover from.
backupName: ""
# See https://cloudnative-pg.io/documentation/current/recovery/#recovery-from-an-object-store
objectStore:
# -- Point in time recovery target. Specify one of the following:
pitrTarget:
# -- Time in RFC3339 format
time: ""
# -- Name of the database used by the application. Default: `app`.
database: app
# -- Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key.
owner: ""
# -- Object store backup name
name: recovery
# -- Overrides the provider specific default path. Defaults to:
# S3: s3://<bucket><path>
# Azure: https://<storageAccount>.<serviceName>.core.windows.net/<containerName><path>
# Google: gs://<bucket><path>
destinationPath: ""
# -- Overrides the provider specific default endpoint. Defaults to:
# S3: https://s3.<region>.amazonaws.com
# Leave empty if using the default S3 endpoint
endpointURL: "https://nyc3.digitaloceanspaces.com"
# -- Specifies a CA bundle to validate a privately signed certificate.
endpointCA:
# -- Creates a secret with the given value if true, otherwise uses an existing secret.
create: false
name: ""
key: ""
# -- Generate external cluster name, uses: {{ .Release.Name }}-postgresql-<major version>-backup-index-{{ index }}
index: 1
# -- Override the name of the backup cluster, defaults to "cluster.name"
clusterName: ""
# -- Specifies secret that contains S3 credentials, should contain the keys ACCESS_KEY_ID and ACCESS_SECRET_KEY
endpointCredentials: ""
# -- WAL file settings (compression, encryption, parallelism)
wal:
# -- WAL compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`.
compression: snappy
# -- Whether to instruct the storage provider to encrypt WAL files. One of `` (use the storage container default), `AES256` or `aws:kms`.
encryption: ""
# -- Number of WAL files to be archived or restored in parallel.
maxParallel: 1
data:
# -- Data compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`.
compression: snappy
# -- Whether to instruct the storage provider to encrypt data files. One of `` (use the storage container default), `AES256` or `aws:kms`.
encryption: ""
# -- Number of data files to be archived or restored in parallel.
jobs: 1
# See https://cloudnative-pg.io/documentation/current/bootstrap/#bootstrap-from-a-live-cluster-pg_basebackup
pgBaseBackup:
# -- Name of the database used by the application. Default: `app`.
database: app
# -- Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch
secret: ""
# -- Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key.
owner: ""
# -- Configuration for the source database
source:
host: ""
port: 5432
username: ""
database: "app"
sslMode: "verify-full"
passwordSecret:
# -- Whether to create a secret for the password
create: false
# -- Name of the secret containing the password
name: ""
# -- The key in the secret containing the password
key: "password"
# -- The password value to use when creating the secret
value: ""
sslKeySecret:
name: ""
key: ""
sslCertSecret:
name: ""
key: ""
sslRootCertSecret:
name: ""
key: ""
# See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-Import
import:
# -- One of `microservice` or `monolith`.
# See: https://cloudnative-pg.io/documentation/current/database_import/#how-it-works
type: "microservice"
# -- Databases to import
databases: []
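# e.g. a single database for a microservice-type import (illustrative):
# - app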
# -- Roles to import
roles: []
# -- List of SQL queries to be executed as a superuser in the application database right after it is imported.
# To be used with extreme care. Only available in microservice type.
postImportApplicationSQL: []
# -- When set to true, only the pre-data and post-data sections of pg_restore are invoked, avoiding data import.
schemaOnly: false
# -- List of custom options to pass to the `pg_dump` command. IMPORTANT: Use these options with caution and at your
# own risk, as the operator does not validate their content. Be aware that certain options may conflict with the
# operator's intended functionality or design.
pgDumpExtraOptions: []
# -- List of custom options to pass to the `pg_restore` command. IMPORTANT: Use these options with caution and at
# your own risk, as the operator does not validate their content. Be aware that certain options may conflict with the
# operator's intended functionality or design.
pgRestoreExtraOptions: []
# -- Configuration for the source database
source:
host: ""
port: 5432
username: app
database: app
sslMode: "verify-full"
passwordSecret:
# -- Whether to create a secret for the password
create: false
# -- Name of the secret containing the password
name: ""
# -- The key in the secret containing the password
key: "password"
# -- The password value to use when creating the secret
value: ""
sslKeySecret:
name: ""
key: ""
sslCertSecret:
name: ""
key: ""
sslRootCertSecret:
name: ""
key: ""
# -- Backup settings
backup:
# -- You need to configure backups manually, so backups are disabled by default.
enabled: false
# -- Method to create backups; currently the only supported option is `objectStore`
method: objectStore
# -- Options for object store backups
objectStore: []
# -
# # -- Object store backup name
# name: external
# # -- Overrides the provider specific default path. Defaults to:
# # S3: s3://<bucket><path>
# # Azure: https://<storageAccount>.<serviceName>.core.windows.net/<containerName><path>
# # Google: gs://<bucket><path>
# destinationPath: ""
# # -- Overrides the provider specific default endpoint. Defaults to:
# # https://nyc3.digitaloceanspaces.com
# endpointURL: ""
# # -- Specifies a CA bundle to validate a privately signed certificate.
# endpointCA:
# # -- Creates a secret with the given value if true, otherwise uses an existing secret.
# create: false
# name: ""
# key: ""
# # -- Generate external cluster name, uses: {{ .Release.Name }}-postgresql-<major version>-backup-index-{{ index }}
# index: 1
# # -- Override the name of the backup cluster, defaults to "cluster.name"
# clusterName: ""
# # -- Specifies secret that contains S3 credentials, should contain the keys ACCESS_KEY_ID and ACCESS_SECRET_KEY
# endpointCredentials: ""
# # -- Retention policy for backups
# retentionPolicy: "30d"
# # -- Specifies whether this backup will archive WALs
# isWALArchiver: true
# # -- Storage
# wal:
# # -- WAL compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`.
# compression: snappy
# # -- Whether to instruct the storage provider to encrypt WAL files. One of `` (use the storage container default), `AES256` or `aws:kms`.
# encryption: ""
# # -- Number of WAL files to be archived or restored in parallel.
# maxParallel: 1
# data:
# # -- Data compression method. One of `` (for no compression), `gzip`, `bzip2` or `snappy`.
# compression: snappy
# # -- Whether to instruct the storage provider to encrypt data files. One of `` (use the storage container default), `AES256` or `aws:kms`.
# encryption: ""
# # -- Number of data files to be archived or restored in parallel.
# jobs: 1
# -- List of scheduled backups
scheduledBackups: []
# -
# # -- Scheduled backup name
# name: daily-backup
# # -- Schedule in cron format
# schedule: "0 0 0 * * *"
# # -- Start backup on deployment
# immediate: false
# # -- Temporarily stop scheduled backups from running
# suspend: false
# # -- Backup owner reference
# backupOwnerReference: self
# # -- Backup method, can be `barman-cloud.cloudnative-pg.io` (default)
# plugin: barman-cloud.cloudnative-pg.io
# # -- Name of backup target
# backupName: external
# -- List of PgBouncer poolers
poolers: []
# -
# # -- Pooler name
# name: rw
# # -- PgBouncer type of service to forward traffic to.
# type: rw
# # -- PgBouncer pooling mode
# poolMode: transaction
# # -- Number of PgBouncer instances
# instances: 3
# # -- PgBouncer configuration parameters
# parameters:
# max_client_conn: "1000"
# default_pool_size: "25"
# monitoring:
# # -- Whether to enable monitoring
# enabled: false
# podMonitor:
# # -- Whether to enable the PodMonitor
# enabled: true
# # -- Custom PgBouncer deployment template.
# # Use to override image, specify resources, etc.
# template: {}
# -
# # -- Pooler name
# name: ro
# # -- PgBouncer type of service to forward traffic to.
# type: ro
# # -- PgBouncer pooling mode
# poolMode: transaction
# # -- Number of PgBouncer instances
# instances: 3
# # -- PgBouncer configuration parameters
# parameters:
# max_client_conn: "1000"
# default_pool_size: "25"
# monitoring:
# # -- Whether to enable monitoring
# enabled: false
# podMonitor:
# # -- Whether to enable the PodMonitor
# enabled: true
# # -- Custom PgBouncer deployment template.
# # Use to override image, specify resources, etc.
# template: {}

View File

@@ -1,14 +1,70 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base",
"mergeConfidence:all-badges"
"config:recommended",
"mergeConfidence:all-badges",
":rebaseStalePrs"
],
"timezone": "MST7MDT",
"schedule": "before 8am every weekday",
"ignoreTests": true,
"lockFileMaintenance": {
"enabled": true,
"automerge": true
}
"timezone": "US/Central",
"schedule": [ "* */1 * * *" ],
"labels": [],
"prHourlyLimit": 0,
"prConcurrentLimit": 0,
"packageRules": [
{
"description": "Label charts",
"matchDatasources": [
"helm"
],
"addLabels": [
"chart"
],
"automerge": false,
"bumpVersions": [
{
"filePatterns": ["{{packageFileDir}}/Chart.{yaml,yml}"],
"matchStrings": ["version:\\s(?<version>[^\\s]+)"],
"bumpType": "{{#if isPatch}}patch{{else}}minor{{/if}}"
}
]
},
{
"description": "Label images",
"matchDatasources": [
"docker"
],
"addLabels": [
"image"
],
"automerge": false,
"bumpVersions": [
{
"filePatterns": ["{{packageFileDir}}/Chart.{yaml,yml}"],
"matchStrings": ["version:\\s(?<version>[^\\s]+)"],
"bumpType": "{{#if isPatch}}patch{{else}}minor{{/if}}"
}
]
},
{
"description": "CNPG image",
"matchDepNames": [
"ghcr.io/cloudnative-pg/postgresql"
],
"matchDatasources": [
"docker"
],
"addLabels": [
"image"
],
"automerge": false,
"versioning": "deb",
"bumpVersions": [
{
"filePatterns": ["{{packageFileDir}}/Chart.{yaml,yml}"],
"matchStrings": ["version:\\s(?<version>[^\\s]+)"],
"bumpType": "{{#if isPatch}}patch{{else}}minor{{/if}}"
}
]
}
]
}