---
title: Talos Upgrade
description: Steps followed for the v1.12.0 upgrade process
---

The upgrade to this version was more extensive because the machine configuration has migrated to configuration documents. This required rewriting the configuration document as a series of patches and providing a deterministic generation command for the different host types. In addition, the storage layout changed to separate Ceph, local-path, and ephemeral storage on the NUC hosts.

## Preparation

The NUC hosts have to be wiped because of the storage reconfiguration, so both the node and its disks have to be removed from the cluster. For the RPis only the first command, with the proper image, was needed; the new configuration format could be applied later.

The following command upgrades the installer image. This was done first so that the node would still boot v1.12.0 after the wipe and could use the updated configuration documents.
```bash
talosctl upgrade --nodes 10.232.1.23 --image factory.talos.dev/metal-installer/495176274ce8f9e87ed052dbc285c67b2a0ed7c5a6212f5c4d086e1a9a1cf614:v1.12.0
```
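
For the RPi hosts the same command is used with their own Image Factory installer image; the schematic ID here is a placeholder, not the real one:
```bash
talosctl upgrade --nodes <rpi-node-ip> --image factory.talos.dev/metal-installer/<rpi-schematic-id>:v1.12.0
```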

Then wipe the node:
```bash
talosctl reset --system-labels-to-wipe EPHEMERAL,STATE --reboot -n 10.232.1.23
```
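
After the reboot the node comes back up in maintenance mode. This can be confirmed with an insecure query against the node, for example:
```bash
talosctl get disks -n 10.232.1.23 --insecure
```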

## Remove old references

Remove the node from the cluster.
```bash
kubectl delete node talos-9vs-6hh
```
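
The node should no longer appear in the node list:
```bash
kubectl get nodes
```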

Exec into the rook-ceph-tools container in order to remove the disk from the cluster.
```bash
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[*].metadata.name}') -- bash
```

Inside the rook-ceph-tools container remove the OSD/disk and the node. First list the OSD tree to find the ID of the OSD on the old host (here `0`), then take it out, purge it, and remove the host bucket from the CRUSH map:
```bash
ceph osd tree
```
```bash
ceph osd out 0
```
```bash
ceph osd purge 0 --yes-i-really-mean-it
```
```bash
ceph osd crush remove talos-9vs-6hh
```
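
A quick check from the same shell confirms the removal; the purged OSD and the old host should no longer appear:
```bash
# the tree should no longer show osd.0 or talos-9vs-6hh
ceph osd tree
ceph status
```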

## Apply new configuration

The wiped node should now be in maintenance mode and ready to be configured. Use the generate command in the README of the talos-config repo to produce the configuration to supply.
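
The exact invocation lives in that README; a minimal sketch of the generation step, where the cluster name, endpoint, and patch file names are assumptions:
```bash
# hypothetical names; the real patch files are defined in the talos-config repo
talosctl gen config home-cluster https://10.232.1.10:6443 \
  --config-patch @patches/common.yaml \
  --config-patch-worker @patches/worker-nuc.yaml \
  --output-types worker \
  --output generated/worker-nuc.yaml
```

Then apply the generated file to the node: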
```bash
talosctl apply-config -f generated/worker-nuc.yaml -n 10.232.1.23 --insecure
```

Add the required labels if Talos does not add them:
```yaml
node-role.kubernetes.io/bgp: '65020'
node-role.kubernetes.io/local-storage-node: local-storage-node
node-role.kubernetes.io/rook-osd-node: rook-osd-node
```
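
If they are missing they can be added by hand; a sketch, with the new node name as a placeholder:
```bash
# <new-node-name> is whatever name the node registered with after the wipe
kubectl label node <new-node-name> \
  node-role.kubernetes.io/bgp=65020 \
  node-role.kubernetes.io/local-storage-node=local-storage-node \
  node-role.kubernetes.io/rook-osd-node=rook-osd-node
```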

## Verification

Verify the disks have been created:
```bash
talosctl get disks -n 10.232.1.23
```

Verify the mount paths and volumes are created:
```bash
talosctl -n 10.232.1.23 ls /var/mnt
```
```bash
talosctl -n 10.232.1.23 get volumestatuses
```
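
Once the node has rejoined the cluster, Rook should provision a new OSD on the freshly wiped Ceph disk; the OSD pods (and another `ceph osd tree` from the toolbox) confirm it:
```bash
kubectl -n rook-ceph get pods -l app=rook-ceph-osd
```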