Joseph D. Marhee

High Availability RKE2 with Kube VIP

RKE2 can be deployed in a multi-control-plane, high-availability configuration, and we can use kube-vip to provide a virtual IP for the cluster, keeping the API endpoint reachable even if the cluster loses a control plane node.

Let's start by defining a few variables ahead of time. We'll need the addresses of your three RKE2 nodes; for example, mine were:

NODE_ADDRESS_0=192.168.0.197
NODE_ADDRESS_1=192.168.0.198
NODE_ADDRESS_2=192.168.0.199

and then the address you intend to use for your virtual IP, plus the name of the interface it will be advertised on (this should be consistent across the nodes, but it matters most on the first control plane node, where we will install the DaemonSet):

INTERFACE=enp1s0 # confirm this in the output of `ip addr`
KUBE_VIP_ADDRESS=192.168.0.200
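If you're unsure of the interface name, you can list each interface in brief form and pick the one carrying your node's address:

# list each interface with its state and addresses
ip -br addr show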

In my case, my nodes are assigned addresses via DHCP, so I simply selected the next address that would be provisioned (kube-vip relies on ARP to advertise the leader's address, and at installation time the leader will be the first node in your cluster). We also need a few other options: the RKE2 cluster token, the hostname for your VIP, and the kube-vip version:

CLUSTER_TOKEN=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 13; echo)
KUBE_VIP_VERSION=v0.8.0
HOSTNAME=rke2.somequant.club
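Since the VIP must not collide with an address already in use, a quick sanity check before committing to it (a minimal sketch; a lack of replies is not an absolute guarantee the address is free):

# if the candidate VIP answers pings, something already owns it -- pick another
ping -c 2 -W 1 $KUBE_VIP_ADDRESS && echo "in use, choose another address" || echo "appears free"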

We need to do some preparation on each of the hosts.

First, create a few directories RKE2 will need:

for i in $NODE_ADDRESS_0 $NODE_ADDRESS_1 $NODE_ADDRESS_2; do ssh $USER@$i "sudo mkdir -p /etc/rancher/rke2; sudo mkdir -p /var/lib/rancher/rke2/server/manifests/"; done

If your HOSTNAME value is not an FQDN or otherwise resolvable via DNS, you might want to run another loop to add an /etc/hosts entry as well:

for i in $NODE_ADDRESS_0 $NODE_ADDRESS_1 $NODE_ADDRESS_2; do ssh $USER@$i "echo '$KUBE_VIP_ADDRESS $HOSTNAME' | sudo tee -a /etc/hosts"; done
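You can then verify the name resolves on each node:

# getent consults /etc/hosts as well as DNS
for i in $NODE_ADDRESS_0 $NODE_ADDRESS_1 $NODE_ADDRESS_2; do ssh $USER@$i "getent hosts $HOSTNAME"; done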

Then, prepare the RKE2 config file for the first control plane node:

cat <<EOF | tee rke2-prime-config.yaml
token: ${CLUSTER_TOKEN}
tls-san:
- ${HOSTNAME}
- ${KUBE_VIP_ADDRESS}
- ${NODE_ADDRESS_0}
- ${NODE_ADDRESS_1}
- ${NODE_ADDRESS_2}
write-kubeconfig-mode: 600
etcd-expose-metrics: true
cni:
- cilium
disable:
- rke2-ingress-nginx
- rke2-snapshot-validation-webhook
- rke2-snapshot-controller
- rke2-snapshot-controller-crd
EOF

Then create (and set aside for now) the config for the other two control plane nodes:

cat <<EOF | tee rke2-peer-config.yaml
server: https://${KUBE_VIP_ADDRESS}:9345
token: ${CLUSTER_TOKEN}
tls-san:
- ${HOSTNAME}
- ${KUBE_VIP_ADDRESS}
- ${NODE_ADDRESS_0}
- ${NODE_ADDRESS_1}
- ${NODE_ADDRESS_2}
write-kubeconfig-mode: 600
etcd-expose-metrics: true
cni:
- cilium
disable:
- rke2-ingress-nginx
- rke2-snapshot-validation-webhook
- rke2-snapshot-controller
- rke2-snapshot-controller-crd
EOF
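The only difference from the first node's config is the server: line pointing at the VIP's supervisor port; you can confirm the two files differ only there:

# should report only the added server: line
diff rke2-prime-config.yaml rke2-peer-config.yaml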

Transfer rke2-prime-config.yaml to NODE_ADDRESS_0. Since /etc/rancher/rke2 is root-owned, copy it to the remote home directory first, then move it into place:

scp rke2-prime-config.yaml $USER@$NODE_ADDRESS_0:~/
ssh $USER@$NODE_ADDRESS_0 "sudo mv ~/rke2-prime-config.yaml /etc/rancher/rke2/config.yaml"

Now, log in to NODE_ADDRESS_0, where we will install and enable RKE2:

curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -

systemctl enable rke2-server.service
systemctl start rke2-server.service
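Bootstrapping can take a few minutes; if you want to watch its progress, you can follow the service logs:

# follow the RKE2 server logs during bootstrap (Ctrl-C to stop)
journalctl -u rke2-server.service -f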

Once this step completes, wait until the node reports a Ready status in the output of:

export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes -w

You are now ready to install kube-vip. On this host, set the INTERFACE, KUBE_VIP_ADDRESS, and KUBE_VIP_VERSION variables from above.
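For example, using my values from earlier:

export INTERFACE=enp1s0
export KUBE_VIP_ADDRESS=192.168.0.200
export KUBE_VIP_VERSION=v0.8.0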

Now, we can proceed to create our kube-vip manifests:

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml

# note: on RKE2 hosts, containerd's socket is at /run/k3s/containerd/containerd.sock;
# if ctr cannot connect, pass --address /run/k3s/containerd/containerd.sock to both ctr commands
alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KUBE_VIP_VERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KUBE_VIP_VERSION vip /kube-vip"

kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $KUBE_VIP_ADDRESS \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection

The latter command outputs a YAML manifest but does not apply it; you can either redirect the output to a file (e.g. | tee kube-vip.yaml, then kubectl apply -f as you normally would) or, as RKE2 allows, write it to /var/lib/rancher/rke2/server/manifests/kube-vip.yaml, where it will be applied automatically.
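For example, to have RKE2 apply it automatically:

kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $KUBE_VIP_ADDRESS \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml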

Once applied, you can confirm the status of the kube-vip DaemonSet:

kubectl get ds -n kube-system kube-vip-ds

root@fed7534656e3:~# kubectl get ds -n kube-system kube-vip-ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-vip-ds   1         1         1       1            1           <none>          10s
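You can also check the pod's logs to confirm it has won leader election and is advertising the VIP (exact log wording may vary by version):

# tail the kube-vip DaemonSet logs
kubectl -n kube-system logs ds/kube-vip-ds --tail=20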

At this point, you should be able to ping both your VIP and the hostname (assuming you added the hosts entry or it resolves via DNS; that step was optional):

root@fed7534656e3:~# ping rke2.somequant.club
PING rke2.somequant.club (192.168.0.200) 56(84) bytes of data.
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=3 ttl=64 time=0.083 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=4 ttl=64 time=0.055 ms

Now we can proceed to join our other two control plane nodes.

Copy your rke2-peer-config.yaml to NODE_ADDRESS_1 and NODE_ADDRESS_2 (again staging through the home directory, since /etc/rancher/rke2 is root-owned):

for i in $NODE_ADDRESS_1 $NODE_ADDRESS_2; do scp rke2-peer-config.yaml $USER@$i:~/ && ssh $USER@$i "sudo mv ~/rke2-peer-config.yaml /etc/rancher/rke2/config.yaml"; done

Then proceed to install and enable RKE2 as you did on the first node:

for i in $NODE_ADDRESS_1 $NODE_ADDRESS_2; do ssh $USER@$i "curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE=server sh -"; done

for i in $NODE_ADDRESS_1 $NODE_ADDRESS_2; do ssh $USER@$i "sudo systemctl enable rke2-server.service; sudo systemctl start rke2-server.service"; done

At this point, you can log back into the first node and monitor the progress of the newly joining nodes. They join through the VIP rather than the first node's address, so even if the first node goes down, the remaining control plane nodes (or new ones) can still join the cluster, and worker nodes stay in contact with the API:

export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes -w

kubectl get nodes -o wide
NAME           STATUS   ROLES                       AGE     VERSION           INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
6933af500526   Ready    control-plane,etcd,master   79s     v1.27.12+rke2r1   192.168.0.198   <none>        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
8986366dd7e8   Ready    control-plane,etcd,master   34s     v1.27.12+rke2r1   192.168.0.199   <none>        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
fed7534656e3   Ready    control-plane,etcd,master   8m51s   v1.27.12+rke2r1   192.168.0.197   <none>        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
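To benefit from the VIP on the client side as well, you can point a local kubeconfig at the VIP instead of the first node's address (a sketch; RKE2's kubeconfig defaults to https://127.0.0.1:6443, and the file is root-owned, so copy it as root or adjust permissions first):

# fetch the kubeconfig from the first node and rewrite its server address to the VIP
scp root@$NODE_ADDRESS_0:/etc/rancher/rke2/rke2.yaml ./rke2.yaml
sed -i "s|https://127.0.0.1:6443|https://$KUBE_VIP_ADDRESS:6443|" rke2.yaml
KUBECONFIG=$PWD/rke2.yaml kubectl get nodes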

When you log in to the node that won the most recent leader election, you will also see the KUBE_VIP_ADDRESS bound to the INTERFACE:

ubuntu@fed7534656e3:~$ ip addr show enp1s0 | grep inet
    inet 192.168.0.197/24 metric 100 brd 192.168.0.255 scope global dynamic enp1s0
    inet 192.168.0.200/32 scope global enp1s0
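As a quick failover test (a sketch, assuming you can tolerate briefly stopping one control plane node): stop RKE2 on the current leader and confirm the VIP moves to another node:

# on the current leader: simulate a failure
sudo systemctl stop rke2-server.service

# from another machine: the VIP should keep answering once a new leader is elected
ping -c 5 $KUBE_VIP_ADDRESS

# on a surviving node: confirm the VIP is now bound to its interface
ip addr show $INTERFACE | grep $KUBE_VIP_ADDRESS

# then bring the stopped node back
sudo systemctl start rke2-server.service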

More on kube-vip and its features can be found at https://kube-vip.io.
