The Ops Community

Joseph D. Marhee

Bootstrapping Tailscale-enabled RKE Nodes

I often use Rancher Kubernetes Engine (RKE) to provision Kubernetes clusters, and it uses a very simple manifest format that can be as straightforward as:

kubernetes_version: "v1.21.7-rancher1-1"

nodes:
  - address: 1.2.3.4
    internal_address: 10.2.3.4
    user: root
    role: [controlplane,worker,etcd]
  - address: 1.2.3.5
    internal_address: 10.2.3.5
    user: root
    role: [controlplane,worker,etcd]
  - address: 1.2.3.6
    internal_address: 10.2.3.6
    user: root
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    audit_log:
      enabled: true

ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
    server-alias: "1.2.3.4.nip.io"

to create an RKE cluster with an ingress configuration.

I also like to run Tailscale on these cluster nodes to connect them to other services in my environment when the machines live in various cloud providers.

You can do all of this using a number of automation tools, but for the purposes of testing, or quickly bringing a cluster up, I have a few utility scripts I like to use.

For example, on Scaleway, I might use a script like the following:

#!/bin/bash
# Usage: ./create-nodes.sh <cluster-name> <highest-node-index>

CLUSTER_NAME=$1
RANGE=$2

# Note: brace expansion ({0..$RANGE}) does not expand variables in bash, so use seq:
for i in $(seq 0 "$RANGE"); do
    scw instance server create \
    type=DEV1-S zone=fr-par-1 \
    image=ubuntu_focal root-volume=l:20G \
    name=scw-$CLUSTER_NAME-0$i \
    ip=new \
    project-id=${SCW_PROJECT_ID} | grep PublicIP.Address
done
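The `grep PublicIP.Address` output is a label plus the address on a single line (the format assumed here is `scw`'s human-readable output), so a small filter can strip it down to just the IP before handing it to the next script. A minimal sketch, with an illustrative sample line standing in for the real `scw` output:

```shell
# Pull the last whitespace-separated field (the IP) out of a line like
# "PublicIP.Address   51.15.207.13". In practice the input would come from
# `scw instance server create ... | grep PublicIP.Address`.
extract_ip() {
  awk '{print $NF}'
}

echo "PublicIP.Address   51.15.207.13" | extract_ip
```

This slots naturally between the create script and the bootstrap script in a pipeline.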

to create machines; the output is the public IP address of each newly created server. I either store these addresses and pull them out later using Scaleway's CLI, or, more often, pipe them directly to another script that bootstraps Docker and Tailscale and prints the node block for my RKE config:

#!/bin/bash
# Usage: ./bootstrap.sh <public-ip>
# Installs Tailscale and Docker on a fresh Ubuntu 20.04 node, joins the tailnet,
# then prints the node's RKE node block.

SSH_OPTS=(-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q)

ssh "${SSH_OPTS[@]}" root@$1 "curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.gpg | sudo apt-key add -"
ssh "${SSH_OPTS[@]}" root@$1 "curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.list | sudo tee /etc/apt/sources.list.d/tailscale.list"
ssh "${SSH_OPTS[@]}" root@$1 "apt update; apt install -y tailscale docker.io; systemctl enable --now docker"
ssh "${SSH_OPTS[@]}" root@$1 "tailscale up -authkey ${TAILSCALE_HOST_KEY}"
#ssh "${SSH_OPTS[@]}" root@$1 "tailscale up --accept-routes"
ssh "${SSH_OPTS[@]}" root@$1 "sysctl -w net.ipv4.ip_forward=1"
TAILSCALE_IP=$(ssh "${SSH_OPTS[@]}" root@$1 "tailscale ip -4")
#cfcli add $2 $1 -t A

echo "
  - address: $TAILSCALE_IP
    internal_address: $1
    user: root
    role: [controlplane,worker,etcd]
"
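One caveat: `sysctl -w` only applies until the next reboot. To make forwarding persistent you would typically drop a file under `/etc/sysctl.d/` on the node and reload with `sysctl --system`; the filename below is an assumption, and the file is written to the current directory here purely to show its contents:

```shell
# Persist IP forwarding across reboots. On the node this would target
# /etc/sysctl.d/99-tailscale.conf (hypothetical name), followed by `sysctl --system`.
printf 'net.ipv4.ip_forward = 1\n' > 99-tailscale.conf

cat 99-tailscale.conf
```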

which would be run using:

TAILSCALE_HOST_KEY=$TAILSCALE_HOST_KEY ./bootstrap.sh "${HOST_ADDRESS}"

The output node block then goes under the nodes key in the RKE config above, so when the time comes and you run rke up --config rke.config, your node addresses will be bound to your Tailscale network.
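Putting the pieces together, the whole flow can be glued into one pipeline: create the servers, feed each public IP to the bootstrap script, and append the emitted node blocks to a config that starts with the static header. The script names and the `awk` field extraction below are assumptions based on the earlier snippets, and the loop is stubbed with fixed addresses to show the resulting shape:

```shell
# Start from the static portion of the RKE config.
cat > rke.config <<'EOF'
kubernetes_version: "v1.21.7-rancher1-1"

nodes:
EOF

# In practice, something like:
#   ./create-nodes.sh demo 2 | awk '{print $NF}' | while read -r ip; do
#     TAILSCALE_HOST_KEY=$TAILSCALE_HOST_KEY ./bootstrap.sh "$ip" >> rke.config
#   done
# Stubbed here with fixed addresses so the resulting file shape is visible:
for ip in 1.2.3.4 1.2.3.5; do
  printf '  - address: %s\n    user: root\n    role: [controlplane,worker,etcd]\n' "$ip" >> rke.config
done

cat rke.config
```

From there, `rke up --config rke.config` brings the cluster up over the tailnet.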
