<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: J. Marhee</title>
    <description>The latest articles on The Ops Community ⚙️ by J. Marhee (@jmarhee).</description>
    <link>https://community.ops.io/jmarhee</link>
    <image>
      <url>https://community.ops.io/images/5ng5T3ZyRUnoUYKRRxFkZeyQ3q-8ZQb0xRKsnV9am4Y/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS80Ni8wMmNk/ZWM3NC05Yjg2LTRh/ODYtOWQwZS04NGU0/NDkyZWI4NzYuanBn</url>
      <title>The Ops Community ⚙️: J. Marhee</title>
      <link>https://community.ops.io/jmarhee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/jmarhee"/>
    <language>en</language>
    <item>
      <title>Caddy Ingress Controller with Rancher-managed RKE2/K3s</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Tue, 25 Mar 2025 20:16:32 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/caddy-ingress-controller-with-rancher-managed-rke2k3s-1kpd</link>
      <guid>https://community.ops.io/jmarhee/caddy-ingress-controller-with-rancher-managed-rke2k3s-1kpd</guid>
      <description>&lt;p&gt;The default Ingress controllers for RKE2 and K3s are Nginx and Traefik, respectively, but both can be overridden; for example, K3s can be deployed using &lt;a href="https://www.suse.com/support/kb/doc/?id=000020082" rel="noopener noreferrer"&gt;Nginx in place of Traefik, using the K3s Helm Controller&lt;/a&gt;, and the same approach works for other Ingress controllers, like the one provided for &lt;a href="https://blog.9ssi7.dev/why-choose-caddy-server-over-nginx-e49b01c631a1" rel="noopener noreferrer"&gt;Caddy&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;To do this when deploying clusters via Rancher Manager, there are two steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Disable the default Ingress in the &lt;code&gt;Cluster&lt;/code&gt; specification.&lt;/li&gt;
&lt;li&gt;Add an &lt;a href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration#additionalmanifest" rel="noopener noreferrer"&gt;additional manifest&lt;/a&gt; to &lt;code&gt;spec.rkeConfig&lt;/code&gt; that deploys a Helm chart for the Caddy Ingress.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the first step, either uncheck the box for the Nginx (or, on K3s, Traefik) Ingress in your Rancher UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/oLs2ZOrHrRxUT5E1suI4BHX6lInQyVLL-QPuYegRQJY/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzV0ZHhi/ZTR5NzJxa3ZlZXlk/aTRwLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/oLs2ZOrHrRxUT5E1suI4BHX6lInQyVLL-QPuYegRQJY/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzV0ZHhi/ZTR5NzJxa3ZlZXlk/aTRwLnBuZw" alt=" " width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or, in the cluster YAML, ensure the default Ingress is listed under &lt;code&gt;disable&lt;/code&gt; in your &lt;code&gt;machineGlobalConfig&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;machineGlobalConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;calico&lt;/span&gt;
      &lt;span class="na"&gt;disable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rke2-ingress-nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;For step 2, add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caddy-system&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helm.cattle.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HelmChart&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caddy-ingress-controller&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://caddyserver.github.io/ingress&lt;/span&gt;
  &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caddy-ingress-controller&lt;/span&gt;
  &lt;span class="na"&gt;targetNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caddy-system&lt;/span&gt;
&lt;span class="c1"&gt;#  valuesContent: |&lt;/span&gt;
&lt;span class="c1"&gt;#    minikube: true #hostNet for testing, otherwise use load balancer&lt;/span&gt;
&lt;span class="c1"&gt;#    loadBalancer:&lt;/span&gt;
&lt;span class="c1"&gt;#       enabled: false&lt;/span&gt;
&lt;span class="c1"&gt;#    automaticHTTPS: your@email.com &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to either the UI field for additional manifests:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/MhitN99lcb8yMWjUtgcw4qBtkUnAVrC-Whdse3xM8Mo/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VqMzIy/cmpwODMwMmMwbWJj/bXVyLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/MhitN99lcb8yMWjUtgcw4qBtkUnAVrC-Whdse3xM8Mo/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VqMzIy/cmpwODMwMmMwbWJj/bXVyLnBuZw" alt=" " width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or to &lt;code&gt;.spec.rkeConfig&lt;/code&gt; in the yaml editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;rkeConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;additionalManifest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
      &lt;span class="s"&gt;---&lt;/span&gt;
      &lt;span class="s"&gt;apiVersion: v1&lt;/span&gt;
      &lt;span class="s"&gt;kind: Namespace&lt;/span&gt;
      &lt;span class="s"&gt;metadata:&lt;/span&gt;
        &lt;span class="s"&gt;name: caddy-system&lt;/span&gt;
      &lt;span class="s"&gt;---&lt;/span&gt;
      &lt;span class="s"&gt;apiVersion: helm.cattle.io/v1&lt;/span&gt;
      &lt;span class="s"&gt;kind: HelmChart&lt;/span&gt;
      &lt;span class="s"&gt;metadata:&lt;/span&gt;
        &lt;span class="s"&gt;name: caddy-ingress-controller&lt;/span&gt;
        &lt;span class="s"&gt;namespace: kube-system&lt;/span&gt;
      &lt;span class="s"&gt;spec:&lt;/span&gt;
        &lt;span class="s"&gt;repo: https://caddyserver.github.io/ingress&lt;/span&gt;
        &lt;span class="s"&gt;chart: caddy-ingress-controller&lt;/span&gt;
        &lt;span class="s"&gt;targetNamespace: caddy-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once your cluster is done provisioning, you can verify that Caddy is running as your default Ingress:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/sHwY-3NbNfKZpATFYUVPpldXzLHElHp_g8GL0yDEPS4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL293Z3I1/NW1lMXpjc2g4cjA5/enNjLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/sHwY-3NbNfKZpATFYUVPpldXzLHElHp_g8GL0yDEPS4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL293Z3I1/NW1lMXpjc2g4cjA5/enNjLnBuZw" alt=" " width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>caddy</category>
      <category>networking</category>
      <category>rancher</category>
    </item>
    <item>
      <title>Running Calico eBPF Dataplane kube-proxy replacement on RKE2</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Fri, 19 Jul 2024 22:17:00 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/running-calico-ebpf-dataplane-kube-proxy-replacement-on-rke2-1c02</link>
      <guid>https://community.ops.io/jmarhee/running-calico-ebpf-dataplane-kube-proxy-replacement-on-rke2-1c02</guid>
      <description>&lt;p&gt;Tigera outlines the following benefits to enabling the &lt;a href="https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf#big-picture" rel="noopener noreferrer"&gt;eBPF dataplane&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The eBPF dataplane mode has several advantages over standard Linux networking pipeline mode:&lt;br&gt;
It scales to higher throughput.&lt;br&gt;
It uses less CPU per GBit.&lt;br&gt;
It has native support for Kubernetes services (without needing kube-proxy) that:&lt;br&gt;
Reduces first packet latency for packets to services.&lt;br&gt;
Preserves external client source IP addresses all the way to the pod.&lt;br&gt;
Supports DSR (Direct Server Return) for more efficient service routing.&lt;br&gt;
Uses less CPU than kube-proxy to keep the dataplane in sync.&lt;br&gt;
To learn more and see performance metrics from our test environment, see the blog, Introducing the Calico eBPF dataplane.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As I've written about before, &lt;a href="https://community.ops.io/jmarhee/running-cilium-on-rancher-deployed-rke2-without-kube-proxy-652"&gt;RKE2 can be deployed with Cilium's kube-proxy replacement&lt;/a&gt;, and Calico's eBPF dataplane can likewise replace kube-proxy on RKE2, enabled after installation with a couple of additional changes to your Rancher cluster config.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For existing clusters, you can proceed immediately to the Calico patching process and skip the following steps about RKE2 provisioning and logging into your control plane. You can apply the &lt;code&gt;ConfigMap&lt;/code&gt; below through Rancher or via &lt;code&gt;kubectl&lt;/code&gt;, etc., as normal.&lt;/p&gt;

&lt;p&gt;In Rancher, when you provision your cluster, ensure that &lt;code&gt;disable-kube-proxy&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt; when your &lt;code&gt;cni&lt;/code&gt; is set to &lt;code&gt;calico&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/ZnLCgkwvwPHo3WBggwnVPO8C3V3dnrFIB1Wg7yQijYc/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzIyZGl4/NW9qdzZ3ZjVyd3pu/NHV2LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/ZnLCgkwvwPHo3WBggwnVPO8C3V3dnrFIB1Wg7yQijYc/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzIyZGl4/NW9qdzZ3ZjVyd3pu/NHV2LnBuZw" alt=" " width="442" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that the cluster will not report "Ready" in the UI until you complete this process, as Calico's readiness probe will not signal completion to Rancher until the eBPF dataplane is enabled.&lt;/p&gt;

&lt;p&gt;On the first Control Plane node, set your &lt;code&gt;PATH&lt;/code&gt; and &lt;code&gt;KUBECONFIG&lt;/code&gt; variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;m-3c90900d-c758-459e-b010-e344d7668e48:~ # export PATH=$PATH:/var/lib/rancher/rke2/bin
m-3c90900d-c758-459e-b010-e344d7668e48:~ # export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and apply the following &lt;code&gt;ConfigMap&lt;/code&gt;, substituting your API server address (your cluster's VIP, for example) for &lt;code&gt;${KUBERNETES_VIP}&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes-services-endpoint&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tigera-operator&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;KUBERNETES_SERVICE_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;${KUBERNETES_VIP}'&lt;/span&gt;
  &lt;span class="na"&gt;KUBERNETES_SERVICE_PORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;6443'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;at which point, the Calico operator will restart the Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;watch kubectl get pods -n calico-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and you can proceed to patch the Operator on this node to &lt;a href="https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf#enable-ebpf-mode" rel="noopener noreferrer"&gt;enable the eBPF dataplane&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, in Rancher, you will see the &lt;code&gt;calico&lt;/code&gt; readiness probe for this node clear, and provisioning will proceed for the remaining nodes, as Calico is now ready (and no longer waiting for kube-proxy to come online).&lt;/p&gt;
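
&lt;p&gt;You can also read the setting back from the CLI to confirm the patch took effect (this just queries the same field patched above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Should print: BPF
kubectl get installation.operator.tigera.io default -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
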

&lt;p&gt;More on the Calico eBPF dataplane can be found &lt;a href="https://www.tigera.io/blog/introducing-the-calico-ebpf-dataplane/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Running Cilium on Rancher-deployed RKE2 without kube-proxy</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Fri, 17 May 2024 22:39:00 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/running-cilium-on-rancher-deployed-rke2-without-kube-proxy-652</link>
      <guid>https://community.ops.io/jmarhee/running-cilium-on-rancher-deployed-rke2-without-kube-proxy-652</guid>
      <description>&lt;p&gt;On Linux kernel 5.8 and above, RKE2 can run &lt;a href="https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/" rel="noopener noreferrer"&gt;Cilium without kube-proxy&lt;/a&gt;, and on standalone RKE2 this &lt;a href="https://docs.rke2.io/networking/basic_network_options" rel="noopener noreferrer"&gt;only requires a simple change to a HelmChart partial&lt;/a&gt;. For Rancher-managed RKE2 clusters, the change is similar. &lt;/p&gt;
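
&lt;p&gt;For reference, on standalone RKE2 that partial is just a &lt;code&gt;HelmChartConfig&lt;/code&gt; dropped into the server's manifests directory; a rough sketch along the lines of the linked docs (the API server host and port are placeholders to substitute, and older Cilium releases use &lt;code&gt;kubeProxyReplacement: strict&lt;/code&gt; instead of &lt;code&gt;true&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kube-proxy also needs to be disabled in the RKE2 config (disable-kube-proxy: true)
cat &amp;lt;&amp;lt;EOF | sudo tee /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true
    k8sServiceHost: &amp;lt;your-api-server-address&amp;gt;
    k8sServicePort: 6443
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
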

&lt;p&gt;Ensure your CNI is set to Cilium:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/Lp0PrgZ_XnOQlZfifBUXKawQoQuMdWi6BfL_ouq6uv4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3FzcnM4/eHVzMjM2aGg5Znpz/eDVuLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/Lp0PrgZ_XnOQlZfifBUXKawQoQuMdWi6BfL_ouq6uv4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3FzcnM4/eHVzMjM2aGg5Znpz/eDVuLnBuZw" alt=" " width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch to the YAML edit view, and navigate down to the &lt;code&gt;rke2-cilium&lt;/code&gt; &lt;code&gt;chartValues&lt;/code&gt; key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/mviarD5DdlcTWBfZC-whSD0uRrcf1FXGD8LF3-lDJ_g/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2d1Y2xq/dDBrMjFuZnZ0cHB0/dGthLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/mviarD5DdlcTWBfZC-whSD0uRrcf1FXGD8LF3-lDJ_g/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2d1Y2xq/dDBrMjFuZnZ0cHB0/dGthLnBuZw" alt=" " width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then save, and your cluster will begin provisioning.&lt;/p&gt;

&lt;p&gt;Once the cluster is online, you can validate the status of the kube-proxy replacement using the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nb"&gt;exec &lt;/span&gt;ds/cilium &lt;span class="nt"&gt;--&lt;/span&gt; cilium-dbg status | &lt;span class="nb"&gt;grep &lt;/span&gt;KubeProxyReplacement
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://community.ops.io/images/qimox56aqupD3_o85VuTMnUXQPNNJZI9UkcQqJSiq7s/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2NzaWVw/eHh1c2J0dDJyNWVo/bGUyLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/qimox56aqupD3_o85VuTMnUXQPNNJZI9UkcQqJSiq7s/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2NzaWVw/eHh1c2J0dDJyNWVo/bGUyLnBuZw" alt=" " width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More about this feature, and what else can be done with Cilium, can be found &lt;a href="https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using Git for a Helm Chart Repo</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Thu, 09 May 2024 20:59:08 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/using-git-for-a-helm-chart-repo-1df6</link>
      <guid>https://community.ops.io/jmarhee/using-git-for-a-helm-chart-repo-1df6</guid>
      <description>&lt;p&gt;Git Repos can be used to store Helm Charts, and can be imported by the Helm CLI to install charts to your Kubernetes cluster. My repo of charts for this example can be found &lt;a href="https://github.com/jmarhee/charts/tree/main" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assume you create a repo named &lt;code&gt;charts&lt;/code&gt; with the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/repos/charts (main) $ ls
assets      charts  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where the &lt;code&gt;charts&lt;/code&gt; subdirectory contains all of your Helm chart templates as you might normally expect, and &lt;code&gt;assets&lt;/code&gt; is where you will store the Helm &lt;code&gt;tar.gz&lt;/code&gt; packages we create, which the Helm CLI will actually pull and install.&lt;/p&gt;

&lt;p&gt;In my case, my repo contains the following charts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/repos/charts (main) $ ls charts
sysdepot-authoritative-dns  sysdepot-logging
sysdepot-bitbucket      sysdepot-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and I want to publish a chart for &lt;code&gt;sysdepot-logging&lt;/code&gt;, so I will package it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm package charts/sysdepot-logging -d assets/sysdepot-logging
Successfully packaged chart and saved it to: charts/assets/sysdepot-logging-0.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but before committing this archive and using my Git repo with Helm, I have to create an index so Helm knows where in the repo to find the chart bundle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo index .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which creates an &lt;code&gt;index.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
entries:
  sysdepot-logging:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2024-05-09T10:08:39.866741-05:00"
    description: A Helm chart for Kubernetes
    digest: ac545bd91ad481795556a331b15bd4871e598d1e89d0c54ed338ab9afb2b8771
    name: sysdepot-logging
    type: application
    urls:
    - assets/sysdepot-logging-0.1.0.tgz
    version: 0.1.0
generated: "2024-05-09T10:08:39.866166-05:00"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in the root of my repo that points to the archive in my assets directory. &lt;/p&gt;

&lt;p&gt;So, with that created, I commit my charts, the archives, and the index file to my Git repo as I would with any repository (&lt;code&gt;git add&lt;/code&gt;, &lt;code&gt;commit&lt;/code&gt;, etc.).&lt;/p&gt;
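
&lt;p&gt;For example (the commit message and branch here are just placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add charts/ assets/ index.yaml
git commit -m "Publish sysdepot-logging 0.1.0"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
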

&lt;p&gt;For a Github repo, for example, you can add it to Helm using the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add jmarhee https://raw.githubusercontent.com/{your_username}/{your_repo}/{target_branch}/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then after &lt;code&gt;helm repo update&lt;/code&gt;, you can install charts from your new repo. &lt;/p&gt;
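
&lt;p&gt;For example, to install the chart packaged above (the release name here is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
helm search repo jmarhee
helm install sysdepot-logging jmarhee/sysdepot-logging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;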

</description>
      <category>helm</category>
      <category>git</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>High Availability RKE2 with kube-vip</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Tue, 30 Apr 2024 00:15:11 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/high-availability-rke2-with-kube-vip-4m3p</link>
      <guid>https://community.ops.io/jmarhee/high-availability-rke2-with-kube-vip-4m3p</guid>
      <description>&lt;p&gt;&lt;a href="//rke2.io"&gt;RKE2&lt;/a&gt; can be deployed in a multi-control plane, high  availability configuration, and we can use &lt;a href="https://kube-vip.io/" rel="noopener noreferrer"&gt;kube-vip&lt;/a&gt; to provide virtual IP functionality to the cluster to further support endpoint availability in the event of the cluster losing a control plane node.&lt;/p&gt;

&lt;p&gt;Let's start by defining a few variables ahead of time. We'll need the addresses of your three RKE2 nodes; for example, mine were:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NODE_ADDRESS_0=192.168.0.197
NODE_ADDRESS_1=192.168.0.198
NODE_ADDRESS_2=192.168.0.199
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then the address you intend to use for your virtual IP, and the name of the interface on the node(s) it'll be advertised on (this should be consistent across the nodes, but matters most on the first control plane node, where we install the DaemonSet):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INTERFACE=enp1s0 # confirm this in the output of `ip addr`
KUBE_VIP_ADDRESS=192.168.0.200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case, my nodes are assigned addresses via DHCP, so I just selected the next address to be provisioned (kube-vip relies on &lt;a href="https://kube-vip.io/docs/modes/arp/" rel="noopener noreferrer"&gt;ARP&lt;/a&gt; to advertise the VIP from the current leader in the cluster, which at the time of installation will only be the first node), and then a few other options, like the RKE2 cluster token, the hostname for your VIP, and the kube-vip version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLUSTER_TOKEN=$(tr -dc A-Za-z0-9 &amp;lt;/dev/urandom | head -c 13; echo)
KUBE_VIP_VERSION=v0.8.0
HOSTNAME=rke2.somequant.club
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to do some preparation on each of the hosts:&lt;/p&gt;

&lt;p&gt;Creating a few directories RKE2 will need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in {NODE_ADDRESS_0,NODE_ADDRESS_1,NODE_ADDRESS_2}; do ssh $USER@$i "sudo mkdir -p /etc/rancher/rke2; sudo mkdir -p /var/lib/rancher/rke2/server/manifests/"; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;if your &lt;code&gt;HOSTNAME&lt;/code&gt; variable is not a FQDN or resolvable with DNS, you might want to do another loop to add an &lt;code&gt;/etc/hosts&lt;/code&gt; entry as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in {NODE_ADDRESS_0,NODE_ADDRESS_1,NODE_ADDRESS_2}; do ssh $USER@$i "echo '$KUBE_VIP_ADDRESS $HOSTNAME' | sudo tee -a /etc/hosts"; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, prepare the RKE2 config file for the first control plane node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | tee rke2-prime-config.yaml
token: ${CLUSTER_TOKEN}
tls-san:
- ${HOSTNAME}
- ${KUBE_VIP_ADDRESS}
- ${NODE_ADDRESS_0}
- ${NODE_ADDRESS_1}
- ${NODE_ADDRESS_2}
write-kubeconfig-mode: 600
etcd-expose-metrics: true
cni:
- cilium
disable:
- rke2-ingress-nginx
- rke2-snapshot-validation-webhook
- rke2-snapshot-controller
- rke2-snapshot-controller-crd
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then create (and set aside for now) the config for the other two control plane nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | tee rke2-peer-config.yaml
server: https://${KUBE_VIP_ADDRESS}:9345
token: ${CLUSTER_TOKEN}
tls-san:
- ${HOSTNAME}
- ${KUBE_VIP_ADDRESS}
- ${NODE_ADDRESS_0}
- ${NODE_ADDRESS_1}
- ${NODE_ADDRESS_2}
write-kubeconfig-mode: 600
etcd-expose-metrics: true
cni:
- cilium
disable:
- rke2-ingress-nginx
- rke2-snapshot-validation-webhook
- rke2-snapshot-controller
- rke2-snapshot-controller-crd
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transfer &lt;code&gt;rke2-prime-config.yaml&lt;/code&gt; to &lt;code&gt;NODE_ADDRESS_0&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp rke2-prime-config.yaml $USER@$NODE_ADDRESS_0:/etc/rancher/rke2/config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, log in to &lt;code&gt;NODE_ADDRESS_0&lt;/code&gt;, and we will install and enable RKE2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -

systemctl enable rke2-server.service
systemctl start rke2-server.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this step completes, wait until the node is in a &lt;code&gt;Ready&lt;/code&gt; state in this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You are now ready to install kube-vip. From the above, set your &lt;code&gt;INTERFACE&lt;/code&gt; and &lt;code&gt;KUBE_VIP_ADDRESS&lt;/code&gt; variables on the host as well as the &lt;code&gt;KUBE_VIP_VERSION&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Now, we can proceed to create our kube-vip manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://kube-vip.io/manifests/rbac.yaml

alias kube-vip="ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io image pull ghcr.io/kube-vip/kube-vip:$KUBE_VIP_VERSION; ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io run --rm --net-host ghcr.io/kube-vip/kube-vip:$KUBE_VIP_VERSION vip /kube-vip"

kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $KUBE_VIP_ADDRESS \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This latter command only outputs a YAML manifest; nothing is applied yet. You can either redirect the output to a file (e.g. &lt;code&gt;| tee -a kube-vip.yaml&lt;/code&gt; and then &lt;code&gt;kubectl apply -f&lt;/code&gt; as you might normally) or, as RKE2 allows, write it to &lt;code&gt;/var/lib/rancher/rke2/server/manifests/kube-vip.yaml&lt;/code&gt;, where it will be applied automatically.&lt;/p&gt;
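
&lt;p&gt;For example, to have RKE2 pick it up automatically (same flags as above, just piped to the manifests directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $KUBE_VIP_ADDRESS \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /var/lib/rancher/rke2/server/manifests/kube-vip.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
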

&lt;p&gt;Once applied, you can confirm the status of the kube-vip DaemonSet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ds -n kube-system kube-vip-ds

root@fed7534656e3:~# kubectl get ds -n kube-system kube-vip-ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-vip-ds   1         1         1       1            1           &amp;lt;none&amp;gt;          10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At which point, you should be able to ping both your VIP and the hostname (if you added the hosts entry or the name resolves in DNS; that part was optional):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@fed7534656e3:~# ping rke2.somequant.club
PING rke2-vip.freeipad.internal (192.168.0.200) 56(84) bytes of data.
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=3 ttl=64 time=0.083 ms
64 bytes from rke2.somequant.club (192.168.0.200): icmp_seq=4 ttl=64 time=0.055 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can proceed to joining our other two control plane nodes.&lt;/p&gt;

&lt;p&gt;Copy your &lt;code&gt;rke2-peer-config.yaml&lt;/code&gt; to &lt;code&gt;NODE_ADDRESS_1&lt;/code&gt; and &lt;code&gt;2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in {NODE_ADDRESS_1,NODE_ADDRESS_2}; do scp rke2-peer-config.yaml $USER@$i:/etc/rancher/rke2/config.yaml ; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then proceed to install and enable RKE2 as you did on the first node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in {NODE_ADDRESS_1,NODE_ADDRESS_2}; do ssh $USER@$i "curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE=server sh -"; done

for i in {NODE_ADDRESS_1,NODE_ADDRESS_2}; do ssh $USER@$i "sudo systemctl enable rke2-server.service; sudo systemctl start rke2-server.service"; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At which point, you can log back into the first node and monitor the progress of the newly joining nodes. They will be using the VIP (rather than the first node's address) to join the cluster, so if the first node goes down, the other control plane nodes (or new ones) can still join, and worker nodes remain in contact with the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes -w

kubectl get nodes -o wide
NAME           STATUS   ROLES                       AGE     VERSION           INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
6933af500526   Ready    control-plane,etcd,master   79s     v1.27.12+rke2r1   192.168.0.198   &amp;lt;none&amp;gt;        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
8986366dd7e8   Ready    control-plane,etcd,master   34s     v1.27.12+rke2r1   192.168.0.199   &amp;lt;none&amp;gt;        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
fed7534656e3   Ready    control-plane,etcd,master   8m51s   v1.27.12+rke2r1   192.168.0.197   &amp;lt;none&amp;gt;        Ubuntu 22.04.4 LTS   5.15.0-102-generic   containerd://1.7.11-k3s2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you log in to the node that is currently the leader from the most recent election, you will also see that the &lt;code&gt;INTERFACE&lt;/code&gt; has the &lt;code&gt;KUBE_VIP_ADDRESS&lt;/code&gt; bound:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@fed7534656e3:~$ ip addr show enp1s0 | grep inet
    inet 192.168.0.197/24 metric 100 brd 192.168.0.255 scope global dynamic enp1s0
    inet 192.168.0.200/32 scope global enp1s0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More on kube-vip and its features can be found &lt;a href="https://kube-vip.io/docs/installation/daemonset/#arp-example-for-daemonset" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rancher Backups with NFS-backed PersistentVolume</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Tue, 26 Mar 2024 20:40:51 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/rancher-backups-with-nfs-backed-persistentvolume-41f5</link>
      <guid>https://community.ops.io/jmarhee/rancher-backups-with-nfs-backed-persistentvolume-41f5</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs" rel="noopener noreferrer"&gt;An external NFS provisioner&lt;/a&gt; is required to use an NFS mount as a PersistentVolume backing. This example uses the &lt;a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="noopener noreferrer"&gt;nfs-subdir-external-provisioner&lt;/a&gt; which will create the NFS storage class, nfs-client. &lt;/p&gt;

&lt;p&gt;It should be installed with the reclaim policy set to Retain, to ensure the backup is not lost when a volume is deleted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner     &lt;span class="nt"&gt;--set&lt;/span&gt; nfs.server&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.11     &lt;span class="nt"&gt;--set&lt;/span&gt; nfs.path&lt;span class="o"&gt;=&lt;/span&gt;/var/nfs/general &lt;span class="nt"&gt;--set&lt;/span&gt; nfs.reclaimPolicy&lt;span class="o"&gt;=&lt;/span&gt;Retain &lt;span class="nt"&gt;--set&lt;/span&gt; “nfs.mountOptions&lt;span class="o"&gt;={}&lt;/span&gt;”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring the Backup Operator
&lt;/h2&gt;

&lt;p&gt;In your local cluster in Rancher, select the Rancher Backup application from the App catalog. Upon beginning the installation, you will select “Use an existing StorageClass”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/dzZrLg6ktFzlLTNhHC6JxD6dQF_KSPMGrhdstdpt4cU/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VhOXlx/aWRlYzFrNXZoZmFh/eGViLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/dzZrLg6ktFzlLTNhHC6JxD6dQF_KSPMGrhdstdpt4cU/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3VhOXlx/aWRlYzFrNXZoZmFh/eGViLnBuZw" alt="Radio button offering option of existing storage class as backup target" width="720" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And select the newly installed nfs-client StorageClass and a size for the backup volume:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/KwKwE4ZTaszrxkkFHNU1q6pOUk5grKTIdnmb8Nrn0hw/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3Rud2c1/NWwweHFqb2toZ3ho/bnM0LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/KwKwE4ZTaszrxkkFHNU1q6pOUk5grKTIdnmb8Nrn0hw/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3Rud2c1/NWwweHFqb2toZ3ho/bnM0LnBuZw" alt="dropdown with storageclass name of nfs-client" width="720" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon completing installation, you can verify the status of the PV as Bound:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://community.ops.io/images/fuWx1hM2BhrJ8d9XpIgN21E1UsxkQNUWLrjLVSm-T9o/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzNhcmsz/c2V2Z3h1ZjQ5dnB6/anR0LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/fuWx1hM2BhrJ8d9XpIgN21E1UsxkQNUWLrjLVSm-T9o/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzNhcmsz/c2V2Z3h1ZjQ5dnB6/anR0LnBuZw" alt="kubectl output of persistentvolume status" width="800" height="24"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Reclaim Policy, Access Mode, and Capacity will be visible in the output of this command as well.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>rancher</category>
      <category>nfs</category>
      <category>storage</category>
    </item>
    <item>
      <title>Using Makefiles to build switching between Podman and Docker automatically</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Mon, 19 Jun 2023 01:49:00 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/using-makefiles-to-build-switching-between-podman-and-docker-automatically-5dil</link>
      <guid>https://community.ops.io/jmarhee/using-makefiles-to-build-switching-between-podman-and-docker-automatically-5dil</guid>
      <description>&lt;p&gt;I use a &lt;code&gt;Makefile&lt;/code&gt; with most of my projects, and for the ones that are packaged as containers, this usually includes a step like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; me/my-project:some-tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but I recently began using a system that uses Podman by default. I could have aliased &lt;code&gt;docker&lt;/code&gt; to &lt;code&gt;podman&lt;/code&gt;, but more properly, I just updated my &lt;code&gt;Makefile&lt;/code&gt; with the following to make the build step conditional and more flexible between hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="k"&gt;ifeq&lt;/span&gt; &lt;span class="nv"&gt;($(shell command -v podman 2&amp;gt; /dev/null),)&lt;/span&gt;
    &lt;span class="nv"&gt;CMD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker
&lt;span class="k"&gt;else&lt;/span&gt;
    &lt;span class="nv"&gt;CMD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;podman
&lt;span class="k"&gt;endif&lt;/span&gt;
&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nl"&gt;build-image&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;$(&lt;/span&gt;CMD&lt;span class="p"&gt;)&lt;/span&gt; build &lt;span class="nt"&gt;-t&lt;/span&gt; jmarhee/scrivener:&lt;span class="p"&gt;$(&lt;/span&gt;TAG&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--no-cache&lt;/span&gt;
&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nl"&gt;quick-launch&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    x11docker &lt;span class="nt"&gt;--backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;CMD&lt;span class="p"&gt;)&lt;/span&gt; jmarhee/scrivener
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, it not only detects that I'm running Podman, but the other targets that rely on the container runtime (in this case, launching a GUI application from the container via x11docker) pick up the same setting as well.&lt;/p&gt;
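
&lt;p&gt;Invocation is then identical on either host; for example (the tag value here is just a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make build-image TAG=latest
make quick-launch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;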

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>kubernetes</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>Updating DNS Resolution in Kubernetes</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Tue, 18 Apr 2023 23:22:33 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/updating-dns-resolution-in-kubernetes-3kei</link>
      <guid>https://community.ops.io/jmarhee/updating-dns-resolution-in-kubernetes-3kei</guid>
      <description>&lt;p&gt;There are situations where one would prefer a Kubernetes workload to have an updated set of DNS nameservers, and there are two sets of solutions: ones that override host DNS resolution (for example, the values in &lt;code&gt;/etc/resolv.conf&lt;/code&gt; are overridden by a custom &lt;code&gt;resolv.conf&lt;/code&gt; provided using various methods), and ones scoped to the workload as part of the &lt;code&gt;Pod&lt;/code&gt; spec. This piece will cover a few of those methods. &lt;/p&gt;

&lt;p&gt;If you are using &lt;code&gt;kube-dns&lt;/code&gt;, this can be updated cluster-wide using a &lt;code&gt;ConfigMap&lt;/code&gt; like the following to specify your DNS nameservers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-dns&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;upstreamNameservers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;["8.8.8.8"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, a custom &lt;code&gt;resolv.conf&lt;/code&gt; file can be provided to the &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noopener noreferrer"&gt;Kubelet&lt;/a&gt; via its &lt;code&gt;--resolv-conf&lt;/code&gt; CLI flag, so, for example, a file containing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver 8.8.8.8
nameserver 8.8.4.4
search ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and the Kubelet will expose that alternative resolver configuration rather than the host's.&lt;/p&gt;
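
&lt;p&gt;How you pass that flag depends on how your distribution launches the Kubelet; on an RKE2 node, for example, it could be passed through via &lt;code&gt;kubelet-arg&lt;/code&gt; in the node's config (the file path here is a hypothetical placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee -a /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "resolv-conf=/etc/rancher/custom-resolv.conf"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
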

&lt;p&gt;For multi-tenant configurations, the &lt;code&gt;Pod&lt;/code&gt; spec supports a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config" rel="noopener noreferrer"&gt;&lt;code&gt;dnsConfig&lt;/code&gt;&lt;/a&gt; block that allows you to build your desired resolution settings into your workload definitions; from the linked documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dns-example&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;dnsPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;None"&lt;/span&gt;
  &lt;span class="na"&gt;dnsConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nameservers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.0.2.1&lt;/span&gt; &lt;span class="c1"&gt;# this is an example&lt;/span&gt;
    &lt;span class="na"&gt;searches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ns1.svc.cluster-domain.example&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my.dns.search.suffix&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ndots&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;edns0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason this is advantageous for multi-tenant clusters is that it allows specific workloads to use these DNS settings, rather than setting them at the cluster level, which a single-tenant cluster might find more desirable. &lt;/p&gt;

&lt;p&gt;Here are some additional DNS-related resources for, both, updating resolvers and modifying other DNS services in Kubernetes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.ioDNS%20for%20Services%20and%20Pods"&gt;DNS for Seervices and Pods&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="noopener noreferrer"&gt;Customizing DNS Service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.suse.com/support/kb/doc/?id=000020117" rel="noopener noreferrer"&gt;How to override DNS results served by CoreDNS&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>dns</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Fixing GlusterFS not in 'Peer in Cluster' State Error</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Thu, 03 Nov 2022 19:24:52 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/fixing-glusterfs-not-in-peer-in-cluster-state-error-70l</link>
      <guid>https://community.ops.io/jmarhee/fixing-glusterfs-not-in-peer-in-cluster-state-error-70l</guid>
      <description>&lt;p&gt;When creating a Glusterfs deployment, after peering your other nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gluster peer probe node1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you can see the status of the cluster using &lt;code&gt;gluster peer status&lt;/code&gt;, where you might see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hostname: 192.168.0.122
Uuid: 35c563b6-b54a-4fb6-98a9-243d34e0a82e
State: Accepted peer request (Connected)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but in this State, you cannot create a volume yet, and you might get an error like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ubuntu@61d8372c31e8:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;gluster volume create replicated-data replica 3 192.168.0.121:/gluster/02 192.168.0.122:/gluster/02 192.168.0.123:/gluster/02
volume create: replicated-data: failed: Host 192.168.0.122 is not &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s1"&gt;'Peer in Cluster'&lt;/span&gt; state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it is stuck in this state, using the UUID from the &lt;code&gt;peer status&lt;/code&gt; above, you can modify the &lt;code&gt;/var/lib/glusterd/peers/{UUID}&lt;/code&gt; file, where you'll see a &lt;code&gt;state=4&lt;/code&gt; line. &lt;/p&gt;
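
&lt;p&gt;One way to change that &lt;code&gt;state=4&lt;/code&gt; line to &lt;code&gt;state=3&lt;/code&gt; in place, using the UUID from the &lt;code&gt;peer status&lt;/code&gt; output above (adjust the UUID for your own nodes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i 's/state=4/state=3/' /var/lib/glusterd/peers/35c563b6-b54a-4fb6-98a9-243d34e0a82e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
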

&lt;p&gt;Make the same &lt;code&gt;state=3&lt;/code&gt; change in the peer file for the first node's UUID on the peer node if needed, then restart glusterd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart glusterd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and recheck the peer status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Hostname: 192.168.0.122
Uuid: 35c563b6-b54a-4fb6-98a9-243d34e0a82e
State: Peer &lt;span class="k"&gt;in &lt;/span&gt;Cluster &lt;span class="o"&gt;(&lt;/span&gt;Connected&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;at which point you can proceed to create a volume. &lt;/p&gt;

</description>
      <category>sysadmin</category>
      <category>storage</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>Preparing a FreeBSD Cloud Image with Cloud-Init</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Sat, 22 Oct 2022 20:07:38 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/preparing-a-freebsd-cloud-image-with-cloud-init-22lh</link>
      <guid>https://community.ops.io/jmarhee/preparing-a-freebsd-cloud-image-with-cloud-init-22lh</guid>
      <description>&lt;p&gt;I use &lt;a href="https://www.linux-kvm.org/page/Main_Page" rel="noopener noreferrer"&gt;Kernel-based Virtual Machines&lt;/a&gt; or KVM as my primary virtualization method in my development environment, and use &lt;a href="https://cloudinit.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;cloud-init&lt;/a&gt; to prepare my virtual machine images. There's a variety of ways to procure &lt;a href="https://docs.openstack.org/image-guide/obtain-images.html" rel="noopener noreferrer"&gt;cloud images&lt;/a&gt;, but a lot of times these registries are focused on Linux images, or hypervisor technology like &lt;a href="https://www.linuxvmimages.com/how-to-use/vm-image-password/" rel="noopener noreferrer"&gt;VMWare or VirtualBox&lt;/a&gt;, and while normally useful for me at work (where I primarily work on Linux systems), I was very excited to find a well maintained site for &lt;a href="https://bsd-cloud-image.org/" rel="noopener noreferrer"&gt;BSD cloud images&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;After downloading the image I want (in this case &lt;code&gt;freebsd-13.0-zfs.qcow2&lt;/code&gt;) to my image path in &lt;code&gt;/var/lib/libvirt/images&lt;/code&gt;, I create a project directory for my cloud-init scripts in &lt;code&gt;~/freebsd-cidata&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;instance-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;freebsd&lt;/span&gt;
&lt;span class="na"&gt;local-hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;freebsd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in a file called &lt;code&gt;meta-data&lt;/code&gt;, and in &lt;code&gt;user-data&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;
&lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;lock_passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;hashed_passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;PASSWD_HASH&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;ssh_pwauth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;freebsd&lt;/span&gt;
    &lt;span class="na"&gt;ssh_authorized_keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssh-rsa ...&lt;/span&gt;
    &lt;span class="na"&gt;hashed_passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;PASSWD_HASH&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wheel&lt;/span&gt;
    &lt;span class="na"&gt;ssh_pwauth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;write_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/etc/sudoers&lt;/span&gt;
    &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;%wheel ALL=(ALL) NOPASSWD: ALL&lt;/span&gt;
    &lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This user-data updates the root password, creates a &lt;code&gt;freebsd&lt;/code&gt; user, adds that user to the &lt;code&gt;wheel&lt;/code&gt; group, adds your SSH public keys, and then updates the sudoers file to allow the &lt;code&gt;wheel&lt;/code&gt; group passwordless &lt;code&gt;sudo&lt;/code&gt; access (a common pattern in cloud images, but this can be done any way you'd like). I recommend taking a look at the &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html" rel="noopener noreferrer"&gt;cloud-init examples&lt;/a&gt; for other options and automation you can bake into your user-data file.&lt;/p&gt;

&lt;p&gt;For example, on my Ubuntu images, I like to do a little more than I am doing above for the selected FreeBSD image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;
&lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu&lt;/span&gt;
    &lt;span class="na"&gt;ssh_authorized_keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssh-rsa ...&lt;/span&gt;
    &lt;span class="na"&gt;sudo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ALL=(ALL)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;NOPASSWD:ALL'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sudo&lt;/span&gt;
    &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
&lt;span class="na"&gt;runcmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hostnamectl set-hostname $(openssl rand -hex 6)&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo PubkeyAuthentication yes | sudo tee -a /etc/ssh/sshd_config&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo PubkeyAcceptedKeyTypes=+ssh-rsa | sudo tee -a /etc/ssh/sshd_config&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;systemctl restart ssh&lt;/span&gt;
&lt;span class="na"&gt;write_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;Ubuntu 22.04 LTS \n \l&lt;/span&gt;
    &lt;span class="s"&gt;IPv4: \4{enp1s0}&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/issue&lt;/span&gt;
&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0644'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, on Ubuntu systems, I like to generate a random hostname, update SSH configuration options, and update the console to show the IP address of the instance. &lt;/p&gt;

&lt;p&gt;I like to use a password for the users. If you prefer only to use SSH keys, you can add a new list item for each key under &lt;code&gt;ssh_authorized_keys&lt;/code&gt;, but if you want to set a password as well, you can create the password hash using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkpasswd &lt;span class="nt"&gt;--method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SHA-512 &lt;span class="nt"&gt;--rounds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and populate that value in your user-data file. &lt;/p&gt;
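
&lt;p&gt;If you keep the &lt;code&gt;{PASSWD_HASH}&lt;/code&gt; placeholder in the file as I do above, one way to drop the generated hash in is a quick &lt;code&gt;sed&lt;/code&gt; substitution; this is just a sketch, and assumes the placeholder is written literally in &lt;code&gt;~/freebsd-cidata/user-data&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Generate the hash (mkpasswd prompts for the password) and capture it
HASH=$(mkpasswd --method=SHA-512 --rounds=4096)

# Substitute it for the {PASSWD_HASH} placeholder in the user-data file;
# the | delimiter avoids clashing with the / characters in the hash
sed -i "s|{PASSWD_HASH}|${HASH}|g" ~/freebsd-cidata/user-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

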

&lt;p&gt;Once those files are saved, you can create the cloud-init iso:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;genisoimage &lt;span class="nt"&gt;-output&lt;/span&gt; /var/lib/libvirt/images/cidata-freebsd.iso &lt;span class="nt"&gt;-V&lt;/span&gt; cidata &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-J&lt;/span&gt; ./user-data ./meta-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then, back in your &lt;code&gt;/var/lib/libvirt/images&lt;/code&gt; directory, proceed to create the VM, first by creating a clone of your cloud image for use on the new VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;qemu-img create &lt;span class="nt"&gt;-b&lt;/span&gt; freebsd-13.0-zfs.qcow2 &lt;span class="nt"&gt;-f&lt;/span&gt; qcow2 &lt;span class="nt"&gt;-F&lt;/span&gt; qcow2 &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;your&lt;/span&gt;&lt;span class="p"&gt; VM name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.img 80G
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then by optionally using a tool like &lt;a href="https://linux.die.net/man/1/virt-install" rel="noopener noreferrer"&gt;virt-install&lt;/a&gt; (which automates some of the work of using libvirt with KVM) to consume both the disk image and the cloud-init ISO file along with your other VM specifications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;virt-install &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--ram&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4196 &lt;span class="nt"&gt;--vcpus&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;--import&lt;/span&gt; &lt;span class="nt"&gt;--disk&lt;/span&gt; &lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VM_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.img,format&lt;span class="o"&gt;=&lt;/span&gt;qcow2 &lt;span class="nt"&gt;--disk&lt;/span&gt; &lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cidata-freebsd.iso,device&lt;span class="o"&gt;=&lt;/span&gt;cdrom &lt;span class="nt"&gt;--os-variant&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;freebsd13.0 &lt;span class="nt"&gt;--network&lt;/span&gt; &lt;span class="nv"&gt;bridge&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;br0,model&lt;span class="o"&gt;=&lt;/span&gt;virtio &lt;span class="nt"&gt;--graphics&lt;/span&gt; vnc,listen&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0 &lt;span class="nt"&gt;--noautoconsole&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then follow the spin-up by running &lt;code&gt;virsh list&lt;/code&gt; to get the VM ID, and attaching the console with &lt;code&gt;virsh console ${ID}&lt;/code&gt; to watch for cloud-init errors. The console output will contain a field like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/local/bin/cloud-init startingCloud-init v. 21.2 running 'init' at Sat, 22 Oct 2022 19:41:45 +0000. Up 13.766839027404785 seconds.
ci-info: ++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++
ci-info: +--------+------+----------------+------------+-------+-------------------+
ci-info: | Device |  Up  |    Address     |    Mask    | Scope |     Hw-Address    |
ci-info: +--------+------+----------------+------------+-------+-------------------+
ci-info: |  lo0   | True |   127.0.0.1    | 0xff000000 |   .   |         .         |
ci-info: |  lo0   | True |    ::1/128     |     .      |   .   |         .         |
ci-info: |  lo0   | True | fe80::1%lo0/64 |     .      |  0x2  |         .         |
ci-info: | vtnet0 | True | 192.168.0.115  | 0xffffff00 |   .   | 52:54:00:f4:32:c2 |
ci-info: +--------+------+----------------+------------+-------+-------------------+
ci-info:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;containing your instance networking information, so you can attempt to connect via SSH (useful if you don't otherwise know the instance IP), or, if you set a password, you can log in from this console as well. &lt;/p&gt;
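
&lt;p&gt;Putting those pieces together, checking on the new instance might look something like the following sketch; the &lt;code&gt;freebsd&lt;/code&gt; user comes from the user-data above, and the IP is just the example address from my console output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List running VMs to find the new instance's ID or name
sudo virsh list

# Attach the serial console to watch cloud-init finish (Ctrl+] detaches)
sudo virsh console ${ID}

# Or connect over SSH using the address reported in the ci-info table
ssh freebsd@192.168.0.115
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;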

</description>
      <category>freebsd</category>
      <category>cloudinit</category>
      <category>automation</category>
      <category>virtualization</category>
    </item>
    <item>
      <title>How to delete a Kubernetes namespace stuck at Terminating Status</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Wed, 17 Aug 2022 21:27:46 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/how-to-delete-a-kubernetes-namespace-stuck-at-terminating-status-1aen</link>
      <guid>https://community.ops.io/jmarhee/how-to-delete-a-kubernetes-namespace-stuck-at-terminating-status-1aen</guid>
      <description>&lt;p&gt;After running a command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and the command hangs indefinitely after accepting the deletion request, you might find it sitting at &lt;code&gt;Terminating&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ns/my-namespace
NAME      STATUS         AGE
testing   Terminating    113m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This might be due to a &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="noopener noreferrer"&gt;Finalizer&lt;/a&gt; in the spec: conditions that must be met before deletion can complete. You can view the finalizers for your namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"finalizers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="s2"&gt;"kubernetes"&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
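


&lt;p&gt;If you only want to see the finalizers, rather than the full object, a jsonpath query against the same resource should do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ns/my-namespace -o jsonpath='{.spec.finalizers}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;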



&lt;p&gt;This is what a standard namespace with no other conditions might otherwise look like, so if it's hanging, this is likely why.&lt;/p&gt;

&lt;p&gt;You can modify the resource using &lt;code&gt;kubectl edit ns/my-namespace&lt;/code&gt;, and replace the finalizer with an empty array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"finalizers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or, if this is unsuccessful, use the Kube API to update the specification and allow it to finalize manually, first by dumping the resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ns/my-namespace &lt;span class="nt"&gt;-o&lt;/span&gt; json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; my-namespace.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and editing that &lt;code&gt;my-namespace.json&lt;/code&gt; to remove the finalizer, and then making a &lt;code&gt;PUT&lt;/code&gt; request to the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl replace &lt;span class="nt"&gt;--raw&lt;/span&gt; &lt;span class="s2"&gt;"/api/v1/namespaces/my-namespace/finalize"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ./my-namespace.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or using an HTTP client to issue the request (this will usually require authentication unless you expose the API via &lt;a href="https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/" rel="noopener noreferrer"&gt;&lt;code&gt;kubectl proxy&lt;/code&gt;&lt;/a&gt;) using that same JSON dump:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; PUT &lt;span class="nt"&gt;--data-binary&lt;/span&gt; @my-namespace.json &lt;span class="s2"&gt;"https://&amp;lt;Kube API Endpoint&amp;gt;/api/v1/namespaces/my-namespace/finalize"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
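


&lt;p&gt;As a sketch of the &lt;code&gt;kubectl proxy&lt;/code&gt; route, you can run the proxy locally and send the same request to it, letting the proxy handle authentication; the port below is just the proxy's default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start a locally-authenticated proxy to the API server (defaults to port 8001)
kubectl proxy &amp;

# Issue the same PUT against the finalize subresource through the proxy
curl -H "Content-Type: application/json" -X PUT --data-binary @my-namespace.json "http://127.0.0.1:8001/api/v1/namespaces/my-namespace/finalize"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;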



&lt;p&gt;You can then confirm, once you receive a successful response, that the namespace is deleted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ns/my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which should return no resource. &lt;/p&gt;
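
&lt;p&gt;Once the namespace is fully gone, that command should return something along these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error from server (NotFound): namespaces "my-namespace" not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;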

</description>
      <category>kubernetes</category>
      <category>sysadmin</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Exporting images with Containerd CLI (ctr)</title>
      <dc:creator>J. Marhee</dc:creator>
      <pubDate>Sun, 07 Aug 2022 03:28:53 +0000</pubDate>
      <link>https://community.ops.io/jmarhee/exporting-images-with-containerd-cli-ctr-163n</link>
      <guid>https://community.ops.io/jmarhee/exporting-images-with-containerd-cli-ctr-163n</guid>
      <description>&lt;p&gt;I recently found that a Docker image I use as a part of one of my Helm charts was no longer available on DockerHub, and I hadn't mirrored it in another location. It was running in a &lt;a href="//k3s.io"&gt;K3s&lt;/a&gt; cluster, meaning I couldn't simply &lt;code&gt;docker tag original-maintainer/image:tag me/image:tag&lt;/code&gt; and push it to the Hub from my local machine, which was running the Docker CLI. &lt;/p&gt;

&lt;p&gt;First, logging into a node with the image on it, I ran the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ctr &lt;span class="nt"&gt;-n&lt;/span&gt; k8s.io images &lt;span class="nb"&gt;export &lt;/span&gt;bind9:9.11.tar docker.io/internetsystemsconsortium/bind9:9.11 &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which exported the image to a tarball. This is similar to &lt;a href="https://docs.docker.com/engine/reference/commandline/image_save/" rel="noopener noreferrer"&gt;&lt;code&gt;docker image save&lt;/code&gt;&lt;/a&gt;. &lt;/p&gt;
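
&lt;p&gt;If you aren't sure which node still has the image cached, you can check containerd's &lt;code&gt;k8s.io&lt;/code&gt; namespace on each node first; something like this should show it if it's present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List images in the Kubernetes containerd namespace and filter for the one you need
ctr -n k8s.io images ls | grep bind9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
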

&lt;p&gt;Then, pulling it down to my Docker CLI machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp jdmarhee@192.168.0.60:bind9:9.11.tar &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then used &lt;a href="https://docs.docker.com/engine/reference/commandline/load/" rel="noopener noreferrer"&gt;&lt;code&gt;docker image load&lt;/code&gt;&lt;/a&gt; to load the image from the tarball:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker image load &lt;span class="nt"&gt;--input&lt;/span&gt; bind9&lt;span class="se"&gt;\:&lt;/span&gt;9.11.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which returns the original tag for the image (&lt;code&gt;internetsystemsconsortium/bind9:9.11&lt;/code&gt;) so I can re-tag it and push:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker image tag internetsystemsconsortium/bind9:9.11 jmarhee/bind9:9.11

docker push jmarhee/bind9:9.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now this image is available on Docker Hub for my (and your) new clusters to access. &lt;/p&gt;
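
&lt;p&gt;As an aside, if you'd rather skip the registry round-trip and seed the tarball directly onto another containerd-backed node, the counterpart to the export should be an import in that node's &lt;code&gt;k8s.io&lt;/code&gt; namespace; a sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# After copying the tarball to the target node, import it into containerd
ctr -n k8s.io images import bind9:9.11.tar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;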

</description>
      <category>containerd</category>
      <category>docker</category>
      <category>dns</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
