<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>The Ops Community ⚙️: Alexandru Dejanu</title>
    <description>The latest articles on The Ops Community ⚙️ by Alexandru Dejanu (@dejanualex).</description>
    <link>https://community.ops.io/dejanualex</link>
    <image>
      <url>https://community.ops.io/images/CSOaVKprEALlRu_Bd-ka0qZb9dSHZ9idZa1EEdXBpWQ/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL3Vz/ZXIvcHJvZmlsZV9p/bWFnZS84NDAvYWNm/NDE0ZDctZmEyZC00/OWVhLTk1MTMtYzRl/ZDFjNDVhMTVlLmpw/Zw</url>
      <title>The Ops Community ⚙️: Alexandru Dejanu</title>
      <link>https://community.ops.io/dejanualex</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://community.ops.io/feed/dejanualex"/>
    <language>en</language>
    <item>
      <title>EKS: Guide to create Kubernetes clusters in AWS</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Thu, 13 Jul 2023 15:47:22 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/eks-guide-to-create-kubernetes-clusters-in-aws-3p1o</link>
      <guid>https://community.ops.io/dejanualex/eks-guide-to-create-kubernetes-clusters-in-aws-3p1o</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) is a managed service within AWS, that allows you to run a Kubernetes cluster. Managed service translates into the fact that there's no need to install, or maintain the cluster's control or data plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You're going to need a user with the right policies for services like EKS, CloudFormation, EC2, and IAM, plus an access key for that user (&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey"&gt;guide here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tooling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; - CLI for communicating with Kubernetes (&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/"&gt;installation guide here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws&lt;/code&gt; - CLI for interacting with AWS services (&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;installation guide here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eksctl&lt;/code&gt; - CLI for creating and managing clusters on EKS (&lt;a href="https://eksctl.io/introduction/#for-unix"&gt;installation guide here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the &lt;code&gt;aws-iam-authenticator&lt;/code&gt;, which allows you to use AWS IAM credentials to authenticate to the cluster (&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html"&gt;installation guide here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/configure/index.html"&gt;Configure AWS CLI&lt;/a&gt;: run the following command without arguments in order to get prompted for configuration values (e.g. AWS Access Key Id and your AWS Secret Access Key, AWS region).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/iam/index.html"&gt;IAM AWS CLI&lt;/a&gt;: for &lt;code&gt;eksctl&lt;/code&gt; you will need to have AWS API credentials configured. Amazon EKS uses the IAM service to provide authentication to your Kubernetes cluster through the &lt;a href="//AWS%20IAM%20authenticator%20for%20Kubernetes."&gt;AWS IAM authenticator for Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, verify that you're authenticated by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam get-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating Kubernetes cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, list any existing clusters (on a fresh setup there should be none):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl get clusters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To allow SSH access to nodes, &lt;code&gt;eksctl&lt;/code&gt; imports the SSH public key from &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt; by default; to use a different key, pass its path to the &lt;code&gt;--ssh-public-key&lt;/code&gt; flag.&lt;/p&gt;
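
&lt;p&gt;For instance, a dedicated key pair can be generated and handed to &lt;code&gt;eksctl&lt;/code&gt; (a sketch; the &lt;code&gt;eks_key&lt;/code&gt; name and path are just illustrations):&lt;/p&gt;

```shell
# Generate a dedicated RSA key pair with no passphrase (illustrative name)
ssh-keygen -t rsa -b 4096 -f ./eks_key -N "" -q
# The public half is then passed at cluster-creation time, e.g.:
#   eksctl create cluster --ssh-public-key=./eks_key.pub ...
ls -l ./eks_key.pub
```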

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html"&gt;EKS clusters&lt;/a&gt; run in a VPC, therefore you need an &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html"&gt;Amazon VPC&lt;/a&gt; with public and private subnets. The VPC must have a sufficient number of IP addresses available for the cluster, any nodes, and other Kubernetes resources that you want to create, and also it must have DNS hostname and DNS resolution support (otherwise nodes can't register to the cluster). You can't change which subnets you want to use after cluster creation.&lt;/p&gt;

&lt;p&gt;The beauty of it is that &lt;code&gt;eksctl&lt;/code&gt; does all the heavy lifting for you, and it even lets you &lt;a href="https://eksctl.io/introduction/"&gt;customize your Kubernetes cluster&lt;/a&gt; as needed (number of nodes, region, node size).&lt;/p&gt;

&lt;p&gt;Example for a 2-node cluster in the eu-west-1 region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo_cluster &lt;span class="nt"&gt;--nodes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eu-west-1 &lt;span class="nt"&gt;--ssh-public-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks_key.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Behind the scenes &lt;code&gt;eksctl&lt;/code&gt; uses CloudFormation; in this case it creates two CloudFormation stacks, one for the cluster itself (control plane) and one for the initial managed nodegroup (worker nodes).&lt;/p&gt;

&lt;p&gt;Furthermore, you can use the &lt;a href="https://console.aws.amazon.com/cloudformation/home"&gt;CloudFormation console&lt;/a&gt; to check their status.&lt;/p&gt;
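
&lt;p&gt;The same check can be done from the terminal (a sketch, assuming the cluster name and region from the example above):&lt;/p&gt;

```shell
# Stacks created by eksctl for this cluster
eksctl utils describe-stacks --region=eu-west-1 --cluster=demo_cluster
# Or list recently created stacks via the AWS CLI
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
```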

&lt;p&gt;&lt;a href="https://community.ops.io/images/6JKm2bNBlPyxTE7ey5XoAjomYnP0Kd-ZnAgnE9QBoaM/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvOG5w/eXM4azlveGt6ZjVv/OWJnNmkucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/6JKm2bNBlPyxTE7ey5XoAjomYnP0Kd-ZnAgnE9QBoaM/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvOG5w/eXM4azlveGt6ZjVv/OWJnNmkucG5n" alt="Image description" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the cluster is created everything is set; you can verify that &lt;code&gt;kubectl&lt;/code&gt; points to the correct cluster by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
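
&lt;p&gt;As an additional sanity check, you can list the worker nodes (assuming the cluster created above is reachable):&lt;/p&gt;

```shell
# Both nodes should show up with STATUS Ready
kubectl get nodes -o wide
```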



&lt;p&gt;Leveraging &lt;code&gt;eksctl&lt;/code&gt; you can deploy a Kubernetes cluster in AWS in a matter of minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/H6AZqXWbWGywnIn3MYPXHRN-Db7XvyJ2dOUfXndYBbk/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvemE5/N2RxNzFyNjUwcGx6/ZWJpZzAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/H6AZqXWbWGywnIn3MYPXHRN-Db7XvyJ2dOUfXndYBbk/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvemE5/N2RxNzFyNjUwcGx6/ZWJpZzAucG5n" alt="Image description" width="589" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>Linux: Execute commands in parallel without parallel-ssh or Ansible</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Thu, 08 Jun 2023 12:07:06 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/linux-execute-commands-in-parallel-without-parallel-ssh-or-ansible-2ae1</link>
      <guid>https://community.ops.io/dejanualex/linux-execute-commands-in-parallel-without-parallel-ssh-or-ansible-2ae1</guid>
      <description>&lt;p&gt;I’ve decided to write this bite-sized article after reading Linux: Execute commands in parallel with parallel-ssh, but I was confronted with a particular use case in which I didn’t have access to the internet and I faced constraints that prevented me from installing new packages.&lt;/p&gt;

&lt;p&gt;Without much at hand, I’ve defined a server list, and scripted the following approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;## #####################################################################&lt;/span&gt;
&lt;span class="c"&gt;## run concurrently command passed as argv on multiple remote servers ##&lt;/span&gt;
&lt;span class="c"&gt;## UPDATE: servers array and user variable                            ##&lt;/span&gt;
&lt;span class="c"&gt;########################################################################&lt;/span&gt;
&lt;span class="c"&gt;# define an array of remote servers&lt;/span&gt;
&lt;span class="nv"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s2"&gt;"server1.fqdn"&lt;/span&gt; &lt;span class="s2"&gt;"server2.fqdn"&lt;/span&gt; &lt;span class="s2"&gt;"server3.fqdn"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Function to execute command on a remote server&lt;/span&gt;
execute_command&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;
    &lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;
    &lt;span class="c"&gt;# define USER for the ssh connection &lt;/span&gt;
    ssh user@&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$server&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;server &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# exec arg command as background process&lt;/span&gt;
    execute_command &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$server&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="c"&gt;# wait for all background processes to complete&lt;/span&gt;
&lt;span class="nb"&gt;wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script initiates multiple SSH connections and uses the &lt;code&gt;&amp;amp;&lt;/code&gt; operator together with &lt;code&gt;wait&lt;/code&gt; to achieve rudimentary concurrency; usage is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./pssh.sh &lt;span class="s2"&gt;"hostname -A"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
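
&lt;p&gt;If &lt;code&gt;xargs&lt;/code&gt; is available, the same fan-out can be sketched without background jobs; here &lt;code&gt;echo&lt;/code&gt; stands in for the real &lt;code&gt;ssh&lt;/code&gt; call:&lt;/p&gt;

```shell
# Run one command per server, up to 3 at a time, via xargs -P
# (echo is a placeholder for the actual ssh invocation)
servers_out=$(printf '%s\n' server1 server2 server3 | xargs -P 3 -I{} echo "reached {}")
echo "$servers_out"
```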



&lt;p&gt;Also, if you fancy a sequential one-liner, define a &lt;code&gt;hosts_file&lt;/code&gt; that contains the servers, replace &lt;code&gt;user&lt;/code&gt;, and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-u10&lt;/span&gt; line&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="k"&gt;do &lt;/span&gt;ssh user@&lt;span class="nv"&gt;$line&lt;/span&gt; &lt;span class="s1"&gt;'hostname -A'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="k"&gt;done &lt;/span&gt;10&amp;lt;hosts_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, if you prefer to pass a script to the one-liner, just replace &lt;code&gt;script.sh&lt;/code&gt; with the desired script (here the host list is read from &lt;code&gt;servers.txt&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-u10&lt;/span&gt; line&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$line&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;ssh user@&lt;span class="nv"&gt;$line&lt;/span&gt; &lt;span class="s1"&gt;'bash -s'&lt;/span&gt; &amp;lt; script.sh&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="k"&gt;done &lt;/span&gt;10&amp;lt; servers.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>devops</category>
      <category>productivity</category>
      <category>automation</category>
    </item>
    <item>
      <title>Iron Bank: Secure Registries, Secure Containers</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Thu, 09 Feb 2023 07:46:08 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/iron-bank-secure-registries-secure-containers-5hgo</link>
      <guid>https://community.ops.io/dejanualex/iron-bank-secure-registries-secure-containers-5hgo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Iron Bank – DoD Centralized Artifacts Repository (DCAR)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Long story short...&lt;strong&gt;Iron Bank&lt;/strong&gt; Centralized Artifacts Repository is a trusted repository management system to store U.S. Department of Defense artifacts. All artifacts stored in Iron Bank are hardened according to the Container Hardening Guide. DoD containers are OCI compliant and also have DoD-wide reciprocity across classifications.&lt;/p&gt;

&lt;p&gt;Some of the main points that reduce the container's attack surface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image building must start with a trusted image with known content from a trusted source.
&lt;img src="https://community.ops.io/images/F1A6RZFEND5V34_orMQEgDz4pfSSvhOIin_UbHOy-KU/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvOW81/bjZiNHV2MmFxajZy/dDFveXoucG5n" alt="container_Hardening_Guide" width="616" height="130"&gt;
&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;distroless&lt;/strong&gt; images (which contain only application and its runtime dependencies, and don't include package managers/shells or any other programs you would expect to find in a standard Linux distribution). All &lt;strong&gt;distroless&lt;/strong&gt; images are signed by &lt;a href="https://github.com/sigstore/cosign"&gt;cosign&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The container image must be built with a non-privileged user in the build file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without further ado, let's give it a try. Platform One provides two front-ends when it comes to user experience:&lt;/p&gt;

&lt;p&gt;1) Iron Bank DCAR&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/RcPJSH8BQuQTp26uRViexZuxn1jQsXtMwWr2U3BvcTY/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMva2Fj/bXprYmc0NjN6Y3Yx/a3ZvbTEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/RcPJSH8BQuQTp26uRViexZuxn1jQsXtMwWr2U3BvcTY/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMva2Fj/bXprYmc0NjN6Y3Yx/a3ZvbTEucG5n" alt="IronBank repo" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) &lt;a href="https://goharbor.io/"&gt;Harbor&lt;/a&gt; instance registry&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/yPjWMGShPX0JLkqzgdalp-Ym3txF6PGGb-5YcewM7f4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvZTlp/N2x4bjRrNDlzOGY2/NWw5MGEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/yPjWMGShPX0JLkqzgdalp-Ym3txF6PGGb-5YcewM7f4/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvZTlp/N2x4bjRrNDlzOGY2/NWw5MGEucG5n" alt="Harbor" width="745" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most used base images is Alpine Linux, a "security-oriented, lightweight Linux distribution based on musl libc and busybox". Next, I'll search for the &lt;a href="https://docs.docker.com/docker-hub/official_images/"&gt;official image&lt;/a&gt;:&lt;br&gt;
&lt;code&gt;docker search --format "{{.Name}}: {{.StarCount}}: {{.IsOfficial}}" alpine&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/1SyF_gerXW61K4E0oNfyWh6fwNaNUdTnzwFeO5_ZPD0/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvN2xt/bXdmYzl4a2c4bm5m/anl6dTYucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/1SyF_gerXW61K4E0oNfyWh6fwNaNUdTnzwFeO5_ZPD0/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvN2xt/bXdmYzl4a2c4bm5m/anl6dTYucG5n" alt="docker alpine" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Docker Official Images are a curated set of Docker repositories hosted on Docker Hub. I'm going to use &lt;a href="https://snyk.io/"&gt;Snyk&lt;/a&gt; to scan the &lt;code&gt;alpine&lt;/code&gt; image.&lt;/p&gt;
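
&lt;p&gt;With the Snyk CLI installed and authenticated, the scan boils down to:&lt;/p&gt;

```shell
# Scan the image for known vulnerabilities (requires `snyk auth` beforehand)
snyk container test alpine:latest
```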

&lt;p&gt;&lt;a href="https://community.ops.io/images/YgwSauH1_cz47Ur7AuI8hBYcKV9NY7f_4nDMfyzK1-0/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbXA5/NjRzcjBmbnhscHRo/cDcwYW0ucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/YgwSauH1_cz47Ur7AuI8hBYcKV9NY7f_4nDMfyzK1-0/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbXA5/NjRzcjBmbnhscHRo/cDcwYW0ucG5n" alt="Snyk scan" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scan found 18 issues, most of which are low-severity vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/5gHV3hMuYFa9NHPZnhrIuvdrMArvsGtGSTgLafvbWNA/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNmVs/anByZ29jNnpsMmp3/dHZucTAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/5gHV3hMuYFa9NHPZnhrIuvdrMArvsGtGSTgLafvbWNA/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNmVs/anByZ29jNnpsMmp3/dHZucTAucG5n" alt="Vulnerabilites" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I'll scan Iron Bank's &lt;code&gt;alpine&lt;/code&gt; image; according to Snyk's &lt;a href="https://security.snyk.io/"&gt;vulnerability database&lt;/a&gt;, the result of the scan showed that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;you are currently using the most secure version of the selected base image&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/yGqMEzYYkywnOMpByNgPsSVjkaKS791kX1RwQaMX6rM/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbmI4/NHhvY3JjYXQ2Nnoy/Z2M1eHcucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/yGqMEzYYkywnOMpByNgPsSVjkaKS791kX1RwQaMX6rM/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvbmI4/NHhvY3JjYXQ2Nnoy/Z2M1eHcucG5n" alt="IronBank alpine image" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, I highly recommend the Iron Bank container registry, since it is one of the central registries used by the U.S. Department of Defense and is maintained by the &lt;a href="https://p1.dso.mil/"&gt;Platform One team&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/U-2xIckgY5o9A_QDt-22Dneq9NTmaHpewn_nG5-NaMc/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNHJo/MTNvcmpqNzV3ZW4w/bHdqdzEucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/U-2xIckgY5o9A_QDt-22Dneq9NTmaHpewn_nG5-NaMc/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvNHJo/MTNvcmpqNzV3ZW4w/bHdqdzEucG5n" alt="USA" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>secops</category>
      <category>tutorials</category>
      <category>containers</category>
    </item>
    <item>
      <title>Azure Kubernetes Service — behind the scenes</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Mon, 22 Aug 2022 08:50:00 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/azure-kubernetes-service-behind-the-scenes-1c6j</link>
      <guid>https://community.ops.io/dejanualex/azure-kubernetes-service-behind-the-scenes-1c6j</guid>
      <description>&lt;p&gt;If you’re using the managed Kubernetes offering from Microsoft also known as AKS you’re probably interested in the following.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AKS control plane&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right off the bat, if you run &lt;code&gt;kubectl get nodes&lt;/code&gt; you're not going to see any &lt;code&gt;control-plane,master&lt;/code&gt; nodes, because the Azure platform manages the AKS control plane, and you only pay for the AKS worker nodes that run your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/t90m7EqLiuI1t9TAdkOI4RP7ATkxffWkwmI0QrUipyI/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3V5aGpx/eWszNzFseXA3NDl3/ajF2LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/t90m7EqLiuI1t9TAdkOI4RP7ATkxffWkwmI0QrUipyI/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL3V5aGpx/eWszNzFseXA3NDl3/ajF2LnBuZw" alt="AKS cluster" width="504" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AKS provides a &lt;a href="https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads"&gt;single-tenant control plane&lt;/a&gt;, and its resources reside only in the region where you created the cluster. To limit access to cluster resources, you can configure the role-based access control (RBAC) (which is enabled by default), and optionally integrate your cluster with Azure AD.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AKS worker nodes, virtual machines and node pools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you create an AKS cluster, Azure sets up the Kubernetes control plane for you and creates one or more VMSS (virtual machine scale sets) as worker nodes in your subscription. Alternatively, you can pay for a control plane that's backed by an SLA.&lt;/p&gt;

&lt;p&gt;Azure virtual machine scale sets let you create and manage a group of load-balanced virtual machines. The VMSS can only manage virtual machines that are implicitly created based on the same virtual machine configuration model.&lt;/p&gt;

&lt;p&gt;According to your workloads, you can choose different virtual machines &lt;a href="https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/"&gt;flavors&lt;/a&gt;, some of them are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A-series (for entry-level workload best suited for dev/test environments)&lt;/li&gt;
&lt;li&gt;B-series (economical &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-b-series-burstable"&gt;burstable &lt;/a&gt;VMs)&lt;/li&gt;
&lt;li&gt;D-series (general purpose compute)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Going further in Azure Kubernetes Service, nodes of the same configuration are grouped together into node pools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/scykInkmftAxrEXToXBrblKpvNlAe8wdknawUfc5T64/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2MwMGxi/cnkwMzc0Mm9zd3Ft/dTd5LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/scykInkmftAxrEXToXBrblKpvNlAe8wdknawUfc5T64/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2MwMGxi/cnkwMzc0Mm9zd3Ft/dTd5LnBuZw" alt="node pool" width="700" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Existing node pools can be scaled, upgraded, or deleted if needed, but they &lt;em&gt;don't support resizing in place&lt;/em&gt;. Meaning vertical scaling (changing the VM "flavor"/size) is not supported; the alternative is to create a new node pool with the desired VM size and either delete the previous one or just use multiple node pools.&lt;/p&gt;

&lt;p&gt;Another interesting piece of information is that in Azure Kubernetes Service &lt;a href="https://containerd.io/"&gt;containerd &lt;/a&gt;has become the default container runtime starting with Kubernetes 1.19.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/7Ev6caDPYgJ39IYiGEncguRJ3WBfgNZxbGtSoH1mTKQ/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzlrdWQ5/dGw3Nm01cWJlaTZ3/Y3JmLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/7Ev6caDPYgJ39IYiGEncguRJ3WBfgNZxbGtSoH1mTKQ/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzlrdWQ5/dGw3Nm01cWJlaTZ3/Y3JmLnBuZw" alt="container runtime" width="700" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AKS networking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, AKS clusters use &lt;a href="https://docs.microsoft.com/en-us/azure/aks/concepts-network#kubenet-basic-networking"&gt;kubenet &lt;/a&gt;networking option, and a virtual network and subnet are created for you.&lt;/p&gt;

&lt;p&gt;The cluster nodes get an IP address from a subnet in a Virtual Network. The pods running on those nodes get an IP address from an overlay network, which uses a different address space from the nodes. Pod-to-pod networking is enabled by NAT (Network Address Translation). The benefit of &lt;a href="https://docs.microsoft.com/en-us/azure/aks/concepts-network#kubenet-basic-networking"&gt;kubenet &lt;/a&gt;is that only nodes consume an IP address from the cluster subnet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AKS deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are multiple ways to create an Azure Kubernetes Service cluster: the Azure Portal, the Azure CLI, Azure Resource Manager templates, or even Terraform.&lt;/p&gt;

&lt;p&gt;To deploy an AKS cluster you must have a resource group to manage the resources consumed by the cluster.&lt;/p&gt;
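
&lt;p&gt;As a sketch with the Azure CLI (all resource names below are placeholders):&lt;/p&gt;

```shell
# Create the resource group that will hold the AKS service resource
az group create --name myResourceGroup --location westeurope
# Create a two-node cluster and fetch credentials for kubectl
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```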

&lt;p&gt;An important aspect is that an AKS deployment spans two resource groups: the first contains only the Kubernetes service resource, and the second (the node resource group, whose name starts with MC_) contains all of the infrastructure resources associated with the cluster (Kubernetes node VMSS, virtual networking, and storage).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/M8Q1cYhtVpahibBrg1UvyyE0jm01xVfkVFZNj3euN9I/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y5MDQ4/MDlzZWwxb3N1bm9p/MWpkLnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/M8Q1cYhtVpahibBrg1UvyyE0jm01xVfkVFZNj3euN9I/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzL2Y5MDQ4/MDlzZWwxb3N1bm9p/MWpkLnBuZw" alt="Resource groups" width="392" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, AKS simplifies deploying a managed Kubernetes cluster by offloading the operational tasks to Azure, but you always should be aware of the tradeoffs that come with it. If you liked this article you can also read about 🔎 &lt;a href="https://faun.pub/probing-kubernetes-architecture-c600a95d25b2"&gt;probing Kubernetes architecture&lt;/a&gt; or even 📚 a &lt;a href="https://faun.pub/tale-of-a-kubernetes-upgrade-7a08e5d5528a"&gt;tale of a Kubernetes upgrade&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudops</category>
      <category>azure</category>
    </item>
    <item>
      <title>Kubernetes disaster recovery</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Thu, 11 Aug 2022 12:17:00 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/kubernetes-disaster-recovery-2ka1</link>
      <guid>https://community.ops.io/dejanualex/kubernetes-disaster-recovery-2ka1</guid>
      <description>&lt;p&gt;Disaster recovery is not an easy job, because applications are not deployed on a "fixed" server/machine and also Kubernetes workloads cannot be backed up in a traditional manner.&lt;/p&gt;

&lt;p&gt;We're going to talk about how we can maintain/regain the use of stateful components after a disaster occurs. By disaster we mean a non-graceful shutdown of Kubernetes nodes. If you're not familiar with Kubernetes architecture, check &lt;a href="https://dejanualexandru.medium.com/probing-kubernetes-architecture-c600a95d25b2"&gt;Probing K8s Architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Non-graceful shutdown?&lt;/strong&gt;&lt;/em&gt; &lt;br&gt;
 The situation in which the node shutdown action takes place without the kubelet daemon (on each worker node) knowing about it, and as a result the pods on that node also shut down ungracefully.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;stateless&lt;/strong&gt; workloads that's often not a problem, but for &lt;strong&gt;stateful&lt;/strong&gt; ones (e.g. those controlled by a StatefulSet) it means the pods on that specific node also shut down ungracefully. &lt;/p&gt;

&lt;p&gt;One example of stateful workloads is applications that need persistence across Pod (re)scheduling, a.k.a. persistent storage, e.g.: &lt;a href="https://opensearch.org/"&gt;OpenSearch&lt;/a&gt;, &lt;a href="https://www.rabbitmq.com/"&gt;RabbitMQ&lt;/a&gt;, &lt;a href="https://redis.io/"&gt;Redis&lt;/a&gt;, &lt;a href="https://goharbor.io/"&gt;Harbor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So we can end up in a situation where a Pod is stuck indefinitely in &lt;code&gt;Terminating&lt;/code&gt; and the StatefulSet cannot create a replacement, because the pod still exists in the cluster (perhaps on a failed node); as a result, the application backed by the StatefulSet may be degraded or even offline.&lt;/p&gt;

&lt;p&gt;Luckily for us, Kubernetes v1.24 introduces alpha support for Non-Graceful Node Shutdown. This feature allows stateful workloads to fail over to a different node after the original node is shut down or in a non-recoverable state, such as a hardware failure or a broken OS. Moreover, the Kubernetes team plans to promote the Non-Graceful Node Shutdown implementation to Beta in either 1.25 or 1.26.&lt;/p&gt;

&lt;p&gt;In this particular case, we have the &lt;code&gt;NodeOutOfServiceVolumeDetach&lt;/code&gt; feature, which is currently in &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages"&gt;Alpha&lt;/a&gt; (meaning it is disabled by default). To use it, we need to enable it via the &lt;code&gt;--feature-gates&lt;/code&gt; command-line flag on the &lt;strong&gt;kube-controller-manager&lt;/strong&gt; component, which runs in the &lt;code&gt;kube-system&lt;/code&gt; namespace: &lt;code&gt;kubectl get pods -n kube-system | grep controller-manager&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note: after enabling the aforementioned feature, and only after the node has been powered off, we can apply the out-of-service taint: &lt;code&gt;kubectl taint nodes &amp;lt;node-name&amp;gt; node.kubernetes.io/out-of-service=nodeshutdown:NoExecute&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As a result, the PersistentVolumes attached to the shutdown node will be detached, and for StatefulSets, replacement pods will be created successfully on a different running node. Last but not least, we need to manually remove the &lt;code&gt;out-of-service&lt;/code&gt; taint after the pods have moved to a new node and we have verified that the shutdown node has recovered, since the taint was added manually in the first place.&lt;/p&gt;
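&lt;p&gt;The recovery steps above can be sketched as a short command sequence (the node name &lt;code&gt;node01&lt;/code&gt; is illustrative; these commands assume a cluster with the feature gate already enabled):&lt;/p&gt;

```shell
# 1. Confirm the node is down and pods are stuck in Terminating
kubectl get nodes
kubectl get pods --all-namespaces -o wide | grep Terminating

# 2. With the node powered off, apply the out-of-service taint
kubectl taint nodes node01 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

# 3. Watch the StatefulSet create replacement pods on a healthy node
kubectl get pods -o wide --watch

# 4. Once the node has recovered, remove the taint manually
#    (same key/value/effect with a trailing dash)
kubectl taint nodes node01 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```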


</description>
      <category>cloudops</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Ingress in one minute</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Thu, 30 Jun 2022 09:56:32 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/ingress-in-one-minute-3i00</link>
      <guid>https://community.ops.io/dejanualex/ingress-in-one-minute-3i00</guid>
      <description>&lt;p&gt;Once your services are up and running in Kubernetes, you most probably want to &lt;strong&gt;get external traffic into your cluster&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
This can be achieved using the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt; Kubernetes object OR &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In practice, this means you need an Ingress &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;controller&lt;/a&gt; and some Ingress rules (routing rules that manage external users' access to the services in the cluster), which are passed to the controller.&lt;/p&gt;

&lt;p&gt;⚠️ You must have an Ingress controller to satisfy an Ingress; creating only the Ingress resource has no effect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/427DopRFCZN5SAiL02SNXN_jalcmzWOwWHdb1QL0EXg/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzRjeGdq/cTJ6aWhtcm9lbGww/Y205LnBuZw" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/427DopRFCZN5SAiL02SNXN_jalcmzWOwWHdb1QL0EXg/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9jb21t/dW5pdHkub3BzLmlv/L3JlbW90ZWltYWdl/cy91cGxvYWRzL2Fy/dGljbGVzLzRjeGdq/cTJ6aWhtcm9lbGww/Y205LnBuZw" alt="Image description" width="772" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ingress controllers come in various flavors with different capabilities: Istio, Google Cloud Load Balancer, Nginx, Contour... more Ingress controllers are listed &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;here&lt;/a&gt;.&lt;/p&gt;
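&lt;p&gt;A minimal Ingress rule might look like the following sketch (the host, path, and Service name &lt;code&gt;web-svc&lt;/code&gt; are illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc   # existing Service to route traffic to
            port:
              number: 80
```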

&lt;p&gt;In conclusion, Ingress represents a single entry point to your cluster that &lt;strong&gt;routes&lt;/strong&gt; requests to different services and acts as a &lt;strong&gt;reverse proxy&lt;/strong&gt;. In the real world, the request usually "hits" a service mesh via Ingress, a.k.a. Ingress through Istio gateways 🤯.&lt;/p&gt;

</description>
      <category>ingress</category>
      <category>kubernetes</category>
      <category>tutorials</category>
      <category>devops</category>
    </item>
    <item>
      <title>kubectl context like a pro</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Fri, 03 Jun 2022 10:32:07 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/kubectl-context-like-a-pro-2692</link>
      <guid>https://community.ops.io/dejanualex/kubectl-context-like-a-pro-2692</guid>
      <description>&lt;p&gt;If your work involves working with multiple Kubernetes clusters, then this might be for you.&lt;/p&gt;

&lt;p&gt;☸️ The file used to configure access to a cluster is called &lt;a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="noopener noreferrer"&gt;kubeconfig&lt;/a&gt; (usually placed at &lt;code&gt;~/.kube/config&lt;/code&gt;), but you can easily override the location with the &lt;code&gt;--kubeconfig=&amp;lt;path_to_config&amp;gt;&lt;/code&gt; flag or with the &lt;code&gt;KUBECONFIG&lt;/code&gt; environment variable: &lt;code&gt;export KUBECONFIG=&amp;lt;path/to/your_kubeconfig&amp;gt;&lt;/code&gt;. A Kubernetes config file describes &lt;strong&gt;clusters&lt;/strong&gt;, &lt;strong&gt;users&lt;/strong&gt;, and &lt;strong&gt;contexts&lt;/strong&gt;. You can use multiple contexts to target different Kubernetes clusters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Config&lt;/span&gt;
&lt;span class="na"&gt;preferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

&lt;span class="na"&gt;clusters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;cluster_name&amp;gt;&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;

&lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;user_name&amp;gt;&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;

&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;context_name&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You can render your current config: &lt;code&gt;kubectl config view&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without further ado, let's take the example of merging two config files (one for each cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create backup for current config
cp ~/.kube/config ~/.kube/config.bak

# merge the 2 configs
KUBECONFIG=~/.kube/config:/path/to/new/config kubectl config view --flatten &amp;gt; ~/intermediate_config

# replace the current config with the intermediate_config
mv ~/intermediate_config ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;strong&gt;context&lt;/strong&gt; is a combination of a cluster and user credentials. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;List contexts: &lt;code&gt;kubectl config get-contexts&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the current context: &lt;code&gt;kubectl config current-context&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Switch to desired context: &lt;code&gt;kubectl config use-context &amp;lt;context_name&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check defined clusters: &lt;code&gt;kubectl config get-clusters&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
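&lt;p&gt;You can also pin a default namespace to the current context, which saves typing &lt;code&gt;-n&lt;/code&gt; on every command (the namespace &lt;code&gt;team-a&lt;/code&gt; is illustrative):&lt;/p&gt;

```shell
# set a default namespace for the active context
kubectl config set-context --current --namespace=team-a

# verify: check the NAMESPACE column for the current context
kubectl config get-contexts
```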

&lt;p&gt;⚠️ Running &lt;code&gt;aws eks update-kubeconfig --region &amp;lt;region&amp;gt; --name &amp;lt;cluster_name&amp;gt;&lt;/code&gt; pulls your cluster's API endpoint and CA data to create/update a context in your local &lt;code&gt;kubeconfig&lt;/code&gt;. It also sets that cluster as your current context, so any subsequent kubectl commands will execute against it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws eks update-kubeconfig &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--name&lt;/span&gt; demo-eks
...
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config current-context
arn:aws:eks:us-east-1:255656399702:cluster/demo-eks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>kubectl</category>
      <category>tutorials</category>
      <category>devops</category>
    </item>
    <item>
      <title>kubectl output like a pro</title>
      <dc:creator>Alexandru Dejanu</dc:creator>
      <pubDate>Mon, 30 May 2022 11:09:09 +0000</pubDate>
      <link>https://community.ops.io/dejanualex/kubectl-output-like-a-pro-509b</link>
      <guid>https://community.ops.io/dejanualex/kubectl-output-like-a-pro-509b</guid>
      <description>&lt;p&gt;If your work involves &lt;a href="https://kubernetes.io/docs/reference/kubectl/"&gt;kubectl&lt;/a&gt;, or even wrapping &lt;a href="https://kubernetes.io/docs/reference/kubectl/"&gt;kubectl&lt;/a&gt; commands in bash scripts, then this might be for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.ops.io/images/KhlTRFsgz3ZQN-Rkhtb5mhFCylYcqMa4tN-nUB8W4h8/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvaHB1/dWlsMWljYjhsMXF5/dTRvcDAucG5n" class="article-body-image-wrapper"&gt;&lt;img src="https://community.ops.io/images/KhlTRFsgz3ZQN-Rkhtb5mhFCylYcqMa4tN-nUB8W4h8/rt:fit/w:800/g:sm/q:0/mb:500000/ar:1/aHR0cHM6Ly9kZXYt/dG8tdXBsb2Fkcy5z/My5hbWF6b25hd3Mu/Y29tL3VwbG9hZHMv/YXJ0aWNsZXMvaHB1/dWlsMWljYjhsMXF5/dTRvcDAucG5n" alt="Image description" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's say you want to get only the pods' &lt;strong&gt;names&lt;/strong&gt; across &lt;strong&gt;all namespaces&lt;/strong&gt;. You can use tools like AWK or grep, but you can also customize kubectl's output directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt; name &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty simple, and it looks like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;pod/harbor-jobservice-5f56f77c85-75fzj&lt;br&gt;
pod/harbor-nginx-858bb8887f-8grwb&lt;br&gt;
pod/harbor-portal-59ff88c985-2k878&lt;br&gt;
pod/harbor-redis-0&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you don't like the fact that pod names are prefixed with &lt;code&gt;pod/&lt;/code&gt;...say no more:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;harbor-jobservice-5f56f77c85-75fzj&lt;br&gt;
harbor-nginx-858bb8887f-8grwb&lt;br&gt;
harbor-portal-59ff88c985-2k878&lt;br&gt;
harbor-redis-0&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It would be nice, though, to know the namespace in which each pod is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; go-template&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{range .items}} --&amp;gt; {{.metadata.name}} in namespace: {{.metadata.namespace}}{{"\n"}}{{end}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;--&amp;gt; harbor-jobservice-5f56f77c85-75fzj in namespace: harbor&lt;br&gt;
 --&amp;gt; harbor-nginx-858bb8887f-8grwb in namespace: harbor&lt;br&gt;
 --&amp;gt; harbor-portal-59ff88c985-2k878 in namespace: harbor&lt;br&gt;
 --&amp;gt; harbor-redis-0 in namespace: harbor&lt;/p&gt;
&lt;/blockquote&gt;
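&lt;p&gt;A similar result can be produced with &lt;code&gt;jsonpath&lt;/code&gt;, kubectl's other built-in templating option:&lt;/p&gt;

```shell
# print namespace/name for every pod, one per line
kubectl get po -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'
```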

&lt;p&gt;That's about it. A similar approach also works for &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;; check this &lt;a href="https://faun.pub/docker-101-format-command-output-8e2673f083af"&gt;article&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kubectl</category>
      <category>docker</category>
      <category>tutorials</category>
    </item>
  </channel>
</rss>
