
Javier Marasco

Managed cluster vs unmanaged clusters in Kubernetes

As we saw in the previous article, a Kubernetes cluster contains quite a few components just to function properly, and as you can imagine, installing, maintaining, updating, upgrading, and operating such an infrastructure can be daunting.

What does it look like to build a Kubernetes cluster?

As discussed, there are two kinds of nodes in our cluster: the master nodes, also known as control nodes, and the worker nodes. Each of these nodes runs different components that support the cluster. A minimum configuration has one control node and one worker node, which can be physical or virtual machines connected by a network.
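To make this concrete, here is a minimal sketch of bootstrapping such a two-node cluster with kubeadm (the upstream bootstrapping tool); the IP address, token, and hash below are placeholders you would get from your own environment:

```bash
# On the control node: initialize the control plane
# (the pod network CIDR depends on the CNI plugin you pick)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On the worker node: join the cluster using the token and CA hash
# that "kubeadm init" prints (placeholder values shown here)
sudo kubeadm join 192.168.1.10:6443 \
  --token <token-from-init> \
  --discovery-token-ca-cert-hash sha256:<hash-from-init>
```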

The control nodes run the API server, an etcd database, the scheduler, and some controllers, while the worker nodes run only three components: the kubelet, kube-proxy, and the container runtime.
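On a kubeadm-built cluster you can actually see most of these components, since they run as pods in the kube-system namespace (the node name suffixes below are placeholders):

```bash
# List the cluster components running as pods
kubectl get pods -n kube-system
# Typical entries include:
#   etcd-master-1                     (the etcd database)
#   kube-apiserver-master-1           (the API server)
#   kube-scheduler-master-1           (the scheduler)
#   kube-controller-manager-master-1  (the controllers)
#   kube-proxy-xxxxx                  (one per node)

# The kubelet and the container runtime run directly on each node
# as system services, so they do not show up as pods:
systemctl status kubelet
```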

This might sound simple, right? But then you need to configure all those components, and you also need to set up the networking those nodes use to communicate. On the networking side there is one additional thing: you will (eventually) need to expose your applications to the internet using an ingress controller, which requires an external IP exposed to the internet through which all incoming traffic enters your cluster. That means you will also need to administer the creation (and maintenance) of those public IPs.
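As a rough sketch of what that exposure involves, a Service of type LoadBalancer (which an ingress controller itself typically sits behind) asks the environment for an external IP; the app name and ports here are hypothetical:

```bash
# Expose a hypothetical "my-app" deployment outside the cluster.
# A cloud provider provisions the public IP for you; on your own
# cluster you must supply and manage that IP/load balancer yourself.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF

# Watch for the external IP (it stays <pending> if nothing provides one)
kubectl get service my-app --watch
```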

This becomes a complex task pretty quickly. But if it is so complicated, why would anyone want to build a cluster for themselves? Well, the answer is pretty simple: security. When you build your own cluster, you keep control of everything: patching, kernel versions, OS versions, packages on the nodes; everything can be customized to your needs. It might sound like overkill, but think of heavily regulated industries like the financial sector, where you want control over every piece of your infrastructure; you don't want a bug in a library running in the OS of your worker node to let malicious actors exploit it and gain root in one of your pods.

Managed clusters and the cloud providers offerings

So, suppose we are not working in a highly regulated industry, but we do need one or more clusters and we don't want to spend months learning how to build a cluster and then have to maintain it manually. What can we do?

Luckily, each cloud provider (at least the most popular ones) offers automated ways to build a cluster for you and gives you access to it so you can simply deploy resources into it. This is very convenient for most scenarios where a Kubernetes cluster is needed, even for production workloads. These providers normally offer multiple SLA levels for their clusters: if you are running a development cluster you can opt for a lower SLA and cheaper billing, while for production workloads you can opt for a higher SLA that includes highly available resources but also costs more. You can choose what is best for your case and environment.
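To illustrate how little work this is compared to building everything yourself, creating a managed cluster is usually one CLI call. A sketch for AKS and EKS, where the resource group, names, and node counts are placeholders (the exact SLA-tier flags vary by provider and CLI version):

```bash
# Azure AKS: a small development cluster
az aks create \
  --resource-group my-rg \
  --name dev-cluster \
  --node-count 2
# (a paid uptime-SLA tier can be chosen at creation time for
#  production-grade clusters; the flag names vary by CLI version)

# AWS EKS, via eksctl
eksctl create cluster --name dev-cluster --nodes 2
```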

Those managed clusters have a clear downside: the cloud provider retains control of the control plane and its control nodes, meaning you can't decide how to configure it. And even though it is possible to adjust and tune your worker nodes (apps, libraries, even kernel configuration), as soon as you modify them you lose support from the cloud provider. This might sound like a bad move from the providers, but think about what it would mean for them to support every possible change made to any cluster in the world; it would be impossible to provide proper support, so they stick to a proven configuration and support that. If you move away from it, you are on your own.

Patching and upgrading your cluster

We discussed a lot about building and configuring your cluster, but what about version upgrades? What happens when Kubernetes releases a new version? Here, again, there is a big difference. Cloud providers tend to lag a bit behind the latest release because they need to test it first, pass certain internal validations, and only then expose it for you to consume in a trustworthy manner. This takes time, so if you want to use a shiny new feature from the latest Kubernetes release and you are running a managed cluster, you will need to wait a bit before that version is available for you to pick up, while in an unmanaged cluster you can upgrade whenever you want.
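A sketch of what each upgrade path looks like; the versions and names below are illustrative:

```bash
# Unmanaged (kubeadm): upgrade as soon as upstream releases
sudo kubeadm upgrade plan           # shows the versions you can move to
sudo kubeadm upgrade apply v1.26.0  # upgrades the control plane
# ...then upgrade the kubelet/kubectl packages on each node

# Managed (AKS example): only provider-validated versions are offered
az aks get-upgrades --resource-group my-rg --name dev-cluster
az aks upgrade --resource-group my-rg --name dev-cluster \
  --kubernetes-version 1.26.0
```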

But what happens if something goes wrong with the upgrade? You moved to a newer version and now everything is broken (see the note after this paragraph for a good example). In a managed cluster it is very complicated to revert to an older Kubernetes version; I would say it is impossible (I do know of cases where the cloud provider managed to revert the version in VERY specific situations, but treat it as "something you can't do"). In your unmanaged cluster, by contrast, it is as simple as reinstalling the older version or reverting to a backup snapshot taken before the upgrade (because of course you take backups before upgrading your cluster version), and you are back as if nothing had happened. Very easy.
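On an unmanaged cluster, the "backup before upgrading" part usually means snapshotting etcd. A minimal sketch, assuming the default kubeadm certificate paths (yours may differ):

```bash
# Take an etcd snapshot before upgrading (run on a control node)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-before-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# If the upgrade goes wrong, restore the snapshot to a data directory
# (pointing etcd at the restored directory involves a few more steps)
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-before-upgrade.db \
  --data-dir /var/lib/etcd-restored
```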

A note about this: recently, with the introduction of version 1.25, there was a change related to the cgroups configuration in the kernel of the OS used by Kubernetes (cgroup v2 support was promoted to stable, and newer node images use it). This caused some applications using an old (but still functional) version of their frameworks to start consuming a lot of memory, making the kubelet constantly kill those pods. The simple solution is to upgrade the framework in your app, but if that is not possible you can also revert the change in the worker node's kernel; this last option, however, will leave you unsupported by your cloud provider.
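If you are unsure which cgroup version your nodes run, you can check directly on the node:

```bash
# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1
stat -fc %T /sys/fs/cgroup/
```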

So, what is the takeaway of this article?

Essentially, you now know what managed and unmanaged clusters are, you know the advantages and disadvantages of each, and you also know what it means to have a managed cluster and what you can do with it.

The main point here is to know your situation, know your use case, and know what you expect from Kubernetes and your cluster(s). Will you need to meet very strict standards and provide proof that you have certain configurations? Then you are tied to an unmanaged cluster; you will need to learn and understand the details of Kubernetes and manage everything yourself, which might sound daunting, but with time and training you will be perfectly fine.
If, on the other hand, you will be deploying applications that you know are secure, you don't need to hand configurations to auditors, and you can accept a cloud provider having control over your control plane, then you are more than fine with a managed cluster and you can skip the deep parts of Kubernetes for now (it is always advisable to learn the concepts anyway, but you won't be rushing to learn and understand everything just because you need to build your cluster quickly).

So what are your thoughts? Would you prefer a managed cluster or an unmanaged one? Would you feel comfortable knowing your control plane is not under your control? Let me know in the comments box 👍

Thank you for reading 📚 !
