A source of some interesting discussions at work is whether or not Kubernetes nodes should be virtualized. The argument against doing so is that a virtualization layer adds needless complexity to your infrastructure stack for what superficially appears to be no added value. Note that the concepts discussed in this post also apply to OpenShift.
Pros For Kubernetes On Virtualized Infrastructure
- Unified Infrastructure Control and Administration Plane
If your organization has made a heavy investment in software-defined infrastructure via VMware or Hyper-V, virtualizing Kubernetes cluster nodes is compelling and fosters a unified control/management infrastructure plane.
- Increased Security via Defense in Depth
When it comes to security, the more defensive layers you have, the harder it is for any would-be attacker to penetrate your infrastructure. Virtualizing Kubernetes/OpenShift nodes adds another layer to your defense in depth.
Addendum
A full discussion of Kubernetes security is beyond the scope of this blog post. However, the MITRE ATT&CK framework provides a comprehensive matrix of security attack patterns, and Microsoft has produced a similar style of matrix covering Kubernetes in this blog. As the blog notes, resource hijacking and lateral movement have ramifications for multi-tenant platforms and for Kubernetes application delivery techniques such as GitOps – where you may have one Kubernetes cluster per code branch. Putting nodes in their own virtual machines provides an extra layer of defense that can reduce the impact of pods that become malicious as the result of an attack. VMware vSphere 7.0 (more on this later) takes this concept further by running each pod in its own lightweight virtual machine.
- Ease and Speed Of Provisioning
Tools such as PowerCLI (the PowerShell module for VMware) or Terraform make spinning up virtualized infrastructure far faster than spinning up physical tin. Provisioning virtualized nodes via Terraform and then deploying Kubernetes on top of them via Kubespray is an incredibly popular way of carrying out end-to-end Kubernetes deployments (see the first sketch after this list).
- Accommodating Master Nodes
Master nodes – control plane nodes, to use OpenShift parlance – are incredibly lightweight in terms of memory and CPU requirements, to the extent that it's difficult to purchase servers small enough to accommodate such nodes on their own. In my lab, master nodes use as little as 4 GB of memory and 2 logical CPUs. A master node can be created on a physical server and its control-plane taint removed so that user pods can run on it (see the second sketch after this list) – but this is not a recommended practice. In this specific scenario it makes perfect sense to virtualize master/control plane nodes.
- Worker Node Size Performance Implications
An excessive number of pods running on a worker node can put it into a non-ready state – the kubelet's health checks take too long to complete because of the number of containers it has to iterate through. For this very reason, a maximum of 100 pods per worker node is recommended. Consider organizations that standardize on a single server size across their infrastructure – four-socket, forty-core servers, for example – and note that with the latest generation of processors, such as AMD's Epyc, it is now possible to get more than forty cores in a single CPU socket. To avoid nodes experiencing excessive non-ready events, it might be desirable to carve physical servers up into smaller servers via virtualization (see the third sketch after this list). This blog post goes into the implications of node size in greater depth.
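As a rough illustration of the provisioning workflow mentioned in the "Ease and Speed Of Provisioning" item, the minimal sketch below chains Terraform and Kubespray together from Python. The directory layout and inventory path are hypothetical, and it assumes Terraform, Ansible and a Kubespray checkout are already in place – treat it as an outline of the flow rather than a ready-made pipeline.

```python
import subprocess

# Hypothetical locations - adjust to your own layout.
TERRAFORM_DIR = "infra/terraform"             # defines the node VMs (vSphere, Hyper-V, ...)
KUBESPRAY_DIR = "kubespray"                   # a checkout of the Kubespray repository
INVENTORY = "inventory/mycluster/hosts.yaml"  # inventory pointing at the Terraform-created nodes

# 1. Create the virtual machines that will become cluster nodes.
subprocess.run(["terraform", "init"], cwd=TERRAFORM_DIR, check=True)
subprocess.run(["terraform", "apply", "-auto-approve"], cwd=TERRAFORM_DIR, check=True)

# 2. Lay Kubernetes down on top of the freshly created nodes.
subprocess.run(
    ["ansible-playbook", "-i", INVENTORY, "--become", "cluster.yml"],
    cwd=KUBESPRAY_DIR,
    check=True,
)
```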
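For the master node scenario in "Accommodating Master Nodes", the sketch below uses the official Kubernetes Python client to strip the control-plane NoSchedule taint from a node so that user pods can be scheduled on it. The node name is hypothetical, older clusters use the node-role.kubernetes.io/master key instead, and – as noted above – doing this in production is not recommended.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig
v1 = client.CoreV1Api()

NODE_NAME = "master-0"  # hypothetical node name

node = v1.read_node(NODE_NAME)
taints = node.spec.taints or []

# Keep every taint except the control-plane NoSchedule one
# (the key is node-role.kubernetes.io/master on older clusters).
remaining = [t for t in taints if t.key != "node-role.kubernetes.io/control-plane"]
v1.patch_node(NODE_NAME, {"spec": {"taints": remaining}})
```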
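Finally, to keep an eye on the per-node pod count discussed in "Worker Node Size Performance Implications", here is a minimal sketch, again using the Kubernetes Python client, that tallies pods per node and flags anything above the 100-pod ceiling quoted above. It assumes a working kubeconfig.

```python
from collections import Counter
from kubernetes import client, config

MAX_PODS_PER_NODE = 100  # the recommended ceiling discussed above

config.load_kube_config()  # assumes a working kubeconfig
v1 = client.CoreV1Api()

# Tally pods by the node they are scheduled on.
pods = v1.list_pod_for_all_namespaces(watch=False)
per_node = Counter(p.spec.node_name for p in pods.items if p.spec.node_name)

for node_name, count in sorted(per_node.items()):
    warning = "  <-- over the recommended limit" if count > MAX_PODS_PER_NODE else ""
    print(f"{node_name}: {count} pods{warning}")
```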
Cons For Kubernetes On Virtualized Infrastructure
- Performance Overhead
Virtualized infrastructure adds a minor performance overhead to any application that runs on top of it. To quantify "minor" – this is in the couple-of-percent range and is only really a concern for organizations that need to extract every last drop of performance out of their infrastructure – think low-latency trading platforms, online gaming platforms and the like.
- Management Overhead
This is probably not a concern for anyone who already has a virtualized infrastructure, but it might be for organizations building an infrastructure from scratch. Simply put, more layers in your infrastructure means more layers to manage.
- Cost
There is a cost associated with licensing commercial, enterprise-grade virtualization software; this may (or may not) be an issue for your organization.
So What Is The Correct Answer?
This really depends on where you are starting from. If your organization already uses virtualized infrastructure, it makes a certain amount of sense to virtualize your Kubernetes cluster nodes; the same applies if you want to spin something up quickly to dip your toes in the water. For a green-field build at an organization that has not made a significant investment in virtualization software, deploying worker nodes on bare metal and virtualizing the master/control plane nodes via KVM or Hyper-V might be the way to go in order to keep virtualization costs down.
. . . And Along Came VMware Project Pacific
VMware vSphere 7.0 ushers in the ability to create "Tanzu Kubernetes Grid" clusters (Kubernetes clusters) directly on top of the VMware hypervisor. Although cluster nodes are still underpinned by virtual machines, this eases the management overhead of running a separate virtualization layer between your bare metal and your Kubernetes clusters.