Three years ago, I wrote a blog post on why you wouldn’t want to run Kubernetes on bare-metal. VMware has released a number of platform enhancements over these years, and there is a lot of updated material and feedback – also coming from customers. So what are (my personal) reasons to run containers and Kubernetes (“K8s” for short) on a virtual infrastructure, and vSphere in particular?
Operations: Running multiple clusters on bare-metal is hard
- Multiple clusters in a virtual environment are a lot easier to run, and each cluster can leverage its own lifecycle policies (e.g. for K8s version upgrades) instead of forcing a single bare-metal cluster to upgrade as a whole. Running multiple Kubernetes versions side-by-side might already be a requirement or become one in the near future.
- It also makes a lot of sense to run Kubernetes side-by-side with your existing VMs instead of building a new hardware silo and adding operational complexity.
- VMware’s compute platform vSphere is the de-facto standard for datacenter workloads in companies across industries, and operational experience and resources are available across the globe. Bare-metal operations typically introduce new risks and operational complexity.
Availability/Resilience and Quality of service: you can plan for failures without compromising density
- Virtual K8s clusters could benefit even in “two physical datacenter” scenarios where the underlying infrastructure is spread across both sites. A “stretched” platform (e.g. vSphere with vSAN Stretched Cluster) allows you to run logical three-node Kubernetes control planes in VMs and protect the control plane and workload nodes using vSphere HA.
- vSphere also allows you to prioritize workloads by configuring policies (networking, storage, compute, memory) that are also enforced during outages (Network I/O Control, Storage I/O Control, Resource Pools, Reservations, Limits, HA Restart Priorities, …) – see the sketch after this list.
- Restart a failed or problematic Kubernetes node VM before Kubernetes itself even detects a problem.
- Provide Kubernetes control plane availability by utilizing vSphere’s mature heartbeat and partition-detection mechanisms to monitor servers, Kubernetes node VMs, and network connectivity, enabling quick recovery.
- Prevent service disruption and performance impacts through proactive failure detection, live migration (vMotion) of VMs, automatic load balancing, restarts after infrastructure failures, and highly available storage.
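To make the policy point a bit more tangible: here is a minimal, hypothetical sketch (Python with pyVmomi) that puts a CPU and memory reservation on a single Kubernetes node VM. The vCenter address, credentials, and VM name (“k8s-worker-01”) are placeholders, and the actual values obviously depend on your sizing – treat it as an illustration, not a recommendation.

```python
# Minimal pyVmomi sketch: reserve CPU/memory for a Kubernetes node VM.
# Assumptions (placeholders): vCenter "vcenter.example.com", user/password,
# and a node VM named "k8s-worker-01".
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab only; verify certs in production
content = si.RetrieveContent()

# Find the node VM by name via a container view over all VMs.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "k8s-worker-01")

# Reserve 4 GHz of CPU and 8 GB of RAM so this node keeps its resources
# even when the underlying hosts come under contention.
spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(reservation=4000),     # MHz
    memoryAllocation=vim.ResourceAllocationInfo(reservation=8192),  # MB
)
vm.ReconfigVM_Task(spec=spec)  # returns a task; wait on it in real code

Disconnect(si)
```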
Resource fragmentation, overhead & capacity management: single-purpose usage of hardware resources vs. multi-purpose platform
- Running Kubernetes clusters virtually and using VMware DRS to balance these clusters across vSphere hosts allows the deployment of multiple K8s clusters on the same hardware setup and increases the utilization of hardware resources.
- When running multiple K8s clusters on dedicated bare-metal hosts, you lose the ability to utilize hardware resources across the entire infrastructure pool.
- Many environments won’t be able to quickly repurpose existing capacity from a bare-metal host in one cluster to another cluster.
- From a vSphere perspective, Kubernetes is yet another set of VMs and capacity management can be done across multiple Kubernetes clusters; it gets more efficient the more clusters you run
- Deep integrations with existing operational tools like vRealize Operations allow operational teams to deliver Kubernetes with confidence
- K8s is only a Day-1 scheduler: it places pods at creation time and does not rebalance running pods based on actual resource usage (see the sketch after this list).
- In case of imbalance on the vSphere layer, vSphere DRS rebalances K8s node VMs across the physical estate to better utilize the underlying cluster and delivers the best of both worlds from a scheduling perspective.
- High availability and “stand-by” systems are cost-intensive in bare-metal deployments, especially in edge scenarios: in order to provide some level of redundancy, spare physical hardware capacity (servers) needs to be available. In the worst case, you need to reserve capacity per cluster, which increases physical overhead (CAPEX and OPEX) per cluster.
- vSphere allows you to share failover capacity, including strict admission control to protect important workloads, across Kubernetes clusters because the VMs can be restarted and reprioritized, e.g. based on the scope of a failure.
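To illustrate the “Day-1 scheduler” point from above: the following small sketch (Python, using the official Kubernetes client library) sums up the CPU requests of the pods on each node. Kubernetes only evaluates these requests at initial placement, so any imbalance you see here stays until pods get rescheduled – while DRS can still rebalance the node VMs underneath. All of this is illustrative and assumes your kubeconfig points at the cluster in question.

```python
# Illustrative sketch using the Kubernetes Python client ("kubernetes" package):
# sum the CPU requests scheduled onto each node. Kubernetes considers these
# requests only at initial pod placement and does not rebalance afterwards.
from collections import defaultdict
from kubernetes import client, config

def cpu_to_millicores(quantity: str) -> int:
    """Convert a CPU quantity such as '500m' or '2' to millicores."""
    return int(quantity[:-1]) if quantity.endswith("m") else int(float(quantity) * 1000)

config.load_kube_config()  # assumes a local kubeconfig for the target cluster
v1 = client.CoreV1Api()

requested = defaultdict(int)
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.spec.node_name is None:
        continue  # pod not scheduled yet
    for container in pod.spec.containers:
        requests = (container.resources.requests or {}) if container.resources else {}
        requested[pod.spec.node_name] += cpu_to_millicores(requests.get("cpu", "0"))

for node, millicores in sorted(requested.items()):
    print(f"{node}: {millicores}m CPU requested")
```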
Single point of integration with the underlying infrastructure
- A programmable, Software-Defined Datacenter: Infrastructure as Code allows you to automate everything on top of an API-driven datacenter stack.
- Persistent storage integration would otherwise need to be built for each underlying storage architecture individually; running K8s on vSphere lets you leverage already abstracted and virtualized storage devices (see the sketch after this list).
- Monitoring of hardware components is specific to individual hardware choices, while vSphere offers an abstracted way of monitoring across different hardware generations and vendors.
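As an example of that single storage integration point: with the vSphere CSI driver installed, a StorageClass pointing at the `csi.vsphere.vmware.com` provisioner is all a cluster needs to consume vSphere-managed storage. The sketch below creates such a StorageClass via the Kubernetes Python client; the class name (“vsphere-gold”) and the storage policy name (“k8s-gold”) are placeholders for your own environment.

```python
# Sketch: create a StorageClass backed by the vSphere CSI driver.
# Assumes the CSI driver is already installed in the cluster; the class name
# "vsphere-gold" and the vSphere storage policy "k8s-gold" are placeholders.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsphere-gold"),
    provisioner="csi.vsphere.vmware.com",          # vSphere CSI provisioner
    parameters={"storagepolicyname": "k8s-gold"},  # maps to a vSphere SPBM storage policy
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)

client.StorageV1Api().create_storage_class(storage_class)
```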
Security & Isolation
- vSphere delivers hardware-level isolation at the Kubernetes cluster, namespace, and even pod level
- VMware infrastructure also enables the pattern of many smaller Kubernetes clusters, providing true multi-tenant isolation with a reduced fault domain. Smaller clusters reduce the blast radius, i.e. any problem with one cluster only affects the pods in that small cluster and won’t impact the broader environment.
- In addition, smaller clusters mean each developer or environment (test, staging, production) can have their own cluster, allowing them to install their own CRDs or operators without risk of adversely affecting other teams.
Credits and further reading
- The content above is mainly a summary of existing materials and my personal observations, based on lots of pre-work, VMworld presentations, and whitepapers from colleagues like Michael Gasch, Frank Denneman, Kit Colbert, Kenny Coleman, Robert Guske, Robbie Jerrom, and many more!
- There has been a ton of material published around this topic recently (including some awesome foundational work by Michael Gasch and his KubeCon talk), so I want to list a few of the public resources here:
- Why Choose VMware Virtualization for Kubernetes and Containers (Blogpost, January 2021)
- vSphere with Tanzu Supports 6.3 Times More Container Pods than Bare Metal (Blogpost, August 2021)
- Full Study/Paper (PDF, August 2021)
- Kubernetes Resource Management for vSphere Admins (Blogpost and VMworld video, November 2019)
- The Value of vSphere in a Kubernetes World (Blogpost)
- Containers on Virtual Machines or Bare-Metal? (Whitepaper)
- Performance of Enterprise Web Applications in Docker Containers on VMware vSphere 6.5 (Blogpost and link to Whitepaper)
- VMs and Containers – Friends or Enemies (Slidedeck by Simone Morellato)
- VMworld 2018: The Value of Running Kubernetes on vSphere (video) (shout out to my friends Michael Gasch and Frank Denneman)
#vK8s – friends don’t let friends run Kubernetes on bare-metal
So, no matter what your favorite Kubernetes framework is these days – I am convinced it runs best on a virtual infrastructure and, of course, even better on vSphere. Friends don’t let friends run Kubernetes on bare-metal. And what hashtag could summarize this better than something short and crisp like #vK8s? I liked this idea so much that I created some “RUN vK8s” images (inspired by my colleagues Frank Denneman and Duncan Epping – guys, it’s been NINE years since RUN DRS!) that I want to share with all of you. You can find the repository on GitHub – feel free to use them wherever you like.