Podcast: VMUG UserCon Germany live

🇩🇪: Ich hatte es bei der VMUG in unterschiedlichen Situationen auch immer wieder gesagt: es gibt eigentlich keinen besseren Zeitpunkt mit dem Erstellen von neuem Content und dem Teilen mit der Community anzufangen, als jetzt. Also habe ich mein portables Podcast Studio (mein Mikrofon) eingepackt und mich in Frankfurt vor Ort ganz spontan mit einigen Besuchern der VMUG UserCon (und TAM Summit) unterhalten und so ein paar live Eindrücke eingesammelt.

Ein herzliches Dankeschön an alle Gesprächspartnerinnen und Gesprächspartner, die spontan und völlig unvorbereitet einfach mitgemacht haben! Das hat echt Spaß gemacht!

Viel Spaß beim Anhören und ich freue mich immer über Feedback!

🇺🇸/🇬🇧: I’ve said it repeatedly in various situations at the VMUG: there’s really no better time to start creating new content and sharing it with the community than now. So, I packed my portable podcast studio (my microphone) and spontaneously spoke with some attendees of the VMUG UserCon (and TAM Summit) on site in Frankfurt to gather a few live impressions.

A heartfelt thank you to all the conversation partners who joined in so spontaneously and completely unprepared! It was really a lot of fun!

Enjoy listening – I'm always happy to receive feedback!

#vK8s 2021 edition – friends don’t let friends run Kubernetes on bare-metal

Three years ago, I wrote a blog post on why you wouldn't want to run Kubernetes on bare-metal. VMware has released a number of platform enhancements since then, and there is a lot of updated material and feedback – also coming from customers. So what are (my personal) reasons to run containers and Kubernetes (short: "K8s") on a virtual infrastructure, and on vSphere in particular?

Operations: Running multiple clusters on bare-metal is hard

  • Multiple clusters are a lot easier to run in a virtual environment, and each cluster can follow its own lifecycle policies (e.g. for K8s version upgrades) instead of forcing a single bare-metal cluster to upgrade. Running multiple Kubernetes versions side-by-side might already be, or soon become, a requirement.
  • It also makes a lot of sense to run Kubernetes side-by-side with your existing VMs instead of building a new hardware silo and adding operational complexity
  • VMware's compute platform vSphere is the de-facto standard for datacenter workloads in companies across industries, and operational experience and resources are available across the globe. Bare-metal operations typically introduce new risks and operational complexity.

Availability/Resilience and Quality of service: you can plan for failures without compromising density

  • Virtual K8s clusters can benefit even in "two physical datacenters" scenarios where the underlying infrastructure is spread across both sites. A "stretched" platform (e.g. vSphere with vSAN Stretched Cluster) allows you to run logical three-node Kubernetes control planes in VMs and protect the control-plane and worker nodes using vSphere HA.
  • vSphere also allows you to prioritize workloads by configuring policies (networking, storage, compute, memory) that will also be enforced during outages (Network I/O Control, Storage I/O Control, Resource Pools, Reservations, Limits, HA Restart Priorities, …) – see the sketch after this list.
    • Restart a failed or problematic Kubernetes node VM before Kubernetes itself even detects a problem.
    • Provide the Kubernetes control plane availability by utilizing mature heartbeat and partition detection mechanisms in vSphere to monitor servers, Kubernetes VMs, and network connectivity to enable quick recovery.
    • Prevent service disruption and performance impacts through proactive failure detection, live migration (vMotion) of VMs, automatic load balancing, restarts due to infrastructure failures, and highly available storage
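
As a concrete illustration of the HA restart priority point above: the restart priority of Kubernetes control-plane node VMs can be raised per VM on the vSphere cluster. The following is a minimal sketch using pyVmomi; the vCenter address, credentials, cluster name, and VM naming convention are placeholder assumptions, not part of the original post.

```python
# Sketch: raise the vSphere HA restart priority for K8s control-plane node VMs.
# All names and credentials below are placeholders for illustration only.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Collect inventory objects via a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource, vim.VirtualMachine], True)
cluster = next(o for o in view.view
               if isinstance(o, vim.ClusterComputeResource) and o.name == "compute-cluster-01")
cp_vms = [o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name.startswith("k8s-control-plane-")]

# Per-VM HA override: restart control-plane nodes with "high" priority.
overrides = [
    vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm,
            dasSettings=vim.cluster.DasVmSettings(restartPriority="high"),
        ),
    )
    for vm in cp_vms
]

WaitForTask(cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(dasVmConfigSpec=overrides), modify=True))
Disconnect(si)
```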

Resource fragmentation, overhead & capacity management: single-purpose usage of hardware resources vs. multi-purpose platform

  • Running Kubernetes clusters virtually and using VMware DRS to balance them across vSphere hosts allows you to deploy multiple K8s clusters on the same hardware setup and increases the utilization of hardware resources
  • When running multiple K8s clusters on dedicated bare-metal hosts, you lose the overall capability to utilize hardware resources across the infrastructure pool
    • Many environments won't be able to quickly repurpose existing capacity from a bare-metal host in one cluster to another cluster
  • From a vSphere perspective, Kubernetes is yet another set of VMs and capacity management can be done across multiple Kubernetes clusters; it gets more efficient the more clusters you run
    • Deep integrations with existing operational tools like vRealize Operations allow operational teams to deliver Kubernetes with confidence
  • K8s is only a Day-1 scheduler – it places pods when they are created but does not rebalance running pods based on actual resource usage
    • In case of imbalance on the vSphere layer, vSphere DRS rebalances the K8s node VMs across the physical estate to better utilize the underlying cluster, delivering the best of both worlds from a scheduling perspective (see the placement sketch after this list)
  • High availability and "stand-by" systems are cost-intensive in bare-metal deployments, especially in edge scenarios: to provide some level of redundancy, spare physical hardware capacity (servers) needs to be available. In the worst case you need to reserve capacity per cluster, which increases the physical overhead (CAPEX and OPEX) per cluster.
    • vSphere allows you to share failover capacity, including strict admission control to protect important workloads, across Kubernetes clusters because the VMs can be restarted and reprioritized, e.g. based on the scope of a failure
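
To illustrate the DRS placement point from the list above, here is a similar hedged sketch (pyVmomi again, with placeholder names) that adds an anti-affinity rule so the control-plane node VMs of one virtual Kubernetes cluster are kept on separate ESXi hosts while DRS keeps balancing the rest of the estate.

```python
# Sketch: DRS anti-affinity rule keeping K8s control-plane node VMs on different hosts.
# vCenter address, credentials, cluster and VM names are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource, vim.VirtualMachine], True)
cluster = next(o for o in view.view
               if isinstance(o, vim.ClusterComputeResource) and o.name == "compute-cluster-01")
cp_vms = [o for o in view.view
          if isinstance(o, vim.VirtualMachine) and o.name.startswith("k8s-control-plane-")]

rule = vim.cluster.AntiAffinityRuleSpec(
    name="k8s-control-plane-separate-hosts",
    enabled=True,
    vm=cp_vms,  # DRS will keep these VMs on different ESXi hosts
)

WaitForTask(cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]),
    modify=True))
Disconnect(si)
```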

Single point of integration with the underlying infrastructure

  • A programmable, Software-Defined Datacenter: Infrastructure as Code allows you to automate everything on top of an API-driven datacenter stack
  • Persistent storage integration would otherwise need to be built for each underlying storage architecture individually; running K8s on vSphere lets you leverage already abstracted and virtualized storage devices (see the sketch after this list)
  • Monitoring of hardware components is specific to individual hardware choices; vSphere offers an abstracted way of monitoring across different hardware generations and vendors
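
To make the storage point more concrete: with the vSphere CSI driver deployed, persistent storage is consumed through a regular Kubernetes StorageClass instead of per-array integrations. Below is a minimal sketch using the official Kubernetes Python client; the class name and the SPBM storage policy name are assumptions for illustration.

```python
# Sketch: StorageClass backed by the vSphere CSI driver.
# "vsphere-gold" and "gold-policy" are placeholder names.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsphere-gold"),
    provisioner="csi.vsphere.vmware.com",             # vSphere CSI driver
    parameters={"storagepolicyname": "gold-policy"},  # SPBM policy defined in vCenter (assumed)
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)

client.StorageV1Api().create_storage_class(storage_class)
```

Any PersistentVolumeClaim referencing this class then gets a volume provisioned from vSphere storage, independent of the hardware underneath.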

Security & Isolation

  • vSphere delivers hardware-level isolation at the Kubernetes cluster, namespace, and even pod level
  • VMware infrastructure also enables the pattern of many smaller Kubernetes clusters, providing true multi-tenant isolation with a reduced fault domain. Smaller clusters reduce the blast radius, i.e. any problem with one cluster only affects the pods in that small cluster and won’t impact the broader environment.
  • In addition, smaller clusters mean each developer or environment (test, staging, production) can have their own cluster, allowing them to install their own CRDs or operators without risk of adversely affecting other teams.

Credits and further reading

#vK8s – friends don’t let friends run Kubernetes on bare-metal

So, no matter what your favorite Kubernetes framework is these days – I am convinced it runs best on a virtual infrastructure and, of course, even better on vSphere. Friends don't let friends run Kubernetes on bare-metal. And what hashtag could summarize this better than something short and crisp like #vK8s? I liked this idea so much that I created some "RUN vK8s" images (inspired by my colleagues Frank Denneman and Duncan Epping – guys, it's been NINE years since RUN DRS!) that I want to share with all of you. You can find the repository on GitHub – feel free to use them wherever you like.

VMware Project Pacific – collection of materials

Blogposts:

VMworld US 2019:

VMworld Europe 2019 sessions:

  • HBI1452BE – Project Pacific: Supervisor Cluster Deep Dive – STREAM DOWNLOAD
  • HBI1761BE – Project Pacific 101: The Future of vSphere – STREAM DOWNLOAD
  • HBI4500BE – Project Pacific: Guest Clusters Deep Dive – STREAM DOWNLOAD
  • HBI4501BE – Project Pacific: Native Pods Deep Dive – STREAM DOWNLOAD
  • HBI4937BE – Introducing Project Pacific: Transforming vSphere into the App Platform of the Future – STREAM DOWNLOAD
  • KUB1840BE – Run Kubernetes Consistently Across Clouds with Tanzu & Project Pacific – STREAM DOWNLOAD
  • KUB1851BE – Managing Clusters: Project Pacific on vSphere & Tanzu Mission Control – STREAM DOWNLOAD

Podcasts:

Labs / Hands-On:

  • HOL-2013-01-SDC – Project Pacific – Lightning Lab: https://labs.hol.vmware.com/HOL/catalogs/lab/6877

Other interesting sources:

Feel free to reach out if you are missing any interesting sessions here – happy to update this post anytime! @bbrundert

Running Kubernetes on VMware – a brief overview about the options

After I shared a brief overview about running containers on vSphere, I'd like to go a little further this time and share the VMware integration points with Kubernetes. As you are probably aware, VMware has just released its own Kubernetes distribution called "Pivotal Container Service", which became generally available on February 12, 2018. While this is VMware's and Pivotal's recommended and preferred way to run Kubernetes in an enterprise environment, there are several options to either consume any other Kubernetes distribution or build Kubernetes solely from open source. No matter what path you choose, VMware has a variety of solutions to offer.


Compute / Cloud Provider

Network & Security

Storage

Monitoring

Container Management

  • Project Harbor: An Enterprise-class Container Registry Server based on Docker Distribution, embedded in Pivotal Container Service and vSphere Integrated Containers
  • vRealize Automation: governance and self-service to request K8s clusters, nodes and e.g. namespaces; custom integration/blueprints required

3rd Party Documentation

Running containers on vSphere – a brief overview about the options

Inspired by a tweet by Kendrick Coleman, I decided to quickly summarize the current VMware- and Pivotal-provided options for running containers on vSphere.

  • vSphere Integrated Containers (VIC): running containers as VMs on vSphere
    • Virtual Container Host (VCH): nearly complete Docker API support, one container per Container Host VM ("Container VM")
    • Docker Container Hosts (DCH): native Docker API support, multiple containers per Container Host VM
  • Pivotal Container Service (PKS): enterprise grade K8s distribution
  • Pivotal Application Service (PAS), formerly known as Pivotal Cloud Foundry (PCF): integrated Platform-as-a-Service offering based on Cloud Foundry
  • VMware Integrated OpenStack with Kubernetes (VIOK): running K8s on top of OpenStack
  • Photon OS: open source minimal Linux container host optimized for cloud-native applications for vSphere
  • Container Service Extension for vCloud Director: a vCloud Director add-on that manages the life cycle of Kubernetes clusters for tenants.


In addition to these integrated solutions, you can also build or bring your own solution and leverage projects such as:

  • Project Hatchway: persistent storage for Cloud Native Applications
  • NSX: software-defined networking and security for containers
  • Project Harbor: Enterprise-class Container Registry Server based on Docker Distribution
  • Project Admiral: Highly Scalable Container Management Platform
  • Weathervane: application-level performance benchmark designed to allow the investigation of performance tradeoffs in modern virtualized and cloud infrastructures

vSphere Integrated Containers 1.1 has been released

Great news for all vSphere customers who are looking at ways to run containers on vSphere. VMware's product team just released vSphere Integrated Containers 1.1 on VMware.com.

A quick reminder: the VMware product vSphere Integrated Containers references the core Engine (open source project), the Registry (aka the open source project Harbor) as well as the Management Portal (aka the open source project Admiral). You can pick the individual open source components from GitHub and run them on your own terms (see e.g. the Harbor example from a recent Golang conference in China) or use the integrated & supported product delivered by VMware.

Key highlights from the Release Notes:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers. (Link)


You can also use vic-machine upgrade to upgrade the Virtual Container Hosts. From the Upgrade Guide:

When you upgrade a running VCH, the VCH goes temporarily offline, but container workloads continue as normal during the upgrade process. Upgrading a VCH does not affect any mapped container networks that you defined by setting the vic-machine create --container-network option. The following operations are not available during upgrade:

  • You cannot access container logs
  • You cannot attach to a container
  • NAT based port forwarding is unavailable

IMPORTANT: Upgrading a VCH does not upgrade any existing container VMs that the VCH manages. For container VMs to boot from the latest version of bootstrap.iso, container developers must recreate them.

With the release of vSAN 6.6 and vCenter 6.5d, you might want to test out VIC 1.1 in your test/lab environment and leverage it to build a great platform for your development teams. Speaking of compatibility:


There is also a new demo video that shows the product & the updated User Interfaces in more detail. Check out the video here:

Time to update the lab!

vSphere Integrated Containers 0.9 (OSS version) now available

Great news for everyone who wants to run Docker on vSphere. VMware released the open source version of vSphere Integrated Containers 0.9, which is now available via Bintray here.

Please note: this is an interim pre-release and does not include support from VMware global support services (GSS). Support is OSS community level only.

Changes from the 0.8 version are documented here:

You can now go to https://vmware.github.io/vic-product/index.html#getting-started and see the documentation for the latest official product version (in this case VIC Engine 0.8 as part of vSphere Integrated Containers 1.0) and the current OSS release (in this case VIC Engine 0.9).

Supported Docker Commands for 0.9 are listed in the documentation at https://vmware.github.io/vic-product/assets/files/html/latest/vic_app_dev/container_operations.html.

vSphere Integrated Containers Engine 0.7 has been released


Only a few days ago, the vSphere Integrated Containers team released the newest version, 0.7, on GitHub and Bintray. I just want to summarize a few resources for tests with this release here and document some gotchas that have already been raised. Remember: this code is still a beta release, so don't deploy it to production immediately. You can also read up on the announcement of VIC as part of vSphere 6.5 in the official press release from VMworld.

Here are the links:

During the installation, you can now specify a fixed IP address instead of DHCP for your Virtual Container Host (VCH) – this is one of the new features in the 0.7 release. Please make sure to use --dns-server with your vic-machine command to set the DNS server address in the VCH. Otherwise it will use the network gateway, which results in timeout errors during the installation. There is already an issue documented at https://github.com/vmware/vic/issues/3060.

If you deploy VIC in your environment and encounter any issues, please open an issue on GitHub (https://github.com/vmware/vic/issues). You can also reach out to me via Twitter and I'll try to get back to you as soon as possible.

VMware Cloud Foundation – links to key product resources

There has been a lot of VMworld coverage over the last 10 days. I just wanted to summarize some of the key resources around one of the big announcements: VMware Cloud Foundation!

Please note, there will also be a webinar on September 13, 2016 – make sure to check it out in case you are looking for more details!



Reset to Standard vSwitch from Distributed vSwitch on homelab Intel NUC

I just had to reset my homelab Intel NUC's ESXi 6.0 network configuration because I wanted to test a specific setting in vSphere Integrated Containers. Unfortunately, the Intel NUC only has one physical uplink, and that uplink (and VMkernel Portgroup) was configured on a Distributed vSwitch – I needed it on a Standard vSwitch for the test. Migrating the VMkernel Portgroup from the Distributed to a Standard vSwitch was a little challenging, and I didn't want to set up an external monitor to use the Direct Console User Interface (DCUI). But with the help of William's ESXi virtual appliance and some hints in the vSphere documentation, I was able to reproduce the necessary keyboard inputs and perform the reset with only a USB keyboard attached to the NUC. Instead of summarizing it only for myself, I thought I'd share it here as I couldn't find similar instructions on Google.

Please don't do this in a production environment; blindly configuring a system isn't a good idea.

tl;dr: the steps are: F2 – TAB – <root_password> – ENTER – DOWN – DOWN – DOWN – DOWN – ENTER – DOWN – ENTER – F11


What is actually going on if you could view the DCUI? First, you need to press F2 (and potentially "fn" or similar) to get into ESXi's DCUI system management:

Screenshot 2016-08-01 at 07.58.16

It will ask you to authenticate first (pressing TAB – <root_password> – ENTER):

Screenshot 2016-08-01 at 08.15.31

Then, you need to go to "Network Restore Options" in the System Customization menu (pressing DOWN – DOWN – DOWN – DOWN – ENTER):

Screenshot 2016-08-01 at 07.58.48

And in the "Network Restore Options" menu, you'll have the option to "Restore Standard Switch" (pressing DOWN – ENTER – F11):

Screenshot 2016-08-01 at 07.59.11

After selecting "Standard Switch", you'll need to confirm a dialog with "F11", and then a new vSwitch will be created on your host. Mine worked like a charm: I found a new Standard vSwitch with vmk0 using my "old" management IP address for ESXi.