Running containers on vSphere – a brief overview of the options

Inspired by a tweet by Kendrick Coleman, I decided to quickly summarize the current VMware- and Pivotal-provided options for running containers on vSphere.

  • vSphere Integrated Containers (VIC): running containers as VMs on vSphere
    • Virtual Container Host (VCH): nearly complete Docker API support, one container per Container Host VM (“Container VM”)
    • Docker Container Hosts (DCH): native Docker API support, multiple containers per Container Host VM
  • Pivotal Container Service (PKS): enterprise grade K8s distribution
  • Pivotal Application Service (PAS), formerly known as Pivotal Cloud Foundry (PCF): integrated Platform-as-a-Service offering based on Cloud Foundry
  • VMware Integrated OpenStack with Kubernetes (VIOK): running K8s on top of OpenStack
  • Photon OS: open source minimal Linux container host optimized for cloud-native applications for vSphere
  • Container Service Extension for vCloud Director: a vCloud Director add-on that manages the life cycle of Kubernetes clusters for tenants.


In addition to these integrated solutions, you can also build or bring your own solution and leverage projects such as:

  • Project Hatchway: persistent storage for Cloud Native Applications
  • NSX: software-defined networking and security for containers
  • Project Harbor: Enterprise-class Container Registry Server based on Docker Distribution
  • Project Admiral: Highly Scalable Container Management Platform
  • Weathervane: application-level performance benchmark designed to allow the investigation of performance tradeoffs in modern virtualized and cloud infrastructures

vSphere Integrated Containers 1.1 has been released

Great news for all vSphere customers who are looking at ways to run containers on vSphere. VMware’s product team just released vSphere Integrated Containers 1.1 on VMware.com.

A quick reminder: the VMware product vSphere Integrated Containers comprises the core Engine (open source project), the Registry (aka the open source project Harbor) as well as the Management Portal (aka the open source project Admiral). You can pick the individual open source components from GitHub and run them on your own terms (see e.g. the Harbor example from a recent Golang conference in China) or use the integrated & supported product delivered by VMware.
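
To give an idea of the “run it on your own terms” path: the Admiral project, for example, can be started as a single container straight from Docker Hub. The image name and port below follow the project’s public quick-start documentation at the time of writing, so treat this as a rough sketch rather than a supported deployment:

docker run -d -p 8282:8282 --name admiral vmware/admiral
# the Admiral UI should then be reachable at http://<docker-host>:8282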

Key highlights from the Release Notes:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry (see the sketch after this list)
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers. (Link)
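
As a sketch of how the Notary support can be consumed from the client side, the standard Docker Content Trust environment variables can be pointed at the registry’s Notary endpoint. The registry hostname below is a placeholder and port 4443 is only the commonly documented Notary default, so verify both against your own deployment:

# enable Docker Content Trust in the client shell
export DOCKER_CONTENT_TRUST=1
# point the client at the registry's Notary server (placeholder hostname, assumed default port)
export DOCKER_CONTENT_TRUST_SERVER=https://registry.example.com:4443
# image pulls and pushes via this client are now verified against the Notary signatures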


You can also use vic-machine upgrade to upgrade the Virtual Container Hosts. From the Upgrade Guide:

When you upgrade a running VCH, the VCH goes temporarily offline, but container workloads continue as normal during the upgrade process. Upgrading a VCH does not affect any mapped container networks that you defined by setting the vic-machine create --container-network option. The following operations are not available during upgrade:

  • You cannot access container logs
  • You cannot attach to a container
  • NAT based port forwarding is unavailable

IMPORTANT: Upgrading a VCH does not upgrade any existing container VMs that the VCH manages. For container VMs to boot from the latest version of bootstrap.iso, container developers must recreate them.
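
For reference, a vic-machine upgrade invocation could look roughly like the following. The target, user and thumbprint values are placeholders, so check vic-machine upgrade --help for the exact options available in your build:

./vic-machine-linux upgrade --target vcenter.example.com --user administrator@vsphere.local --thumbprint <vCenter certificate thumbprint> --name VCH001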

With the release of vSAN 6.6 and vCenter 6.5d, you might want to test VIC 1.1 in your lab environment and leverage it to build a great platform for your development teams. Speaking of compatibility:


There is also a new demo video that shows the product and the updated user interfaces in more detail. Check out the video here:

Time to update the lab!

vSphere Integrated Containers 0.9 (OSS version) now available

Great news for everyone who wants to run Docker on vSphere. VMware released the open source version of vSphere Integrated Containers 0.9, which is now available via Bintray here.

Please note: this is an interim pre-release and does not include support from VMware global support services (GSS). Support is OSS community level only.

Changes from the 0.8 version are documented here:

You can now go to https://vmware.github.io/vic-product/index.html#getting-started and see the documentation for the latest official product version (in this case VIC Engine 0.8 as part of vSphere Integrated Containers 1.0) and the current OSS release (in this case VIC Engine 0.9).

Supported Docker Commands for 0.9 are listed in the documentation at https://vmware.github.io/vic-product/assets/files/html/latest/vic_app_dev/container_operations.html.

vSphere Integrated Containers Engine 0.7 has been released


Only a few days ago, the vSphere Integrated Containers team released the newest version 0.7 on GitHub and Bintray. I just want to summarize a few resources for tests with this release here and document some gotchas that have already been raised. Remember: this code is still a beta release, so don’t deploy it to production immediately. You can also read up on the announcement of VIC as part of vSphere 6.5 in the official press release from VMworld.

Here are the links:

During the installation, you can now specify a fixed IP address instead of DHCP for your Virtual Container Host (VCH) – this is one of the new features in the 0.7 release. Please make sure to use --dns-server with your vic-machine command to set the DNS server address in the VCH. Otherwise it will use the network gateway, which results in timeout errors during the installation. There is already an issue documented at https://github.com/vmware/vic/issues/3060.
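
Applied to a create call, this simply means appending the flag. DNS_SERVER_IP is a placeholder for the DNS server in your environment, the remaining flags mirror the example deployment later in this post, and the static IP options themselves are omitted for brevity:

./vic-machine-linux create --bridge-network 'VCH Bridge' --external-network 'VM Network' --image-datastore mydatastore --target 'root@esxi01.think-v.com' --name VCH001 --dns-server DNS_SERVER_IP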

If you deploy VIC in your environment and encounter any issues, please open an issue on GitHub (https://github.com/vmware/vic/issues). You can also reach out to me via Twitter and I’ll try to get back to you as soon as possible.

vSphere Integrated Containers 0.4 – Inspecting VCH and ContainerVM

Last week, I had an interesting conversation with my friend Michael about vSphere Integrated Containers (VIC) in its current version 0.4. We discussed some of the key concepts and how they relate to other container implementations out there. I decided to summarize the key observations in a little more detail here, as I expect this information to be interesting for operations teams once they start running VIC.
Please note: this is based on the currently available Open Source VIC project in version 0.4 running on vSphere 6.0 in my homelab.
For simplicity, I decided to go with a “standalone ESXi” installation of my Virtual Container Host (VCH) in this example.

First, I created a new container host called “VCH001” on my ESXi host from my PhotonOS-based worker VM:
root@photonbox [ /workspace/vic ]# ./vic-machine-linux create --bridge-network 'VCH Bridge' --external-network 'VM Network' --image-datastore mydatastore --target 'root@esxi01.think-v.com' --name VCH001

The output of this command shows the necessary details:
INFO[2016-08-06T19:51:56Z] Please enter ESX or vCenter password:
INFO[2016-08-06T19:52:00Z] ### Installing VCH ####
INFO[2016-08-06T19:52:00Z] Generating certificate/key pair - private key in ./VCH001-key.pem
INFO[2016-08-06T19:52:02Z] Validating supplied configuration
INFO[2016-08-06T19:52:05Z] Firewall status: DISABLED on /ha-datacenter/host/esxi01.think-v.com/esxi01.think-v.com
INFO[2016-08-06T19:52:05Z] Firewall configuration OK on hosts:
INFO[2016-08-06T19:52:05Z] /ha-datacenter/host/esxi01.think-v.com/esxi01.think-v.com
INFO[2016-08-06T19:52:05Z] License check OK
INFO[2016-08-06T19:52:05Z] DRS check SKIPPED - target is standalone host
INFO[2016-08-06T19:52:07Z] Creating Resource Pool VCH001
INFO[2016-08-06T19:52:07Z] Creating appliance on target
INFO[2016-08-06T19:52:07Z] Network role client is sharing NIC with external
INFO[2016-08-06T19:52:07Z] Network role management is sharing NIC with external
INFO[2016-08-06T19:52:09Z] Uploading images for container
INFO[2016-08-06T19:52:09Z] bootstrap.iso
INFO[2016-08-06T19:52:09Z] appliance.iso
INFO[2016-08-06T19:52:22Z] Waiting for IP information
INFO[2016-08-06T19:52:42Z] Waiting for major appliance components to launch
INFO[2016-08-06T19:52:44Z] Initialization of appliance successful
INFO[2016-08-06T19:52:44Z]
INFO[2016-08-06T19:52:44Z] Log server:
INFO[2016-08-06T19:52:44Z] https://VCH_IP:2378
INFO[2016-08-06T19:52:44Z]
INFO[2016-08-06T19:52:44Z] DOCKER_HOST=VCH_IP:2376
INFO[2016-08-06T19:52:44Z]
INFO[2016-08-06T19:52:44Z] Connect to docker:
INFO[2016-08-06T19:52:44Z] docker -H VCH_IP:2376 --tls info
INFO[2016-08-06T19:52:44Z] Installer completed successfully

More details about the inner workings can be found in the VIC 0.4 blog posts by Cormac that are also listed in the link section below. In this post, I’d like to focus more on the topic of state information and how it is handled in VIC 0.4.

First of all, it is important to understand the difference between VCHs in VIC and other (in this case Linux-based) container solutions. While each container in an N:1 model (containers:Linux host) has its private namespace, the underlying shared kernel provides the container control plane to look into containers and perform process-related actions (start, stop, …). Runtime environment and control plane are directly coupled.

In VIC, the runtime/execution environment of a container is a so-called containerVM (based on PhotonOS), which is decoupled from its “control plane”, the Virtual Container Host itself. This creates a new layer of abstraction in which both communication flows and state information need to be captured and made available.

To establish a secure communications path between these two components, VIC also introduces the concept of a Tether to connect into the actual containerVM. This concept is part of the Port Layer Abstractions that allows VIC to be extensible. More details are described on the VIC Container Abstractions documentation page.

Let me share a summary of what the VCH and containerVMs actually look like on the infrastructure – and where state information is actually stored. First, let’s look at the VMX file of the VCH. As expected, there are two vNICs attached:

ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
ethernet0.pciSlotNumber = "192"
ethernet0.uptCompatibility = "TRUE"
ethernet0.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "VCH Bridge"
ethernet1.pciSlotNumber = "224"
ethernet1.uptCompatibility = "TRUE"
ethernet1.present = "TRUE"

Here, we also find the appliance boot ISO that was transferred during the deployment of the VCH:

ide0:0.deviceType = "cdrom-image"
ide0:0.fileName = "appliance.iso"
ide0:0.present = "TRUE"

The general approach to storing state information is described in the Configuration persistence mechanism overview documentation. According to this, VIC makes use of the vSphere extraConfig and guestinfo mechanisms to store the relevant information. But where do extraConfig and guestinfo actually reside? In a normal vSphere VM, this information is stored in the VM’s VMX file (and remember, a container in VIC actually is a VM – the containerVM).

Starting a simple “hello-world” container should trigger the whole workflow, including the creation of a new VM. But let’s go through it step by step:

root@photonbox [ /workspace/vic ]# docker -H VCH_IP:2376 --tls run -it hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
a3ed95caeb02: Pull complete
c04b14da8d14: Pull complete
Digest: sha256:548e9719abe62684ac7f01eea38cb5b0cf467cfe67c58b83fe87ba96674a4cdd
Status: Downloaded newer image for library/hello-world:latest

Looking at the recently executed containers from my worker VM, we can see the following reference:

root@photonbox [ /workspace/vic ]# docker -H VCH_IP:2376 --tls ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED                    STATUS              PORTS               NAMES
2cf7f483bf6e        hello-world         "/hello"            Less than a second ago     Stopped                                 jolly_panini

So our container ran with ID 2cf7f483bf6e. How does that containerVM actually look on our standalone ESXi host and, even more interestingly, where does the information about the container (shown by docker ps -a) come from?

First of all, there is a newly created VM named 2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58 – just as expected. Looking at its VMX file, we find a lot of the session information that we already saw in docker ps -a:

guestinfo./common/name = "jolly_panini"
guestinfo./sessions|2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58/common/name = "jolly_panini"
guestinfo./sessions|2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58/cmd/Path = "/hello"
guestinfo./repo = "hello-world"
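
If you would rather not open the VMX file on the datastore directly, the same extraConfig/guestinfo entries can also be read over the vSphere API, for example with govc. This is just a sketch and assumes govc is installed and that GOVC_URL (plus credentials) points at the ESXi host:

# dump the containerVM's extraConfig, which includes the guestinfo keys shown above
govc vm.info -e 2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58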

The containerVM mounts the “bootstrap.iso” from VCH001’s VM folder (this ISO was also deployed by the vic-machine installer):

ide0:0.deviceType = "cdrom-image"
ide0:0.fileName = "/vmfs/volumes/d37f7a1b-0ab13c48/VCH001/bootstrap.iso"
ide0:0.present = "TRUE"

The containerVM also has a serial connection to the VCH (explanation):

serial0.allowGuestConnectionControl = "FALSE"
serial0.fileType = "network"
serial0.fileName = "tcp://VCH_IP:8080"
serial0.network.endPoint = "client"
serial0.yieldOnMsrRead = "TRUE"
serial0.present = "TRUE"
serial0.hardwareFlowControl = "TRUE"

The containerVM’s network adapter is connected to the “VCH Bridge” port group and therefore only talks to the VCH. This is where the container traffic flows; management and control plane traffic goes via serial0.

ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VCH Bridge"
ethernet0.pciSlotNumber = "192"
ethernet0.uptCompatibility = "TRUE"
ethernet0.present = "TRUE"

The containerVM also has its own hard disk (an attached VMDK):

scsi0.virtualDev = "pvscsi"
scsi0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "2cf7f483bf6e7f32daa53f51ca388d5fb153f78d3a74d313318099086638ad58.vmdk"
scsi0:0.present = "TRUE"

To delete the VCH and the containerVMs, vic-machine-linux is called with the “delete” option:

root@photonbox [ /workspace/vic ]# ./vic-machine-linux delete --target esxi01.think-v.com --user root --name VCH001
INFO[0000] Please enter ESX or vCenter password:
INFO[2016-08-06T20:50:24Z] ### Removing VCH ####
INFO[2016-08-06T20:50:28Z] Removing VMs
INFO[2016-08-06T20:50:33Z] Removing images
INFO[2016-08-06T20:50:34Z] Removing volumes
INFO[2016-08-06T20:50:36Z] Removing appliance VM network devices
INFO[2016-08-06T20:50:38Z] Bridge network was not created during VCH deployment, leaving it there
INFO[2016-08-06T20:50:40Z] Removing Resource Pool VCH001
INFO[2016-08-06T20:50:40Z] Completed successfully


In summary, all container state information is kept close to the containerVM, stored in its VMX file. The VCH and the containerVMs use the ISO files that are transferred during the vic-machine install process. VIC also introduces a new level of abstraction between control plane and execution environment that allows VIC to be extensible for future use cases.


Additional links around VIC 0.4 – most of them by Cormac: