Podcast: VMUG UserCon Germany live

I said it repeatedly in various situations at the VMUG: there's really no better time than now to start creating new content and sharing it with the community. So I packed my portable podcast studio (my microphone) and spontaneously spoke with some attendees of the VMUG UserCon (and TAM Summit) on site in Frankfurt to gather a few live impressions.

A heartfelt thank you to everyone who joined in so spontaneously and completely unprepared! It was really a lot of fun!

Enjoy listening, and I always appreciate feedback!

VMware Project Pacific – collection of materials

Blogposts:

VMworld US 2019:

VMworld Europe 2019 sessions:

  • HBI1452BE – Project Pacific: Supervisor Cluster Deep Dive – STREAM DOWNLOAD
  • HBI1761BE – Project Pacific 101: The Future of vSphere – STREAM DOWNLOAD
  • HBI4500BE – Project Pacific: Guest Clusters Deep Dive – STREAM DOWNLOAD
  • HBI4501BE – Project Pacific: Native Pods Deep Dive – STREAM DOWNLOAD
  • HBI4937BE – Introducing Project Pacific: Transforming vSphere into the App Platform of the Future – STREAM DOWNLOAD
  • KUB1840BE – Run Kubernetes Consistently Across Clouds with Tanzu & Project Pacific – STREAM DOWNLOAD
  • KUB1851BE – Managing Clusters: Project Pacific on vSphere & Tanzu Mission Control – STREAM DOWNLOAD

Podcasts:

Labs / Hands-On:

  • HOL-2013-01-SDC – Project Pacific – Lightning Lab: https://labs.hol.vmware.com/HOL/catalogs/lab/6877

Other interesting sources:

Feel free to reach out if you are missing any interesting sessions here – happy to update this post anytime! @bbrundert

Running Kubernetes on VMware – a brief overview of the options

After I shared a brief overview of running containers on vSphere, I'd like to go a little further this time and share the VMware integration points with Kubernetes. As you are probably aware, VMware has just released its own Kubernetes distribution called "Pivotal Container Service", which became generally available on February 12, 2018. While this is VMware's and Pivotal's recommended and preferred way to run Kubernetes in an enterprise environment, there are several options to either consume any other Kubernetes distribution or build Kubernetes solely from open source. No matter which path you choose, VMware has a variety of solutions to offer.


Compute / Cloud Provider

Network & Security

Storage

Monitoring

Container Management

  • Project Harbor: An Enterprise-class Container Registry Server based on Docker Distribution, embedded in Pivotal Container Service and vSphere Integrated Containers
  • vRealize Automation: governance and self-service to request K8s clusters, nodes and e.g. namespaces; custom integration/blueprints required

3rd Party Documentation

Running containers on vSphere – a brief overview of the options

Inspired by a tweet by Kendrick Coleman, I decided to quickly summarize the current VMware- and Pivotal-provided options for running containers on vSphere.

  • vSphere Integrated Containers (VIC): running containers as VMs on vSphere
    • Virtual Container Host (VCH): nearly complete Docker API support, one container per Container Host VM ("Container VM")
    • Docker Container Hosts (DCH): native Docker API support, multiple containers per Container Host VM
  • Pivotal Container Service (PKS): enterprise grade K8s distribution
  • Pivotal Application Service (PAS); formerly known as Pivotal Cloud Foundry (PCF): integrated Platform-as-a-Service offering based on Cloud Foundry
  • VMware Integrated OpenStack with Kubernetes (VIOK): running K8s on top of OpenStack
  • Photon OS: open source minimal Linux container host optimized for cloud-native applications for vSphere
  • Container Service Extension for vCloud Director: a vCloud Director add-on that manages the life cycle of Kubernetes clusters for tenants.
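To give a feel for the VIC option in practice: a Virtual Container Host exposes the Docker API, so a standard Docker client can talk to it directly. This is only a sketch; the endpoint address is a placeholder, and the exact port and TLS settings depend on how the VCH was deployed.

```shell
# Point a regular Docker client at a VCH endpoint.
# vch01.lab.local is a placeholder; use the address vic-machine printed after deployment.
export DOCKER_HOST=tcp://vch01.lab.local:2376
export DOCKER_TLS_VERIFY=1

docker info                      # basic connectivity check against the VCH
docker run -d --name web nginx   # each container runs as its own "Container VM"
```

The nice part of this model is that the development workflow stays plain Docker while each container gets the isolation of a VM underneath.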


In addition to these integrated solutions, you can also build or bring your own solution and leverage projects such as:

  • Project Hatchway: persistent storage for Cloud Native Applications
  • NSX: software-defined networking and security for containers
  • Project Harbor: Enterprise-class Container Registry Server based on Docker Distribution
  • Project Admiral: Highly Scalable Container Management Platform
  • Weathervane: application-level performance benchmark designed to allow the investigation of performance tradeoffs in modern virtualized and cloud infrastructures

vSphere Integrated Containers 1.1 has been released

Great news for all vSphere customers who are looking at ways to run containers on vSphere. VMware's product team just released vSphere Integrated Containers 1.1 on VMware.com.

A quick reminder: the VMware product vSphere Integrated Containers comprises the core Engine (open source project), the Registry (aka the open source project Harbor), and the Management Portal (aka the open source project Admiral). You can pick the individual open source components from GitHub and run them on your own terms (see e.g. the Harbor example from a recent Golang conference in China) or use the integrated & supported product delivered by VMware.

Key highlights from the Release Notes:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers. (Link)


You can also use vic-machine upgrade to upgrade the Virtual Container Hosts. From the Upgrade Guide:

When you upgrade a running VCH, the VCH goes temporarily offline, but container workloads continue as normal during the upgrade process. Upgrading a VCH does not affect any mapped container networks that you defined by setting the vic-machine create --container-network option. The following operations are not available during upgrade:

  • You cannot access container logs
  • You cannot attach to a container
  • NAT based port forwarding is unavailable

IMPORTANT: Upgrading a VCH does not upgrade any existing container VMs that the VCH manages. For container VMs to boot from the latest version of bootstrap.iso, container developers must recreate them.
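For reference, a VCH upgrade with vic-machine looks roughly like the following. All names, paths, and the thumbprint are placeholders, and the available flags may differ between vic-machine releases, so treat this as a sketch rather than a copy-paste command and check `vic-machine upgrade --help` for your version.

```shell
# Upgrade a VCH named vch01 (all values below are placeholders).
./vic-machine-linux upgrade \
  --target vcenter.lab.local \
  --user 'administrator@vsphere.local' \
  --thumbprint 'AA:BB:...' \
  --name vch01 \
  --compute-resource /dc1/host/cluster1
```

As the Upgrade Guide notes, plan for the short window in which logs, attach, and NAT port forwarding are unavailable.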

With the release of vSAN 6.6 and vCenter 6.5d, you might want to test out VIC 1.1 in your test/lab environment and leverage it to build a great platform for your development teams. Speaking of compatibility:


There is also a new demo video that shows the product & the updated User Interfaces in more detail. Check out the video here:

Time to update the lab!

vSphere Integrated Containers Engine 0.7 has been released


Only a few days ago, the vSphere Integrated Containers team released the newest version, 0.7, on GitHub and Bintray. I just want to summarize a few resources for tests with this release here and document some gotchas that have already been raised. Remember: this code is still a beta release, so don't deploy it to production immediately. You can also read up on the announcement of VIC as part of vSphere 6.5 in the official press release from VMworld.

Here are the links:

During the installation, you can now specify a fixed IP address instead of DHCP for your Virtual Container Host (VCH) – this is one of the new features in the 0.7 release. Please make sure to use --dns-server with your vic-machine command to set the DNS server address in the VCH. Otherwise it will use the network gateway, which results in timeout errors during the installation. There is already an issue documented at https://github.com/vmware/vic/issues/3060.
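A hedged example of such a deployment command, assuming a static client network IP. Flag names changed between early VIC releases, so verify them against `vic-machine create --help` for your version; all addresses and names below are placeholders.

```shell
# Deploy a VCH with a static IP and an explicit DNS server (placeholder values).
./vic-machine-linux create \
  --target vcenter.lab.local \
  --user 'administrator@vsphere.local' \
  --name vch01 \
  --client-network-ip 192.168.1.50/24 \
  --client-network-gateway 192.168.1.1 \
  --dns-server 192.168.1.10 \
  --no-tlsverify
```

The key point from the issue above: set --dns-server explicitly whenever you use a static IP, so the VCH does not fall back to the gateway for name resolution.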

If you deploy VIC in your environment and encounter any issues, please open an issue on GitHub (https://github.com/vmware/vic/issues). You can also reach out to me via Twitter and I'll try to get back to you as soon as possible.

Reset to Standard vSwitch from Distributed vSwitch on homelab Intel NUC

I just had to reset my homelab Intel NUC's ESXi 6.0 network configuration because I wanted to test a specific setting in vSphere Integrated Containers. Unfortunately, the Intel NUC only has one physical uplink, and that uplink (and VMkernel Portgroup) was configured on a Distributed vSwitch – I needed it on a Standard vSwitch for the test. Migrating the VMkernel Portgroup from the Distributed to a Standard vSwitch was a little challenging, and I didn't want to set up an external monitor to use the Direct Console User Interface (DCUI). But with the help of William's ESXi virtual appliance and some hints in the vSphere documentation, I was able to reproduce the necessary keyboard inputs and perform the change with only a USB keyboard attached to the NUC. Instead of summarizing it only for myself, I thought I'd share it here, as I couldn't find similar instructions on Google.

Please don't do this in a production environment; blindly configuring a system isn't a good idea.

tl;dr: the steps are: F2 – TAB – <root_password> – ENTER – DOWN – DOWN – DOWN – DOWN – ENTER – DOWN – ENTER – F11


What is actually going on if you could view the DCUI? First, you need to press F2 (and potentially "Fn" or similar) to get into ESXi's DCUI system management:

[Screenshot 2016-08-01, 07.58.16]

It will ask you to authenticate first (pressing TAB – <root_password> – ENTER):

[Screenshot 2016-08-01, 08.15.31]

Then, you need to go to "Network Restore Options" in the System Customization menu (pressing DOWN – DOWN – DOWN – DOWN – ENTER):

[Screenshot 2016-08-01, 07.58.48]

And in "Network Restore Options", you'll have the option to "Restore Standard Switch" (pressing DOWN – ENTER – F11):

[Screenshot 2016-08-01, 07.59.11]

After selecting "Restore Standard Switch", you'll need to confirm a dialog with F11, and then a new vSwitch will be created on your host. Mine worked like a charm: I found a new Standard vSwitch with vmk0 using my "old" ESXi management IP address.

Free VMware Online Technology Forum 2015 on Wednesday, November 25, 2015

The free VMware Online Technology Forum takes place on November 25 from 10:00-14:30 CET!

After a keynote by Joe Baguley, VMware CTO for EMEA, there are several exciting breakout session tracks with prominent speakers:

  • Software-Defined Data Center: Infrastructure (What’s new in vSphere, vRealize Operations Insight 6.1, EVO:RAIL 2.0, EVO SDDC, Virtual SAN, Virtual Volumes, Site Recovery Manager)
  • Software-Defined Data Center: New Services (vRealize Automation 7.0, VMware Integrated OpenStack 2.0, Cloud-Native Applications & Containers, vRealize Business Update, DevOps with vRealize CodeStream)
  • Software-Defined Networking (NSX 6.2 Update, Network Functions Virtualization (NFV), Micro-Segmentation & NSX Security Partner Integrations, Cross-Data Center NSX, NSX & vRealize Automation)
  • Hybrid Cloud (What’s new in vCloud Air Disaster Recovery, VMware Continuent replication for Oracle, Deep-Dive on vCloud Air Advanced Networking Services, …)
  • Business Mobility (AirWatch 8.1, VMware User Environment Manager Deep-Dive, VMware Horizon Flex, What’s new in Horizon (View) 6.2, Horizon Air, …)

There will also be Hands-On Labs and an Expert Chat Zone.

Further links:

Performance tuning of Telco and NFV workloads

Today, VMware released a new technical whitepaper for performance tuning of Telco and NFV workloads in vSphere. You can download the paper here: https://www.vmware.com/resources/techresources/10479



From the document description:

The vSphere ESXi hypervisor provides a high-performance and competitive platform that effectively runs many Tier 1 application workloads in virtual machines. By default, ESXi has been heavily tuned for driving high I/O throughput efficiently by utilizing fewer CPU cycles and conserving power, as required by a wide range of workloads.

However, Telco and NFV application workloads are different from the typical Tier 1 enterprise application workloads, in that they tend to be any combination of latency sensitive, jitter sensitive, or demanding high packet rate throughputs or aggregate bandwidth, and therefore need to be tuned for best performance on vSphere ESXi.

This white paper summarizes the findings and recommends best practices to tune the different layers of an application’s environment for Telco and NFV workloads.

VMware Launch Event – Feb 2, 2015

What an exciting February! I just want to share some of my initial highlights from last night’s launch of vSphere 6.0, Virtual SAN 6.0, VMware Integrated OpenStack, and much more! There is so much content – make sure to check out https://www.vmware.com/now for additional broadcasts and materials. You will also find lots of tweets (#VMW28days) and amazing blog posts on the well-known blogs.

vSphere 6.0

Virtual SAN 6.0

https://twitter.com/chuckhollis/status/562356325175525376

https://twitter.com/CaptainVSAN/status/562364473663836160

https://twitter.com/SanDiskDataCtr/status/562392713648017409

VMware Integrated OpenStack

Hands-On Labs Updates