Work from home: my home office setup and gadgets

I have been working from a home office for over 10 years now. But when travel stopped due to COVID-19, a lot changed even for me. This article is the beginning of a short blog series in which I’ll highlight some of the tools and practices that work for me.

While it was always an option, video conferencing and online collaboration became the new default overnight, and it seems like these trends are not going away anytime soon. Given my role, I spend lots of time in meetings with customers, partners & colleagues. I remember the old days when “virtual meetings” were “conference calls” and audio quality was the least common denominator of the audio codecs of the participants dialing in to a bridge. I can’t imagine going through six months of crappy conference calls, so I am very grateful for the reliable and high-quality platforms that Zoom and MS Teams have offered us in these difficult times. We have left the dark ages of conference calls, and audio is now typically transmitted over a broadband IP connection – even when I join a Zoom meeting on my phone, I don’t even consider clicking the “call-back” option to join the audio anymore. Even on the road, Voice-over-IP stability and quality dramatically outperform traditional phone calls.

My personal experience has been that better audio quality has a very positive impact on productivity & focus and also provides a more inclusive environment. If people have a hard time following a presentation or conversation, a virtual meeting can become more exhausting than necessary. And with bad quality, people with hearing issues might not even be able to fully participate in an active conversation. Therefore, I consider it a courtesy to my fellow meeting participants to bring the best possible experience to the virtual conference table.

Before COVID-19, I used a pretty standard Jabra headset and audio quality was average – but I also didn’t spend this much time on video conferences back then. Since I upgraded my home office setup a few months ago, I have received lots of positive feedback – and questions about the equipment I use. So here we are 🙂

Webcam: I am among the lucky ones that got a decent webcam when all this started. I use a Logitech Brio Ultra HD Pro WebCam that is mounted to the top of my monitor. It’s a decent device – even though I sometimes have the impression the camera has issues with focus.

Brio Ultra HD Pro WebCam
(Image from Logitech)

Light: my office has a decent-sized window with lots of natural light coming in – but only on one side. So I put up pretty regular LED uplights on the other side of the room to get better light coverage from both sides. And above my webcam & monitor, there is an Elgato Key Light Air because… well, it’s there now and it works. It fits nicely with the Elgato Stream Deck panel that I use for some desk automation – but that’s a different story.

Elgato Key Light Air
(Image from Elgato)

Audio: the audio setup has been a little more complicated. I experimented with a few things over time and looked at several Blue microphones, for example, but wasn’t 100% convinced. Coincidentally, there is this company named “Sennheiser” (you might have heard of them ;-)) which has its global HQ not too far from where I live. And since Sennheiser has been equipping major opera houses, live broadcasting events and artists like Ed Sheeran with high-quality microphones for decades now, I was sure they must have something for upping my Zoom calls as well. And what can I say? It’s been love at first sight.

So a Sennheiser Handmic Digital is now part of my home office equipment, and I mounted it onto a standard microphone arm. What impressed me right away is how easy it is to use – the “plug and play” promise is not just marketing. My MacBook recognized the device immediately and I have not configured anything special; it’s just a new audio device. The digital experts at Apogee provide the technology for the digital audio converter and pre-amp, consolidated into a slick all-metal body. It comes with USB as well as Apple Lightning connectivity. My dear and beloved travel companion for more than 4 years, a Sennheiser PXC 550 Wireless, as well as a basic 2.1 Logitech speaker setup, serve me well on the audio consumption side.

Microphone comparison: MacBook, Webcam, Jabra headset, Sennheiser
Microphone comparison: MacBook vs. Sennheiser
Sennheiser Handmic Digital (Picture from Sennheiser)
HAUEA Microphone Arm (Picture from Amazon)
Sennheiser PXC 550 Wireless (Picture from Sennheiser)

Thanks for reading! Feel free to reach out via Twitter for comments or discussions!

VMware Project Pacific – collection of materials

Blogposts:

VMworld US 2019:

VMworld Europe 2019 sessions:

  • HBI1452BE – Project Pacific: Supervisor Cluster Deep Dive – STREAM DOWNLOAD
  • HBI1761BE – Project Pacific 101: The Future of vSphere – STREAM DOWNLOAD
  • HBI4500BE – Project Pacific: Guest Clusters Deep Dive – STREAM DOWNLOAD
  • HBI4501BE – Project Pacific: Native Pods Deep Dive – STREAM DOWNLOAD
  • HBI4937BE – Introducing Project Pacific: Transforming vSphere into the App Platform of the Future – STREAM DOWNLOAD
  • KUB1840BE – Run Kubernetes Consistently Across Clouds with Tanzu & Project Pacific – STREAM DOWNLOAD
  • KUB1851BE – Managing Clusters: Project Pacific on vSphere & Tanzu Mission Control – STREAM DOWNLOAD

Podcasts:

Labs / Hands-On:

  • HOL-2013-01-SDC – Project Pacific – Lightning Lab: https://labs.hol.vmware.com/HOL/catalogs/lab/6877

Other interesting sources:

Feel free to reach out if you are missing any interesting sessions here – happy to update this post anytime! @bbrundert

2019-05-30 – Cloud Native Short Takes

KubeCon + CloudNativecon Barcelona 2019 & related announcements

Other community updates

Deploying kubeapps helm chart on VMware Enterprise PKS (lab deployment!)

With the recent announcement of VMware and Bitnami joining forces, I wanted to revisit the kubeapps project on Enterprise PKS earlier today. I followed the community documentation but initially ran into some smaller issues with the MongoDB deployment (see my GitHub comments here).

UPDATE: At first I thought you needed to enable privileged containers in PKS, but actually you don’t have to do that! There was a typo in my configuration which led to an unknown flag for the MongoDB deployment. I used the flag “mongodb.securityContext.enable=false” when deploying the Helm chart, but it should have been “mongodb.securityContext.enabled=false”. Thanks to Andres from the Bitnami team for catching this! The instructions below have been updated!

Install Helm

Add the bitnami repo:

helm repo add bitnami https://charts.bitnami.com/bitnami

Add a “kubeapps” namespace to deploy into

kubectl create namespace kubeapps

Add a Service Account to Tiller

vi rbac-config-tiller.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
---
kubectl create -f rbac-config-tiller.yaml

Leverage newly created service account for Tiller:

helm init --service-account tiller

Create Service account for kubeapps-operator

kubectl create serviceaccount kubeapps-operator 

kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=default:kubeapps-operator

kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o jsonpath='{.data.token}' | base64 --decode

Copy the secret for use in the kubeapps dashboard later on.

Since NSX-T brings an out-of-the-box capability for exposing kubeapps via an external IP address, we can use a LoadBalancer service and skip the port-forwarding section of the documentation. Following what I found in another bug report, I also set some extra flags to disable IPv6:

helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
--set frontend.service.type=LoadBalancer \
--set mongodb.securityContext.enabled=false \
--set mongodb.mongodbEnableIPv6=false

After a few minutes, the deployed services & deployments should be up and running:
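To check on the rollout from the CLI, here is a minimal sketch – the helper function and the assumption that the main deployment is named “kubeapps” are mine, so adjust names to whatever `kubectl get deployments -n kubeapps` shows in your environment:

```shell
# Hypothetical helper: block until a deployment in a given namespace reports ready.
# Requires kubectl access to the cluster; the deployment name is an assumption.
wait_for_deployment() {
  local ns="$1" deploy="$2"
  kubectl -n "$ns" rollout status "deployment/${deploy}" --timeout=300s
}

# Usage (uncomment with a reachable cluster):
# wait_for_deployment kubeapps kubeapps
# kubectl get pods,svc -n kubeapps
```

The external IP handed out by NSX-T shows up in the `EXTERNAL-IP` column of the frontend service once the LoadBalancer is provisioned.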

Then follow part three of the instructions to access the dashboard.

2019-05-13 – Cloud Native Short Takes

Hello everyone and welcome to my first Cloud Native Short Take. Following the spirit of my previous efforts, I’d like to share some interesting links and observations that I came across recently. So, let’s get right into it:

  • Red Hat Summit carried some interesting updates for customers that run OpenShift on VMware today or plan to do it in the future. There was a joint announcement of a reference architecture for OpenShift on the VMware SDDC. Read more about it on the VMware Office of the CTO Blog, the VMware vSphere Blog as well as the Red Hat Blog.
  • Speaking of announcements, GitHub just announced “GitHub Package Registry” – a new service that will allow users to bring their packages right to their code. As GitHub puts it: “GitHub Package Registry is a software package hosting service, similar to npmjs.org, rubygems.org, or hub.docker.com, that allows you to host your packages and code in one place.”
  • My friends at Wavefront launched a new capability around observability in microservices land. Check out their blogpost around Service Maps in their Wavefront 3D Observability offering that combines metrics, distributed tracing and histograms. There is also a pretty cool demo on Youtube linked from that post – it’s beautiful!
  • Following the motto „Kubernetes, PKS, and Cloud Automation Services – Better Together!“, the VMware Cloud Automation Services team released a beta integration with Enterprise PKS. Read more about it on their blog and watch the webinar for more details.
  • My friend Cormac is a fantastic resource on all things cloud-native storage these days. And thankfully, he shares lots of his own discoveries on his blog. His latest post is focused on testing Portworx’s STORK for doing K8s volume snapshots in an on-prem vSphere environment. Read more about it here. I’m looking forward to the next post, which will include some integration testing with Velero.
  • Speaking of Velero (formerly known as Ark), this project is heading to a version 1.0 release! I am very excited for the team! You can find the first Release Candidate here.
  • And coming back to Cormac’s blog – he just released a “Getting started with Velero 1.0-RC1” blogpost with his test deployment running Cassandra on PKS on vSphere (leveraging Restic).
  • The Kubernetes 1.15 enhancement tracking is now locked down. You can find the document on Google Docs.
  • I came across an interesting talk on InfoQ titled „The Life of a Packet through Istio“
  • Another interesting announcement came from Red Hat and Microsoft around a project called KEDA. KEDA “allows for fine grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition”. A very interesting project – check out the blogpost and a TGIK episode from Kris Nóva last Friday.
  • There is some useful material around the Certified Kubernetes Administrator exam in this little study guide
  • Oh, and speaking of enablement: I can only recommend you check out the freshly published book “Cloud Native Patterns” by the amazing Cornelia Davis on Manning.com. I have been following the development of that book via the “MEAP” program and it’s a pretty great source of information!
  • Several thoughts on choosing the right Serverless Platform

Restarting “CNA Weekly” as “Cloud Native Short Takes” & shutting down the newsletter

First of all, I’d like to thank all of you very much for your interest in my “CNA weekly”.
As you might have noticed, I haven’t shared any updates recently. That’s not because there isn’t any news – it’s more related to the format.

After reconsidering various options, I decided to delete my tinyletter account and continue (actually restart) my efforts on my blog at http://bit.ly/cna-shorttakes. You can subscribe via RSS (http://blog.think-v.com/?feed=rss2), e.g. via Feedly (if you don’t know it, check it out!), or follow me on Twitter (https://twitter.com/bbrundert). Other options to continue receiving the content via email include RSS-to-email services like Blogtrottr or IFTTT.

Thanks again to all of you! 

Take care,
Bjoern

Operationalizing VMware PKS with BOSH – how to get started

I have installed VMware PKS in a variety of environments, and I typically show something that helps Platform Operators running PKS dive even deeper into the status of PKS components beyond the pks CLI. One of the key lifecycle components in PKS is BOSH: it deploys the Kubernetes masters and workers and performs a number of other tasks. So how do you get access to BOSH in the easiest way?

Step 1)

  • Login to the Ops Manager VM via ssh: ssh ubuntu@your.opsmanager.com  

Step 2)

  • Open Ops Manager and click on the BOSH Director tile: 
  • Click on the “Credentials” tab and search for “BOSH Commandline Credentials”:
  • You will see an output similar to this one:
    {"credential":"BOSH_CLIENT=ops_manager 
    BOSH_CLIENT_SECRET=ABCDEFGhijklmnopQRSTUVWxyz 
    BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate 
    BOSH_ENVIRONMENT=192.168.1.100 bosh "}
  • Copy and paste that line and reformat it the following way:
    BOSH_CLIENT=ops_manager 
    BOSH_CLIENT_SECRET=ABCDEFGhijklmnopQRSTUVWxyz 
    BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate 
    BOSH_ENVIRONMENT=192.168.1.100

     

  • The easiest way to get started every time is to make it part of your .bashrc configuration by doing the following:
    • edit your .bashrc and append the outputs from above like this:
    • export BOSH_CLIENT=ops_manager 
      export BOSH_CLIENT_SECRET=ABCDEFGhijklmnopQRSTUVWxyz 
      export BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate 
      export BOSH_ENVIRONMENT=192.168.1.100
    • logout and login again (or just run the export commands on the CLI manually once)
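As a quick sanity check that the shell picked everything up, something like this works (the values below are the example placeholders from above, not real credentials – with a reachable director, `bosh env` should then authenticate without any extra flags):

```shell
# Example placeholder values from this post – replace with your own credentials.
export BOSH_CLIENT=ops_manager
export BOSH_CLIENT_SECRET=ABCDEFGhijklmnopQRSTUVWxyz
export BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate
export BOSH_ENVIRONMENT=192.168.1.100

# With a reachable director, this should authenticate using the variables above:
# bosh env
echo "Targeting BOSH director ${BOSH_ENVIRONMENT} as ${BOSH_CLIENT}"
```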

 

  • Some example commands on how to interact with BOSH (and a nice cheat sheet at https://github.com/DennyZhang/cheatsheet-bosh-A4):
  • bosh deployments
    PKS=$(bosh deployments | grep ^pivotal | awk '{print $1;}')
    bosh -d $PKS vms
    bosh -d $PKS instances
    bosh -d $PKS tasks
    bosh -d $PKS tasks -ar
    bosh -d $PKS task 724
    bosh -d $PKS task 724 --debug
    
    CLUSTER=$(bosh deployments | grep ^service-instance | awk '{print $1;}')
    bosh -d $CLUSTER vms
    bosh -d $CLUSTER vms --vitals
    bosh -d $CLUSTER tasks --recent=9
    bosh -d $CLUSTER task 2009 --debug
    bosh -d $CLUSTER ssh master/0
    bosh -d $CLUSTER ssh worker/0
    bosh -d $CLUSTER logs
    bosh -d $CLUSTER cloud-check

     

  • Advanced users: you can also install the BOSH CLI on an admin VM and run it from there:
    • Download from https://github.com/cloudfoundry/bosh-cli/releases
    • Copy the certificate from the Ops Manager VM (/var/tempest/workspaces/default/root_ca_certificate) to your admin VM and edit the .bashrc environment variables accordingly
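A rough sketch of that admin-VM setup – the local filename `opsman_root_ca.pem` is my own choice, and the hostname/IP are the example values from earlier in this post (BOSH_CLIENT and BOSH_CLIENT_SECRET go into .bashrc exactly as in the snippet above):

```shell
# Copy the root CA from the Ops Manager VM (hostname is the example from step 1):
# scp ubuntu@your.opsmanager.com:/var/tempest/workspaces/default/root_ca_certificate ./opsman_root_ca.pem

# Point the locally installed bosh CLI at the director:
export BOSH_CA_CERT="${PWD}/opsman_root_ca.pem"
export BOSH_ENVIRONMENT=192.168.1.100
echo "bosh CLI will trust ${BOSH_CA_CERT} and target ${BOSH_ENVIRONMENT}"
```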

See you at VMworld in Barcelona

I look forward to another exciting VMworld in Barcelona. If you want to meet during the event, ping me on Twitter or find me here in person:
  • Monday: Run Kubernetes on VMware Workshop prior to VMworld – presenting on Open Source Kubernetes fundamentals from 10:15-11:15 – and TAM Day Expert Roundtables from 11:15 onwards
  • Tuesday: VMware PKS booth at the VMworld Solutions Exchange from 10:30-14:30
  • Wednesday: VMware PKS booth at the VMworld Solutions Exchange from 10:30-15:00
  • Thursday: VMware PKS booth at the VMworld Solutions Exchange from 10:30-12:30

Have a safe trip and lots of fun! 

#vK8s – friends don’t let friends run Kubernetes on bare-metal

Over the past months, I had multiple conversations on why you would want to virtualize containers or Kubernetes. The “containers already provide some virtualization – why should I do it at the server level as well?” myth has been around for some time now. Before I start addressing this, let me take a quick step back. When I started my career roughly 10 years ago in datacenter operations, virtualization wasn’t mainstream in many environments. I learned a lot about operating physical machines before I got to work on virtual infrastructures at scale. I also worked with multiple vendors and used several “Lights Out Management” solutions and their basic automation capabilities to get my hardware up and running. But it was always a “yes, it’s getting easier from now on” moment when vSphere was ready for configuration. While I enjoyed working in operations, I was always happy to set something up without plugging in cables or working on a server in the datacenter. 

I have worked with customers that fully embraced virtualization and have been 100% virtualized for years. They have benefited so much from this move and were able to simplify many of their operational tasks along the way. Even where they chose a 1:1 mapping of a few extremely demanding VMs to a given host, this was still the better option. Having a consistent infrastructure and operational framework outweighs the potential drawbacks or “virtualization overhead” (another myth) if you look at the bigger picture. Even though I haven’t been working in operations for some time now, I still remember what it means to be called during the night or deal with spontaneous changes in plans/projects all the time. And businesses and therefore IT are only moving faster – automation, “software-defined” and constant improvements should be part of everyone’s daily business in operations.

For me, this applies to all workloads – from your traditional legacy applications to modern application runtime frameworks such as Kubernetes or event-driven architectures that are leveraging Functions-as-a-Service capabilities. Most of them co-exist all the time and it’s not a one-or-the-other conversation but an AND conversation. Even highly demanding workloads such as core telco applications are put on virtual infrastructure these days, enabled by automation and Open Source API definitions. All of these can be operated on a consistent infrastructure layer with a consistent operational model. Infrastructure silos have been broken down over the past decade and VMware has invested a lot to make vSphere a platform for all workloads. So when someone mentions bare-metal these days all I can ask myself is „why would I ever want to go back“? I sometimes wonder if all the challenges that virtualization took away have simply been forgotten – it just ran too well.

So what are my personal reasons to run containers on a virtual infrastructure & vSphere specifically?

  1. Agility, Independence & Abstraction: scale, repair, lifecycle & migrate underlying components independently from your workloads; if you ever worked in operations, this is daily business (datacenter move, new server vendor selected, major storage upgrades, … there are tons of reasons why this is still a thing)
  2. Density: run multiple K8s clusters/tenants on the same hardware cluster and avoid idle servers, e.g. due to N+1 availability concepts
  3. Availability and QoS: you can plan for failures without compromising density, you can even ensure SLOs/SLAs by enforcing policies (networking, storage, compute, memory) that will also be enforced during outages (NIOC, SIOC, Resource Pools, Reservations, Limits, …)
  4. Performance: better-than-physical performance & resource management (core ESXi scheduling, DRS & vMotion, vGPUs, …)
  5. Infrastructure as Code: automate all the things on an API-driven Software Defined Datacenter stack
  6. Security & Isolation: yep, still a thing
  7. Fun fact: even Google demos K8s on vSphere as part of their “GKE on-prem” offering 😉 

There has been a ton of material published around this topic recently (and some awesome foundational work by Michael Gasch, incl. his KubeCon talk). I want to list a few of the public resources here:

Introducing: #vK8s

So, no matter what your favorite Kubernetes framework is these days – I am convinced it runs best on a virtual infrastructure and, of course, even better on vSphere. Friends don’t let friends run Kubernetes on bare-metal. And what hashtag could summarize this better than something short and crisp like #vK8s? I liked this idea so much that I created some “RUN vK8s” images (inspired by my colleagues Frank Denneman and Duncan Epping – guys, it’s been six years since RUN DRS!) that I want to share with all of you. You can find the repository on GitHub – feel free to use them wherever you like. 

CNA weekly #009

The good thing about flight delays and spending time in hotel rooms is that it finally gives me the opportunity to do some long overdue work on the CNA weekly. There are so many things that I want to share in this edition and I hope you’ll find it useful again.

Let me start with a loud shout-out to the global Harbor community. I am so extremely happy to see this great open source project receiving some well-deserved recognition: Harbor joins Cloud Native Computing Foundation (CNCF) and is now the newest adopted sandbox project!

As many of you know, besides its highly successful existence in the Open Source community, Harbor is also an important piece in VMware’s Cloud-Native Applications efforts, specifically in vSphere Integrated Containers as well as Pivotal Container Service. Both of them saw several updates since the last edition of the weekly: PKS 1.1 is now available (incl. K8s 1.10, Multi Availability Zone support, Multi-Master in beta, …) and VIC 1.4 has also been released. Check out the sections below for more details and links to the downloads.

But wait, there is more: VMware also announced a new cloud service called VMware Kubernetes Engine (VKE). VKE will be a multi-cloud managed Kubernetes-as-a-Service offering with some pretty unique features like the “Smart Cluster” implementation that picks the optimal instance types for your K8s cluster, and much more. Right now it is built natively on AWS, but it will head to Azure as well – and you can manage both with the same set of policies! Learn more about VKE in the links below – you can also sign up for the beta there.

Another topic that is very close to my heart: how do you want to run your containers and platforms? When I started my career in IT in a large organization, I quickly learned the value and benefits that virtualization brings not only to the consumers but also to the operators of the infrastructure. And running containers is no exception here. Make sure to look into a great new whitepaper (“Containers on Bare-Metal or Virtual Machines?”) and look out for a must-watch VMworld 2018 session presented by Michael Gasch and Frank Denneman.

But let’s move on to some content:

Open Source & Community updates

Harbor

Pivotal Container Service (PKS)

VMware Kubernetes Engine

vSphere Integrated Containers

Function-as-a-Service & Serverless

Platform Reliability Engineering & Operations

Other news from VMware

Keeping it fun