Podcast: We need to talk about Private Cloud

All platforms:
– Spotify: https://open.spotify.com/episode/0RdE5A12nz4hGvQjDRXJKJ?si=WwWtBVcSQn6EEdsE84G-6A
– Apple Podcasts: https://podcasts.apple.com/de/podcast/epic-tech-podcast/id1555109397?i=1000713856281

Modern Private Cloud Launch – VCF9

In this episode of the Epic Tech Podcast, Bjoern Brundert and Christopher Hartung discuss the latest developments at Broadcom and VMware, in particular the launch of Cloud Foundation 9. They shed light on the challenges in IT over the past few years, the need for integration and automation, and the importance of operational processes and reporting. They also cover the role of Kubernetes and resource optimization through NVMe Memory Tiering. Finally, they look at VMware’s innovative strength and investments in the future.

  • 00:00 Intro
  • 01:16 Introduction of Christopher Hartung and his role
  • 02:55 Cloud Foundation 9: the new platform
  • 03:49 Fundamental changes and innovations
  • 06:18 The value of IT and customer focus
  • 09:31 Operational processes and integration
  • 09:53 Reporting and auditing in the cloud
  • 12:38 Patch management and automation
  • 15:09 Consumer experience and VCF Automation
  • 18:37 Policy management and compliance
  • 19:57 Policy management and environments
  • 26:28 Kubernetes and container management
  • 32:32 Innovation and investments in VCF
  • 36:48 NVMe Memory Tiering and efficiency
  • 44:17 Outro

State of app certifications on Kubernetes in 2025

The software industry has undergone significant changes in how applications are deployed and supported, particularly with the rise of containerization and Kubernetes. Traditionally, application vendors certified their software for specific operating system versions. Kubernetes introduced a new paradigm that potentially changes this support model. In this post, I want to examine whether it is time to shift from traditional support models to supporting Kubernetes (API) versions.

Traditional OS-Specific Support Model

How the Traditional Model Works

In the traditional model, software vendors test and certify their applications against specific operating system versions and patch levels. This approach creates a tightly controlled environment where the vendor can ensure compatibility and performance. For enterprise application vendors like SAP, Oracle, or Microsoft, this model has been the standard for decades – with detailed compatibility matrices specifying which application versions are supported on which OS versions and patch levels.

Benefits of the Traditional Model

  1. Predictability and Stability: By limiting supported environments, vendors can thoroughly test their applications in controlled conditions, reducing the risk of unexpected behavior
  2. Clear Support Boundaries: When issues arise, there’s a clear delineation of responsibility between the application vendor, OS vendor, and customer
  3. Simplified Testing: Vendors only need to test against a finite number of OS versions rather than an infinite combination of configurations
  4. Established Processes: Enterprise customers are familiar with this model and have built their IT operations around it

The Kubernetes Paradigm Shift

How Kubernetes Changes Application Deployment

More than 10 years ago, first Docker and then Kubernetes started to introduce an abstraction layer between applications and the underlying infrastructure, using containers to package applications with their dependencies. This creates a more consistent runtime environment regardless of the underlying infrastructure. Kubernetes provides standardized APIs for deploying, scaling, and managing containerized applications across different environments – including fundamental infrastructure abstractions like storage (volumes) or networking capabilities.
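To make the “standardized APIs” point concrete, here is a minimal sketch in Go using the official client-go library (k8s.io/client-go). It deploys a placeholder application – the namespace, name, and image are illustrative assumptions, not from any particular product. The key observation is that this exact call works unchanged against any conformant Kubernetes cluster:

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(3)
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-app"}, // placeholder name
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo-app"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "demo-app"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "demo-app", Image: "nginx:1.27"}, // placeholder image
					},
				},
			},
		},
	}

	// The same standardized API call works against any conformant cluster,
	// regardless of distribution or vendor-specific additions.
	result, err := clientset.AppsV1().Deployments("default").Create(
		context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q\n", result.GetName())
}
```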

The Kubernetes Conformance Testing Process

Kubernetes is governed by the Cloud Native Computing Foundation (CNCF). And the CNCF has established a process to ensure “Kubernetes actually is Kubernetes”. This is called conformance testing. Kubernetes conformance is verified through Sonobuoy, a diagnostic tool that runs the official conformance test suite. The conformance test suite checks whether Kubernetes distributions meet the project’s specifications, ensuring compatibility across different implementations by covering APIs, networking, storage, scheduling, and security. Only API endpoints that are generally available and non-optional features are eligible for conformance testing.

The CNCF has made significant progress in conformance coverage: in 2018, only approximately 11% of endpoints were covered by tests, while as of Kubernetes 1.25, less than 6% of endpoints remain uncovered. This comprehensive coverage means that any CNCF-certified Kubernetes distribution must support the same core APIs, regardless of the underlying infrastructure or vendor-specific additions.
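Since only generally available, non-optional API endpoints are eligible for conformance testing, an application vendor targeting conformant clusters should avoid depending on pre-GA APIs. As a rough illustration (my own sketch, not part of the official conformance tooling), the standard discovery API can be used to list every API group/version a cluster serves and flag alpha/beta versions:

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerGroups lists every API group/version the cluster serves.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			// Alpha/beta versions are not eligible for conformance testing;
			// anything an app depends on should ideally be GA (e.g. "v1").
			stability := "GA"
			if strings.Contains(v.Version, "alpha") || strings.Contains(v.Version, "beta") {
				stability = "pre-GA"
			}
			fmt.Printf("%-45s %s\n", v.GroupVersion, stability)
		}
	}
}
```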

The Case for Kubernetes API-Based Support

At the same time, many application vendors are still applying the traditional OS-specific support model to Kubernetes by certifying their applications for specific commercial Kubernetes distributions. This approach treats each Kubernetes distribution as if it were a different operating system, requiring separate certification and support processes.

Benefits of Supporting Kubernetes API Versions

  1. Simplified Support Model: By focusing on Kubernetes API compatibility, support becomes more straightforward, with clearer boundaries between application issues and infrastructure issues
  2. Improved Portability: By supporting Kubernetes API versions rather than specific distributions, applications become more portable across different environments
  3. Reduced Lock-in Concerns: Some customers worry about vendor lock-in with specific Kubernetes distributions; API-based support gives them assurance that their applications will work in other environments
  4. Reduced Certification Burden: Application vendors can focus on testing against standard Kubernetes APIs rather than multiple distributions, reducing the testing matrix significantly
  5. Faster Innovation: Application vendors can release updates more quickly without waiting for certification across multiple Kubernetes distributions
  6. Alignment with Cloud-Native Principles: This approach better aligns with the cloud-native philosophy of platform independence and portability
  7. Kubernetes API Stability Guarantees: Kubernetes provides strong API stability guarantees, especially for GA (Generally Available) APIs, making them reliable targets for support
  8. Support Ecosystem: Established support channels and partnerships exist between application vendors and Kubernetes distribution providers

Challenges with the Kubernetes API-Based Approach

While this approach has many advantages, it also comes with some potential challenges.

  1. Distribution-Specific Features: Different Kubernetes distributions offer unique features and extensions that applications might leverage
  2. Enterprise Customer Expectations: Many enterprise customers still expect vendor certification for their specific Kubernetes platform
  3. Varying Implementation Details: Despite standardized APIs, implementations can vary across distributions, potentially causing subtle compatibility issues
  4. Security and Compliance Requirements: Enterprise customers often have specific security and compliance requirements that are tied to particular distributions

Practical Considerations for Application Vendors

So what can be done about the situation? And how could the cloud-native community move forward towards a new model? As always in life, a balanced approach might be the best option to get started – and evolve from there.

Implementing a Balanced Approach

  1. Define Core Requirements: Specify minimum Kubernetes version and required capabilities (like RWO volumes or load balancing) rather than specific distributions – see the sketch after this list.
  2. Tiered Support Model: Offer different levels of support – full support for tested distributions and best-effort support for any Kubernetes cluster meeting API requirements.
  3. Leverage Kubernetes Certification: The CNCF Kubernetes Certification Program ensures distributions meet API compatibility requirements, providing a baseline for support.
  4. Container-Focused Testing: Test applications in containers across different environments to ensure consistent behavior regardless of the underlying distribution.
  5. Documentation and Guidance: Provide clear documentation on required Kubernetes features and configuration, enabling customers to self-validate their environments.
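To illustrate the first item, here is a minimal, hedged sketch of what such a vendor preflight check could look like in Go. The minimum version of 1.29 is a hypothetical requirement, and the StorageClass check is a simple stand-in for deeper capability validation (a real check might, for example, provision a test RWO volume):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/version"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// minVersion is a hypothetical vendor requirement, not an official number.
const minVersion = "1.29"

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Check 1: minimum Kubernetes (API server) version.
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	running, err := version.ParseGeneric(info.GitVersion)
	if err != nil {
		panic(err)
	}
	if running.LessThan(version.MustParseGeneric(minVersion)) {
		fmt.Printf("FAIL: cluster runs %s, need at least %s\n", info.GitVersion, minVersion)
		return
	}
	fmt.Printf("OK: cluster version %s >= %s\n", info.GitVersion, minVersion)

	// Check 2: at least one StorageClass is available for persistent volumes.
	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(scs.Items) == 0 {
		fmt.Println("FAIL: no StorageClass found – persistent volumes unavailable")
		return
	}
	fmt.Printf("OK: %d StorageClass(es) available\n", len(scs.Items))
}
```

Shipping a small check like this alongside the application also supports the documentation-and-guidance point above, since customers can self-validate their environments before opening a support case.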

Conclusion & Comment

The transition from OS-specific support models to Kubernetes API-based support represents a significant evolution in application deployment and management. And a huge shift towards choice for customers and end-users. But while there are compelling reasons to adopt a more platform-agnostic approach based on Kubernetes API versions, many application vendors still rely on distribution-specific support.

As a user, I’d challenge application vendors that say “my app is only supported on X”: is it really a super cool cloud-native, container-based workload then? A modern app that promises business benefits, a better operational experience, or better resiliency?

Because if the container-based app can’t run on any conformant Kubernetes, it might actually be better suited for a traditional OS deployment model inside a VM. The operational burden of using and maintaining specific distributions or specific versions of distributions is just adding complexity – where Kubernetes and cloud-native apps actually promised a “next-gen” experience.

Podcast: VMUG UserCon Germany live

I’ve said it repeatedly in various situations at the VMUG: there’s really no better time to start creating new content and sharing it with the community than now. So, I packed my portable podcast studio (my microphone) and spontaneously spoke with some attendees of the VMUG UserCon (and TAM Summit) on site in Frankfurt to gather a few live impressions.

A heartfelt thank you to all the conversation partners who joined in so spontaneously and completely unprepared! It was really a lot of fun!

Enjoy listening – and I always appreciate feedback!

EPIC Tech Podcast: Live impressions from VMUG UserCon Germany 2025

(00:03:24) – Brad Tompkins, VMUG Executive Director

(00:07:35) – Paul Turner, VP Products VCF, Broadcom

(00:16:14) – Cormac Hogan, Distinguished Engineering Architect, Broadcom

(00:21:51) – Duncan Epping, Chief Technologist, Broadcom

(00:27:35) – Stefanie Storch, Sr. Manager Services, Broadcom

(00:30:35) – Sven Soella, Audi

(00:34:40) – Clementine Perreaut, TAM, Evoila

(00:37:22) – VKS Hands-On with Niko

(00:41:11) – Summary and personal impressions

(00:44:12) – Disclaimer (see show notes)


Disclaimer: The Epic Tech Podcast is a personal project of mine, Björn Brundert. All content discussed here is researched and prepared by me to the best of my knowledge and belief. The views and opinions expressed in this podcast are solely my own and do not reflect those of my employer or any other organization. All information is provided solely for general informational and entertainment purposes and does not constitute financial, investment, tax, or legal advice. For individual decisions, please consult a qualified advisor. Investments involve risk – do not make decisions based solely on this podcast.

Back to the blogging

Personally, I have consumed information through blogs and RSS feeds for more than a decade now. I have been on Google Reader and later on Feedly to aggregate interesting sources and build my reading list. Then, many years ago, Twitter became a “content sharing and discussion platform” in my little tech bubble. Many colleagues and friends in the ecosystem shared their content, companies started to produce high-quality blog posts and everyone linked to content of others – and shared pictures of events and happenings. It’s been a really interesting ride.

Over the past few years, there was an even broader growth in information EVERYWHERE: the mainstream adoption of podcasting, “online events” during COVID, live streaming, the continuous growth of YouTube and other video platforms. LinkedIn also became a more prominent place for content sharing as well as content creation. All while core social media constructs were evolving based on open standards: the Fediverse and Mastodon build on a distributed platform approach for social media, and the same applies to Bluesky’s AT Protocol. And all of these new platforms allow cross-platform integration through good old standards like RSS.

If you followed this blog before, you probably noticed there hasn’t been much activity lately. That’s most likely going to change. And I won’t just focus on professional and datacenter-related topics – there is so much interesting stuff going on these days. Over the past few months, I came across many smaller blogs maintained by passionate folks who share their experience in a specific area. And I learned a lot from that. Personally, I want to get back to blogging to contribute and share my experience & personal thoughts. So thanks for reading and see you around soon!

EPIC Tech Podcast: 10 Years of Container Innovation with VMware

In this episode, I look back with Kubernetes experts Alexander Ullah and Robert Guske at almost 10 years of innovation around containers and Kubernetes with VMware technology – from Project Bonneville to the latest VMware Cloud Foundation release.

What has become of the container community’s initial ideas, what role do open standards play in today’s multi-cloud IT world, and where do we go from here? We discussed these questions – along with many anecdotes – in the latest episode of the Epic Tech Podcast.

0:01:35 Introduction of Alexander Ullah

0:06:25 Introduction of Robert Guske

0:12:25 Evolution: how do you consume datacenter resources?

0:17:20 From Infrastructure as Code to Docker

0:19:23 From a local Docker runtime to self-service Kubernetes

0:29:00 Why does virtual Kubernetes make a lot of sense?

0:40:46 vSphere 7 (2020): Kubernetes and vSphere merge

0:49:00 More than Kubernetes clusters: provisioning VMs declaratively via Kubernetes

0:51:09 Does self-service hurt my operational stability?

0:56:37 What’s new in VMware Cloud Foundation 5.2?

1:10:48 The new epic outro

Podcast Theme generated with: https://udio.com

EPIC Tech Podcast: Season Finale

After more than three years, today’s season finale looks back together at more than 30 great episodes. With this short episode, the first season comes to an end.

Soon, the “Epic Tech Podcast” will launch in this very feed. A project by host Björn Brundert about exciting technologies and their application – whether in organizations, in education, or in everyday life.

Thank you for listening so far, and hopefully see you again right here soon! Feedback and ideas are always welcome via LinkedIn or any other contact platform/email.

Open Networking Hour – career in tech

Diversity, equality and inclusion should be super important topics for all of us. Not just to have more diverse perspectives in a team or company. Being inclusive needs to be “built-in” in our actions, not just an afterthought. And equal pay should be the standard for equal work – it’s that simple. These are fundamental principles around how VMware operates. And while we do a lot from a corporate perspective, I believe every individual action contributes to the greater good.

September 2022 marked twelve years at VMware for me. I have the privilege to work with so many amazing people across organizations and get to see so many interesting things in my daily work. Also, I am always inspired when I meet all these great people from really diverse (lots of non-tech) backgrounds who come together in IT – and how this diversity makes literally everything so much better! I’d love to give something back, share my experience – and also listen and learn from all the fantastic experiences and perspectives out there.

Tell me more… what experience can you share, Bjoern?

For those of you who don’t know me: I studied computer science and telecommunications, I was a user (and VMware customer) for several years, I support customers on their overall digital transformation journey as well as specific IT projects, I am a regular speaker at various types of events, I host a podcast, I also get to host a regional meetup around cloud-native technologies such as Kubernetes, and I get to spend time in practical Design Thinking exercises. My current role is Principal Technologist at VMware and I am part of the leadership team for Central Europe, Middle East and Africa.

Here is my offer to you! 

I have been thinking about how to help people outside of VMware with their career in tech and I decided to offer one hour every week as an “Open Networking Hour” (if you have a better name, let me know :)) to anyone on my network or e.g. on my network’s network. What do I mean by that? Someone you know (or you!)…

  • … is looking for ideas or insights around how to start or build out a career in tech?
  • … might not really understand what a certain job in tech actually is about and that many don’t require programming or computer science backgrounds at all?
  • … wants to know more about VMware or about working at VMware?
  • … would benefit from an introduction to someone on my network?
  • … just wants to bounce ideas?

How do we do this? 

Let’s have a chat! I set up a Calendly account and offer two 30-minute slots each week. Once you sign up there, you’ll get a link to a Zoom meeting for the date you selected. And to strengthen diversity specifically, one of the two slots is reserved for women or non-binary people: https://calendly.com/bbrundert

To all my fellow VMware colleagues that are interested in something similar, please feel free to reach out via our internal channels!

I am really excited to see where this goes and look forward to hopefully lots of interesting conversations! Don’t be shy!

#vK8s 2021 edition – friends don’t let friends run Kubernetes on bare-metal

Three years ago, I wrote a blogpost on why you wouldn’t want to run Kubernetes on bare-metal. VMware has released a number of platform enhancements since then, and there is a lot of updated material and feedback – including from customers. So what are (my personal) reasons to run containers and Kubernetes (short “K8s”) on a virtual infrastructure & vSphere in particular?

Operations: Running multiple clusters on bare-metal is hard

  • Multiple clusters in a virtual environment are a lot easier to run, and each cluster can leverage its own lifecycle policies (e.g. for K8s version upgrades) instead of forcing one bare-metal cluster to upgrade. Running multiple Kubernetes versions side-by-side might already be a requirement – or become one in the near future.
  • It also makes lots of sense to run Kubernetes side-by-side with your existing VMs instead of building a new hardware silo and adding operational complexity
  • VMware’s compute platform vSphere is the de-facto standard for datacenter workloads in companies across industries, and operational experience and resources are available across the globe. Bare-metal operations typically introduce new risks and operational complexity.

Availability/Resilience and Quality of service: you can plan for failures without compromising density

  • Virtual K8s clusters could benefit even in “two physical datacenter” scenarios where the underlying infrastructure is spread across both sites. A “stretched” platform (e.g. vSphere with vSAN Stretched Cluster) allows you to run logical three-node Kubernetes control planes in VMs and protect the control plane and workload nodes using vSphere HA.
  • vSphere also allows you to prioritize workloads by configuring policies (networking, storage, compute, memory) that will also be enforced during outages (Network I/O Control, Storage I/O Control, Resource Pools, Reservations, Limits, HA Restart Priorities, …)
    • Restart a failed or problematic Kubernetes node VM before Kubernetes itself even detects a problem.
    • Provide the Kubernetes control plane availability by utilizing mature heartbeat and partition detection mechanisms in vSphere to monitor servers, Kubernetes VMs, and network connectivity to enable quick recovery.
    • Prevent service disruption and performance impacts through proactive failure detection, live migration (vMotion) of VMs, automatic load balancing, restarts due to infrastructure failures, and highly available storage

Resource fragmentation, overhead & capacity management: single-purpose usage of hardware resources vs. multi-purpose platform

  • Running Kubernetes clusters virtually and using VMware DRS to balance these clusters across vSphere hosts allows the deployment of multiple K8s clusters on the same hardware setup and increases utilization of hardware resources
  • When running multiple K8s clusters on dedicated bare-metal hosts, you lose the overall capability to utilize hardware resources across the infrastructure pool
    • Many environments won’t be able to quickly repurpose existing capacity from one bare-metal host in one cluster to another cluster
  • From a vSphere perspective, Kubernetes is yet another set of VMs and capacity management can be done across multiple Kubernetes clusters; it gets more efficient the more clusters you run
    • Deep integrations with existing operational tools like vRealize Operations allow operational teams to deliver Kubernetes with confidence
  • K8s is only a Day-1 scheduler and does not perform resource balancing based on running pods
    • In case of imbalance on the vSphere layer, vSphere DRS rebalances K8s node VMs across the physical estate to better utilize the underlying cluster and delivers the best of both worlds from a scheduling perspective
  • High availability and “stand-by” systems are cost-intensive in bare-metal deployments, especially in edge scenarios: in order to provide some level of redundancy, some spare physical hardware capacity (servers) needs to be available. In the worst case you need to reserve capacity per cluster, which increases physical overhead (CAPEX and OPEX) per cluster.
    • vSphere allows you to share failover capacity, including strict admission control to protect important workloads across Kubernetes clusters, because the VMs can be restarted and reprioritized e.g. based on the scope of a failure

Single point of integration with the underlying infrastructure

  • A programmable, Software-Defined Datacenter: Infrastructure as Code lets you automate all the things on an API-driven datacenter stack
  • Persistent storage integration would need to be done for each underlying storage architecture individually; running K8s on vSphere lets you leverage already abstracted and virtualized storage devices
  • Monitoring of hardware components is specific to individual hardware choices; vSphere offers an abstracted way of monitoring across different hardware generations and vendors

Security & Isolation

  • vSphere delivers hardware-level isolation at the Kubernetes cluster, namespace, and even pod level
  • VMware infrastructure also enables the pattern of many smaller Kubernetes clusters, providing true multi-tenant isolation with a reduced fault domain. Smaller clusters reduce the blast radius, i.e. any problem with one cluster only affects the pods in that small cluster and won’t impact the broader environment.
  • In addition, smaller clusters mean each developer or environment (test, staging, production) can have their own cluster, allowing them to install their own CRDs or operators without risk of adversely affecting other teams.

Credits and further reading

#vK8s – friends don’t let friends run Kubernetes on bare-metal

So, no matter what your favorite Kubernetes framework is these days – I am convinced it runs best on a virtual infrastructure and of course even better on vSphere. Friends don’t let friends run Kubernetes on bare-metal. And what hashtag could summarize this better than something short and crisp like #vK8s? I liked this idea so much that I created some “RUN vK8s” images (inspired by my colleagues Frank Denneman and Duncan Epping – guys, it’s been NINE years since RUN DRS!) that I want to share with all of you. You can find the repository on GitHub – feel free to use them wherever you like.

Work from home: productivity & tools

In my previous post, I wrote about my home office setup and hardware. Today, I’d like to add a few tools that have helped me over the past few years and specifically over the last couple of months…

Whiteboard: sometimes, standing in front of a physical whiteboard is the beginning of some amazing brainstorming. While I incorporated lots of online tools and virtual whiteboards into my daily workflows, I don’t want to miss my “real” whiteboard anymore. Sometimes, the whiteboard is a quick way to dump ideas, tasks or other “loose ends” from my brain before heading to bed. It’s sometimes the easiest way to get rid of some open thoughts, materialize them somewhere and then categorize and work on them the next day. Especially when working on things in parallel, the amount of ideas and things to consider can be overwhelming – getting them out of my head has become an important strategy in general. For collaboration, an online whiteboard is super helpful. Miro has also done a great job for me and has even replaced my physical whiteboard for some occasions.

Calendar: as mentioned above, I try to dump thoughts, tasks and plans from my memory and persist them in the appropriate format/tool. Events, special dates, deadlines, birthdays, trips, … it all has to be in the work or personal calendar to be helpful for me. Remembering where I have to be next week or next month is not a helpful brain cycle for me – I try to outsource that to a tool. And when a trip or activity requires some preparation in advance, the related efforts have to be planned and documented as To-Do items with a due date on my list (see below) as well. An example from not too long ago: is the passport still valid for the trip to the US? That needs to be checked at least three months in advance. Even better: putting a reminder for six months prior to the passport expiry date directly on the To-Do list…

To-Do-App: I tried many ways to keep track of my to-do items – from minimalist (txt file) to note-taking apps to notes on the physical whiteboard to post-it notes… they all had their shortcomings and issues. Universal access and ease of use are key features for me, as I believe in dumping stuff from my mind into a tool instead of spending time on remembering it. Over the past year or so I have been using Todoist very successfully. Todoist is not only available on all my devices, it also has a very intuitive way to get stuff onto your lists: you basically type in the title of the task, naturally write a date (“tomorrow”, “next tuesday”, “every sunday”, …), add a # followed by the project name, and Todoist makes it so. You can even mention someone if you work in a team (or e.g. with a family member). If I don’t have time to sort a new task out or pick a date, the new task just ends up in an “inbox” that I constantly monitor. You can also dictate tasks into an Apple Watch, which is the most non-disruptive way to get stuff out of my head.

Which brings me back to my concept of getting everything written down. In my to-do list, there are items that are months, even years out. There are recurring items that I do every day or every week. That way, it has become natural to come back to the lists and actually use them. You can separate items out into projects, sections inside a project, and then each task can have sub-tasks. Breaking larger tasks down into smaller items also needs to become a natural effort: if that super important task that will take months to complete is just one item on your list, it will not give you emotional gratification to complete it. But breaking it down into smaller items helps to make and see progress. You can also add files, comments, priorities & reminders – I don’t use all of them, but I use some of them selectively.

I have projects dedicated to “work”, “home”, “personal” and other larger efforts. I even have “template” projects that can be exported and imported. In each of my primary projects, I put a section for long-term as well as repeating tasks so they don’t show up all the time. They’ll only appear in the “today” or “soon” view that I really love in Todoist – an aggregated view across all projects. In one of the recent updates, Todoist also introduced a “boards” view which reminded me a lot of Trello boards – a great way to visualize tasks other than as a list. For long-term motivation, Todoist also has a basic gamification feature called “karma” that tries to motivate you to complete e.g. at least 5 tasks each day or 30 tasks per week. Apple Watch ring completion fans know this helps 😉 Overall, Todoist has been really helpful… (If you feel inspired to use Todoist, I’d appreciate it if you follow this referral link :-))

Which brings me to the last tool I’d like to highlight here: time tracking. Constantly working from home sometimes feels like days are just passing. But how much am I actually working, how much time goes into meetings, how much time goes into email or self-education? It’s not about providing a timesheet to my boss. It’s about insights into where my time goes. In general, “retrospectives” are a great way to better understand, learn and improve in the future. Doing retrospectives after projects but also after individual meetings can be a great tool to constantly improve. But that’s a different topic. I didn’t want to rely on “feelings” or rough estimates alone. I wanted to see where my time is going. A couple of years ago, my wife was playing around with Timeular, but back then it had some technical issues that made her return the device after a few days. But earlier this year, my colleague Robbie mentioned it as well and it caught my attention. A few days later, I had my own Timeular device – which is basically a dice with eight sides. It connects via Bluetooth to your computer or smart device and you can assign categories to each side. There are stickers to put onto the sides. You can also write on them. Or print your own labels.

As soon as you flip it to one side, the Timeular app picks up the signal and starts tracking the corresponding category/activity. The cool thing here is that you can enhance those categories with #-tags or e.g. @-mentions of people. It took me a while and several iterations, but I am happy with my categories and #-tags now (all brainstormed and documented on a Miro board :-)). The Timeular team also just added a cool new keyboard shortcut feature that allows you to start tracking without flipping the device (e.g. when you are not at your desk) or to edit a running session’s category or hashtags without going to the Timeular app. Once you have tracked some efforts, you can interactively generate reports for any timeframe (last week, Sunday till Tuesday, last month, a specific year) and see which categories or tags or people are taking what amount or percentage of your tracked time. So at the end of the week, you can see how much time you actually worked overall, how much time went into certain topics, and if your “feeling” about a week is actually reflected in those numbers. It also gives you a good idea about the number of context switches you do per day. Or when you typically start and finish tracking your day – all including trends over time.
I use the higher-level categories to structure my time tracking into “external facing” (presenting at events, customer or partner meetings, …), “internal in support of a customer” (preparing for a presentation, alignment meetings, …), “internal-internal” (team calls, …) but also e.g. “self-development” (product/company specific, skill development, …), “mentoring” and “PTO”. But I don’t track “breaks” during the day – I just put the tracker in the neutral position so it does not track at all. If I work with customer “ACME Corp”, I tag all work for that customer #acmecorp and Timeular autocompletes that hashtag. The hashtag is used across my “internal” and “external” activities but allows me to break down activities easily in the interactive report. I think you get the idea. The categories don’t have to be static either: you can have more than 8 categories and only “enable” certain categories on the dice for a certain time. I also have a category for “travel” (well, for some day in the future). But I wouldn’t flip the dice during travel – that would simply not work. Instead, I can add timeslots in the app manually when the category is not reflected on the dice – or if I forgot to flip it. The physical device makes it very easy: it just sits on my desk, and having it there is a constant reminder to actually flip it to the correct side. If you are interested, check out timeular.com (UPDATE Nov 16: you can also follow my referral link if you want 😉 …)

Note: I pay for the pro/premium plan of the services that I mentioned above. Some of the features might not be available in a free plan!