Kubecon 2017 – Berlin: a personal report

Following the established practice of ‘open sourcing’ my trip reports for the conferences I attend, I am copying and pasting below my raw comments from the recent Kubecon EMEA 2017 trip.

I did something similar for Dockercon 2016 and Serverlessconf 2016 last year and, judging by the feedback I received, it is apparently something worthwhile.

As always:

  1. these reports contain some facts (which I hopefully got right) plus personal opinions and interpretations.
  2. If you are looking for a properly written piece of art, this is not it.
  3. these are, for the most part, raw notes (sometimes written lying on the floor during those oversubscribed sessions at the conference).

Take it or leave it.

_________________________

Massimo Re Ferre’ – Technical Product Manager – CNA Business Unit @ VMware

Kubecon Europe 2017 – Report

Executive Summary

In general, the event was largely what I expected. We are in the early (but consistent) stages of Kubernetes adoption. The ball is still (mainly) in the hands of geeks (gut feeling: more devs than ops). While there are pockets of effort on the Internet to help the uninitiated get started with Kubernetes, the truth is that there is still a steep learning curve you have to go through pretty much solo. Kubecon 2017 Europe was an example of this approach: you don’t go there to learn Kubernetes from scratch (e.g. there are no 101 introductory sessions). You go there because you know Kubernetes (already) and you want to move the needle by listening to someone else’s (advanced) experiences. In this sense Dockercon is (vastly) different from Kubecon. The former appears to be more of a “VMworld mini-me” at this point, while the latter is still more of a “meetup on steroids”.

All in all, the enthusiasm and the tail winds behind the project are very clear. While the jury is still out on who is going to win in this space, the odds are high that Kubernetes will be a key driving force and a technology that is going to stick around. Among all the projects at that level of the stack, Kubernetes is clearly the one with the most mind-share.

These are some additional (personal) core takeaways from the conference:

  • K8s appears to be extremely successful with startups and small organizations, as well as in pockets of Enterprises. The technology has not been industrialized to the point where it has become a strategic choice (not yet at least). Because of this, the prominent deployment model seems to be “you deploy it, you own it, you consume it”. Hence RBAC, multi-tenancy and security haven’t been major concerns. We are at a stage though where, in large Enterprises, the teams that own these deployments are seeking IT help and support in running Kubernetes for them.

  • The cloud native landscape is becoming messier and messier. The CNCF Landscape slide is doing a disservice to cloud native beginners. It doesn’t serve any purpose other than to officially underline the complexity of this domain. While I am probably missing something about the strategy here, I am puzzled by how the CNCF is creating category-A and category-B projects by listing hundreds of projects in the landscape while only selecting a small subset to be part of the CNCF.

  • This is a total gut feeling (I have no data to back this up) but up until 18/24 months ago I would have said the container orchestration/management battle was among Kubernetes, Mesos and Docker. Fast forward to today, my impression is that Mesos is fading out a bit. The industry seems to be consolidating around two major centers of gravity: one is Kubernetes and its ecosystem of distributions, the other being Docker (Inc.) and their half-proprietary stack (Swarm and UCP). More precisely, there seems to be a consensus that Docker is a better fit and getting traction for those projects that cannot handle the Kubernetes complexity (and consider K8s a bazooka to shoot a fly), while Kubernetes is a better fit and getting traction for those projects that can absorb the Kubernetes complexity (and probably require some of its advanced features). In this context Mesos seems to be in search of its own differentiating niche (possibly around big data?).

  • The open source and public cloud trends are so pervasive in this domain of the industry that they are also changing some of the competitive and positioning dynamics. While in the last 10/15 years the ‘lock-in’ argument was ‘proprietary software’ Vs. ‘open source software’, right now the ‘lock-in’ argument seems to be ‘proprietary public cloud services’ Vs. ‘open source software’. Proprietary software doesn’t even seem to be considered a contender in this domain. Instead, its evil role has been assumed by the ‘proprietary cloud services’. According to the CNCF, the only way you can fight this next level of lock-in is through (open source) software that you have the freedom to instantiate on-prem or off-prem at will (basically de-coupling the added-value services from the infrastructure). This concept was particularly clear in Alexis Richardson’s keynote.

Expo

The Expo was pretty standard and what you’d expect to see. The dominant areas of the ecosystem seem to be:

  • Kubernetes setup / lifecycle (to testify that this is a hot/challenging area)
  • Networking
  • Monitoring

My feeling is that storage is “under-represented” (especially considering the interest/buzz around stateful workloads). There were not a lot of startups representing this sub-domain.

Monitoring, on the other hand, seems to be ‘a thing’. Sematext and Sysdig (to name a few) have interesting offerings and solutions in this area. ‘We have a SaaS version and an on-prem version if you want it’ is the standard delivery model for these tools. Apparently.

One thing that stood out to me was Microsoft’s low profile at the conference (particularly compared to their presence at, say, Dockercon). There shouldn’t be a reason why they wouldn’t want to go big on Kubernetes (too).

Keynote (Wednesday)

There are circa 1,500 attendees at the conference. Judging by the polls during the various breakout sessions, the majority seem to be devs, with a minority of ops (of course the boundaries are a bit blurry in this new world).

The keynote opens up with the news (not news) that Containerd is joining the CNCF. RKT makes the CNCF too. Patrick C and Brandon P get on stage briefly to talk about, respectively, Containerd and RKT.

Aparna Sinha (PM at Google) gets on stage to talk about K8s 1.6 (just released). She talks about the areas of improvement (namely support for 5,000 hosts, RBAC, and dynamic storage provisioning). One of the new (?) scheduler features is “taints” / “tolerations”, which may be useful to segment specific worker nodes for specific namespaces, e.g. dedicating nodes to tenants (this needs additional research).
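For my own notes (this is my illustration, not something shown in the keynote): the admin taints a set of worker nodes, and only pods that carry a matching toleration can be scheduled onto them. A minimal sketch of the two objects involved, using the Kubernetes Go API types as they look today; the “tenant=team-a” key/value is a made-up example:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Taint the admin would put on a set of worker nodes: pods without a
	// matching toleration will not be scheduled there.
	taint := corev1.Taint{
		Key:    "tenant",
		Value:  "team-a",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Matching toleration that goes into the pod spec of that tenant's workloads.
	toleration := corev1.Toleration{
		Key:      "tenant",
		Operator: corev1.TolerationOpEqual,
		Value:    "team-a",
		Effect:   corev1.TaintEffectNoSchedule,
	}

	fmt.Printf("node taint: %+v\npod toleration: %+v\n", taint, toleration)
}
```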

Apparently RBAC has been contributed largely by Red Hat, something I found interesting given that this is an area where they try to differentiate with OpenShift.

Etcd version 3 gets a mention as having quite a big role in the K8s scalability enhancements (note: some customers I have been talking to are a bit concerned about how to [safely] migrate from etcd version 2 to etcd version 3).

Aparna then talks about disks. She suggests leveraging claims to decouple the K8s admin role (infrastructure aware) from the K8s user role (infrastructure agnostic). Dynamic storage provisioning is available out of the box and it supports a set of back-end infrastructures (GCE, AWS, Azure, vSphere, Cinder). She finally alludes to some network policy capabilities being cooked up for the next version.
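To make the admin/user decoupling concrete, here is my own sketch (not from the keynote): the admin defines a StorageClass that maps a friendly name to a concrete provisioner, and the user only files a claim against that name. Expressed with the Go API types of roughly that vintage (the in-tree vSphere provisioner shown here has since been superseded by CSI drivers, and some field types have shifted in newer client libraries); all names, sizes and parameters are made up:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Admin side (infrastructure aware): a StorageClass that maps a friendly
	// name to a concrete provisioner and its parameters.
	class := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "fast"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters:  map[string]string{"diskformat": "thin"},
	}

	// User side (infrastructure agnostic): a claim that only names the class
	// and a size; the matching volume gets provisioned dynamically.
	className := "fast"
	claim := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data-claim"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("20Gi"),
				},
			},
		},
	}

	fmt.Printf("%+v\n%+v\n", class, claim)
}
```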

I will say that tracking where all (old and new) features sit on the spectrum of experimental, beta, supported (?) is not always very easy. Sometimes a “new” feature is talked about, only to find out that it has just moved from one stage (e.g. experimental) to the next (e.g. beta).

Clayton Coleman from Red Hat talks about K8s security. Interestingly enough, when he polls about how many people stand up and consume their own Kubernetes cluster, a VAST majority of users raise their hand (assumption: very few are running one or a few centralized K8s instances that users access in multi-tenant mode). This is understandable given that RBAC has only just made it into the platform. Clayton mentions that security in these “personal” environments isn’t as important, but as K8s starts to be deployed and managed by a central organization for users to consume, a clear definition of roles and proper access control will be of paramount importance. As a side note, with 1.6 cluster-up doesn’t enable RBAC by default but Kubeadm does.
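As a reference for that “clear definition of roles” point (my own sketch, not something shown in the session): with RBAC, a namespace-scoped role plus a binding looks roughly like this, expressed with the v1 Go types as they stand today (1.6 shipped RBAC as beta); the namespace, user and role names are made up:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A namespace-scoped role that can only read pods...
	role := rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: "team-a"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}

	// ...bound to a single user within that namespace.
	binding := rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "read-pods", Namespace: "team-a"},
		Subjects: []rbacv1.Subject{{
			Kind: rbacv1.UserKind,
			Name: "jane",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     "pod-reader",
		},
	}

	fmt.Printf("%+v\n%+v\n", role, binding)
}
```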

Brandon Philips from CoreOS is back on stage to demonstrate how you can leverage a Docker registry to push/pull not only Docker images but entire Helm apps (cool demo). Brandon suggests the standard and specs for doing this are currently being investigated and defined (hint: this is certainly an area that project Harbor should explore and possibly endorse).

Keynote (Thursday)

Alexis Richardson does a good job at defining what Cloud Native is and the associated patterns.

CNCF is “project first” (that is, they prefer to put forward actual projects rather than just focus on abstract standards -> they want to aggregate people around code, not standards).

Bold claim that all projects in the CNCF are interoperable.

Alexis stresses the concept of “cloud lock-in” (Vs. generic vendor lock-in). He is implying that there are more people going to AWS/Azure/GCP to consume higher-level services (OSS operationalized by the CSP) than there are people using, and being locked in by, proprietary software.

Huawei talks about their internal use case. They are running 1M (one million!) VMs. They are on a path to reduce those VMs by introducing containers.

Joe Beda (CTO at Heptio) gets on stage. He talks about how to grow the user base 10x. Joe claims that K8s contributors are more concerned with theoretical distributed-systems problems than with solving simpler practical problems (quote: “for most users out there the structures/objects we had 2 or 3 versions ago are enough to solve the majority of the problems people have. We kept adding additional constructs/objects that are innovative but didn’t move the needle in user adoption”).

Joe makes an interesting comment about finding a good balance between solving product problems in the upstream project Vs. solving them by wrapping specific features into K8s distributions (a practice he described as “building a business around the fact that K8s sucks”).

Kelsey Hightower talks about Cluster Federation. Cluster Federation is about federating different K8s clusters. The Federation API control plane is a special K8s client that coordinates dealing with multiple clusters.

Breakout Sessions

These are some notes I took while attending breakout sessions. Some sessions I physically could not get into (rooms were sometimes completely full). I skipped some of the breakouts as I opted to spend more time at the expo.

Containerd

This session was presented by Docker (of course).

Containerd was born in 2015 to control/manage runC.

It is new as a project in terms of governance (but the code is “old”). It’s a core container runtime on top of which you can build a platform (Docker, K8s, Mesos, etc.).

The K8s integration will look like:

Kubelet -> CRI shim -> containerd -> containers

No (opinionated) networking support, no volumes support, no build support, no logging management support etc. etc.

Containerd uses gRPC and exposes gRPC APIs.

There is the expectation that you interact with containerd through the gRPC APIs (hence via a platform above). There is a containerd CLI, but it is NOT expected to be a viable way for a standard user to deal with containerd. That is to say… containerd will not have a fully featured/supported CLI. It’s code to be used/integrated into higher-level projects (e.g. Kubernetes, Docker, etc.).
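To make the “consumed programmatically” point concrete, here is my own sketch (not from the session) of what driving containerd through its Go client looks like as the client stands today; the socket path, namespace and image reference are assumptions on my part:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd gRPC socket (default path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd partitions its resources by namespace; "demo" is arbitrary.
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	// Pull an image and unpack it into a snapshot.
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create (but do not start) a container from that image.
	container, err := client.NewContainer(ctx, "redis-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("redis-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	log.Printf("created container %s", container.ID())
}
```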

gRPC and container metrics are exposed via a Prometheus endpoint.

Full Windows support is planned (not yet in the repo as of today).

Speaker (Justin Cormack) mentions VMware a couple of times as an example of an implementation that can replace containerd with a different runtime (i.e. VIC Engine).

Happy to report that my Containerd blog post was fairly accurate (albeit it did not go into much detail): http://www.it20.info/2017/03/docker-containerd-explained-in-plain-words/.

Kubernetes Day 2 (Cluster Operations)

Presented by Brandon Philips (CoreOS CTO). Brandon’s sessions are always very dense and useful. Never a dull moment with him on stage.

This session covered some best practices to manage Kubernetes clusters. What stood out for me in this preso was the mechanism Tectonic uses to manage the deployment: fundamentally, CoreOS deploys the Kubernetes components as containers and lets Kubernetes manage those containers (basically letting Kubernetes manage itself). This way Tectonic can take advantage of K8s’ own features, from keeping the control plane up and running all the way to rolling upgrades of the API server/scheduler.

Helm

This session was presented by two of the engineers responsible for the project. The session was pretty full and roughly 80% of attendees claimed to be using K8s in production (wow). Helm is a package manager for Kubernetes. Helm Charts are logical units of K8s resources + variables (note to self: research the differences between “OpenShift applications” and “Helm charts” <-- they appear to be the same thing [or at least are being defined similarly]).
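For my own reference, Helm’s templating is built on Go’s text/template: a chart bundles Kubernetes resource templates plus a set of values that get interpolated at install time. Below is a stripped-down, standalone illustration of that idea (this is not actual Helm code; the resource, names and values are made up):

```go
package main

import (
	"os"
	"text/template"
)

// Values mimics the variables a chart exposes (Helm reads these from values.yaml).
type Values struct {
	Name     string
	Image    string
	Replicas int
}

// deploymentTmpl mimics a chart template: a K8s resource with placeholders.
const deploymentTmpl = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Name }}
spec:
  replicas: {{ .Replicas }}
  selector:
    matchLabels:
      app: {{ .Name }}
  template:
    metadata:
      labels:
        app: {{ .Name }}
    spec:
      containers:
        - name: {{ .Name }}
          image: {{ .Image }}
`

func main() {
	// Rendering the template with concrete values yields a plain K8s manifest,
	// which is conceptually what installing a chart produces.
	t := template.Must(template.New("deployment").Parse(deploymentTmpl))
	v := Values{Name: "my-app", Image: "nginx:1.13", Replicas: 3}
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}
```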

There is a mention of kubeapps.com, which is a front-end UI to monocular (details here: https://engineering.bitnami.com/2017/02/22/what-the-helm-is-monocular.html).

The aim of the session was to seed the audience with new use cases for Helm that aspire to go beyond the mere packaging and interactive setup of a multi-container app.

Hyper

The attendance was low. The event population being skewed towards developers tends to greatly penalize sessions about solutions aimed primarily at solving Ops problems.

Their value prop (at the very high level) is similar to vSphere Integrated Containers, or Intel Clear Containers for that matter: run Docker images as virtual machines (as opposed to containers). Hyper prides itself on being hypervisor agnostic.

They claim a sub-second start time (similar to Clear Containers). Note: while the VIC value prop isn’t purely about being able to start containerVMs fast, tuning for that characteristic will help (half-joke: I would be more comfortable running VIC in production than showing a live demo of it at a meetup).

The most notable thing about Hyper is that it’s CRI compliant and it naturally fits/integrates into the K8s model as a pluggable container engine.