Voyage across the Clouds with Kubernetes

A Talky rocket flying amongst the clouds

In the DevOps world, Kubernetes is kind of a big deal. Since 2014, when development first began, Kubernetes has become the preeminent container orchestration tool for running containerized applications on the web. When I joined &yet in 2016, I was new to the world of DevOps, but since then I’ve had the opportunity to use Kubernetes on four different cloud platforms: Packet, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. That experience has demonstrated the consistent power of Kubernetes across different platforms and the freedom it gives teams to change providers as cloud technology evolves.

One of my major realizations about working in DevOps is that technology is moving FAST. It has been less than six months since the Cloud Native Computing Foundation launched their certified Kubernetes program, and Microsoft, Amazon, and Google have all launched managed Kubernetes solutions in the interim. (If you’re interested in the origin of the “k8s” abbreviation, check out this Medium post by @rothgar.) I’ve deployed the services that power Talky across multiple providers and wanted to share that experience with you! The choice to use k8s as our primary orchestration tool has given our team the ability to maximize the strengths of different cloud providers and choose the best provider for a given endeavor.

Kubernetes is a versatile tool designed by some great folks* and backed by a thriving open-source community. Unfortunately, there is a fairly substantial learning curve for most people picking up k8s for the first time. In this post, I won’t go into the k8s architecture or how to get started with it, since there are many great videos freely available on that topic. If you learn best by doing, check out Kubernetes By Example by the OpenShift team, or for a deeper dive, try Kubernetes The Hard Way by Kelsey Hightower.

Packet

My first experience with Kubernetes was in 2016, on a CoreOS cluster hosted on Packet running version 1.2.2. Our Ops team configured the cluster with a lengthy cloud-config file that included etcd settings, systemd units, SSL files, and YAML manifests for the master services. It worked, but updating anything was a massive pain; so painful, in fact, that it was easier to build an entirely new cluster than to upgrade nodes as versions beyond 1.2 became available. Thanks to the meteoric growth of DevOps technology over the past several years, there are now better solutions for running k8s on Packet. We still think Packet is awesome, especially since they have increased the number and variety of their offerings over the past year.
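
To give a flavor of that setup, here’s a heavily trimmed sketch of the kind of CoreOS cloud-config involved (the discovery token is a placeholder, and the real file also carried the TLS assets and master-component manifests mentioned above):

    #cloud-config
    coreos:
      etcd2:
        # every new cluster needed a freshly generated discovery URL
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        listen-client-urls: http://0.0.0.0:2379
      units:
        - name: etcd2.service
          command: start
        - name: flanneld.service
          command: start

Because everything lived in a single provisioning file, changing almost anything meant re-provisioning machines, which is a big part of why rebuilding was easier than upgrading.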

An &yet yeti hugging a cloud wearing a K8s hat

In early 2017 we made the decision as an Ops team to embrace managed cloud solutions. As a small team we don’t manage any physical servers on-prem, so relying on cloud solutions has freed us up to spend more time optimizing our DevOps processes and less time maintaining servers.

AWS

Since we already used AWS, one of the first tools we tried was kops. True to its unofficial(?) tagline of “kubectl for clusters,” kops lets you create and maintain k8s clusters from the command line. Once your AWS IAM permissions and DNS settings are set up correctly (spoiler alert: not a trivial endeavor), you can spin up a k8s cluster in just a few commands. kops is still under active development, and it now has beta support for Google Compute Engine (GCE) and early-stage support for DigitalOcean, OpenStack, and VMware vSphere. Although kops is an effective way to manage Kubernetes clusters, it still required a fair bit of manual configuration before it was usable. We thought there had to be something better out there.
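
To give a rough idea (and glossing over the IAM and DNS prerequisites mentioned above), a minimal kops workflow looks something like this; the bucket, zone, and cluster names are placeholders:

    # kops stores cluster state in an S3 bucket
    export KOPS_STATE_STORE=s3://example-kops-state-store

    # generate the cluster configuration, then apply it to AWS
    kops create cluster --zones us-east-1a --name k8s.example.com
    kops update cluster k8s.example.com --yes

    # later on, roll configuration changes out to the nodes
    kops rolling-update cluster k8s.example.com --yes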

AWS is a powerful platform and arguably has the most product offerings of any major cloud provider, but it was slightly behind the times when it came to Kubernetes. AWS announced its managed k8s solution, Elastic Container Service for Kubernetes (EKS), just a few weeks after Microsoft Azure and Google Cloud Platform announced theirs. Because the other providers launched managed k8s first, we moved on to try Azure and GCP. Read on to hear of our adventures voyaging through these new lands!

Azure

After a brief trial with kops, we decided to give Azure a chance, as we were intrigued by reports of slightly lower prices compared to its competitors. Sadly, we moved to Azure just two months ahead of their announcement of managed Kubernetes (AKS), so we haven’t yet tried it. Even using Azure Container Service, the predecessor to AKS, the process of building a k8s cluster on Azure was surprisingly pleasant. We had a cluster going quickly and were able to start migrating services from our old cluster right away. Although the setup was fairly smooth, within a few months of running production services on Azure we encountered limitations of the platform that necessitated a weird iptables hack, as well as a separate incident caused by VMs rebooting unexpectedly overnight. These issues, along with the profusion of legacy services that still exist in the Azure stack, prompted us to abandon Azure after only three months. My impression is that Microsoft is continuing to develop their support for Kubernetes, as evidenced by their acquisition of Deis in early 2017 and their support of a neat k8s sandboxing tool called Draft. Azure’s offerings have evolved in the past few months, and some of our negative experiences may have just been due to poor timing.
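
For the curious, here’s roughly what provisioning looked like with the ACS-era CLI (the resource group and cluster names are placeholders, and the exact flags may have shifted since we ran this):

    # create a resource group, then an ACS cluster using the Kubernetes orchestrator
    az group create --name exampleGroup --location westus2
    az acs create --orchestrator-type kubernetes --resource-group exampleGroup \
        --name exampleK8sCluster --generate-ssh-keys

    # fetch credentials so kubectl can talk to the new cluster
    az acs kubernetes get-credentials --resource-group exampleGroup --name exampleK8sCluster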

Google Cloud

The final leg of our voyage across the clouds led our team to adopt Google Cloud Platform. We had already been trialing Google Kubernetes Engine (GKE) for a subset of services and were extremely impressed. Spinning up a cluster takes about 5 minutes (actually tho), and we got things migrated in less than a week. As a new user, I have been delighted by GCP’s clean, intuitive web GUI and CLI. We’ve been running k8s services on GKE for about 6 months now and have been very happy. Kubernetes was originally designed by former Googlers, so I’m glad to see how seamlessly GCP has integrated k8s into their Compute Engine service.
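
For comparison, here’s the gist of what “about 5 minutes” looks like on GKE (the cluster name, zone, and node count are placeholders):

    # create a small cluster, then point kubectl at it
    gcloud container clusters create example-cluster --zone us-central1-a --num-nodes 3
    gcloud container clusters get-credentials example-cluster --zone us-central1-a

    # confirm the nodes are up
    kubectl get nodes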

Final thoughts

My experience with four major cloud providers in the past year has been an educational journey, to say the least. We made decisions about transitioning between cloud providers based on our needs, ease of use, and the state of the technology at a given time. It is clear to anyone following the growth of DevOps technology that it’s not going to slow down any time soon. The best available solutions will likely evolve and change on a monthly, and in some cases even a daily, basis! We’ve seen eight major releases of Kubernetes since I joined the world of DevOps, and it seems like every day there’s a new blog post or announcement about k8s from a major player in this space. While we’ve arrived at a good place for our Ops needs right now, the voyage is far from over.

*former Googlers Joe Beda, Craig McLuckie, and Brendan Burns
