In late 2015, we were frustrated enough with our legacy infrastructure that we finally bit the bullet. We evaluated Docker’s tools, ECS, Kubernetes, and Mesosphere. It was quickly obvious that Kubernetes was in a more stable and user-friendly state than its competitors for our infrastructure. As a company, we could solidify our infrastructure on Docker simply by setting the goal of having all of our infrastructure on Kubernetes.

Engineers were skeptical at first. However, once they saw applications scale effortlessly into hundreds of instances per application, they were hooked. Now, not only were the pain points driving us forward into Docker and, by extension, Kubernetes, but genuine excitement for the technology was pulling us in. This has allowed us to make an incredibly difficult migration fairly quickly. We now run Kubernetes in multiple regions on about 65 large VMs, increasing to over 100 in the next couple of months. Our Kubernetes cluster currently processes 800 million requests per day, with the plan to process over 2 billion requests per day in the coming months.

#### Kubernetes As A Tool To Manage Containers

Our earliest use of Docker was promising for development, but much less so for production. The biggest friction point was the inability to manage Docker components at scale. Not knowing which containers were running where, which version of a deployment was running, what state an app was in, or how to manage subnets and VPCs plagued any chance of going to production. The tooling required would have been substantial.



When we looked at Kubernetes, several key features were immediately attractive:

* It is easy to install on AWS (where all our apps were running)
* There is a direct path from a Dockerfile to a replication controller through a YAML/JSON file (see the sketch after this list)
* Pods are able to scale in number easily
* We can easily scale the number of VMs running on AWS in a Kubernetes cluster
* Rolling deployments and rollback are built into the tooling
* Each pod gets monitored through health checks
* Service endpoints are managed by the tool
* There is an active and vibrant community
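
As a rough sketch of what that Dockerfile-to-replication-controller path looks like (the names, image, port, and health check path below are illustrative placeholders, not our actual configuration), a manifest can be as small as:

```yaml
# Hypothetical replication controller for an app built from a Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: sharethis-app
spec:
  replicas: 3                          # scale pods by changing this number
  selector:
    app: sharethis-app
  template:
    metadata:
      labels:
        app: sharethis-app
    spec:
      containers:
      - name: sharethis-app
        image: example/sharethis-app:1.0   # built from the app's Dockerfile
        ports:
        - containerPort: 8080
        livenessProbe:                     # each pod is monitored via a health check
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
```

From there, scaling pods is a single `kubectl scale` call and rolling deployments are handled by `kubectl rolling-update`, which is what makes the list above so compelling compared to hand-rolled tooling.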



Unfortunately, one of the biggest pain points was that the tooling didn’t solve our existing legacy infrastructure problems; it just provided an infrastructure to move onto. A variety of network quirks still prevented us from moving our applications directly onto a new VPC. In addition, reworking so many applications required developers to jump onto problems that had classically been solved by sysadmins and operations teams.

#### Kubernetes As A Tool For Onboarding Developers On Infrastructure

When we decided to make the switch from what was essentially a Chef-run setup to Kubernetes, I do not think we understood all of the pain points that we would hit. We ran our servers in a variety of ways and network configurations, all considerably different from the clean setup that you find on a fresh Kubernetes VPC.

In production we ran in both AWS VPCs and AWS classic across multiple regions. This means that we managed several subnets with different access controls across different applications. Our most recent applications were also very secure, having no public endpoints. This meant that we had a combination of VPC peering, network address translation (NAT), and proxies running in varied configurations.

In the Kubernetes world, there’s only the VPC. All the pods can theoretically talk to each other, and service endpoints are explicitly defined. It’s easy for the developer to gloss over some of the details, and it removes the need for operations (mostly).
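
That explicitness is most visible in how services are declared. A minimal sketch, again with hypothetical names and ports, of a service fronting the replication controller sketched earlier:

```yaml
# Hypothetical service giving the pods above a stable, explicitly defined endpoint.
apiVersion: v1
kind: Service
metadata:
  name: sharethis-app
spec:
  selector:
    app: sharethis-app    # routes to any pod carrying this label
  ports:
  - port: 80              # stable port other apps in the cluster talk to
    targetPort: 8080      # container port traffic is forwarded to
```

Compared to our old mix of subnets and access controls, that label selector is essentially the entire wiring diagram a developer has to think about inside the cluster.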

We made the decision to convert all of our infrastructure / DevOps developers into application developers (really!). We had already started hiring them on the basis of their development skills rather than their operational skills anyway, so perhaps that is not as wild as it sounds.
