Deployment adds a layer of complexity for us. Docker images are a great way to package software, but historically, we’ve deployed from source, not packages. Our engineering team expects that making changes to source is sufficient to get their work released; devs don’t expect to handle additional packaging steps. Rather than rebuild our entire deployment and orchestration framework for the sake of containerization, we use a continuous integration pipeline for our containerized services. We automatically build a new Docker image for every commit to a project, and then we tag it with the Mercurial (Hg) changeset number of that commit. On the Skycap side, a deployment from a specific Hg revision will then pull the Docker images that are tagged with that same revision number. 
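As a rough illustration of that CI step, a build script along these lines could produce the per-commit tag; the registry, image name, and script structure here are assumptions for illustration, not Skytap's actual tooling:

```python
#!/usr/bin/env python3
"""Hypothetical CI step: build and publish a Docker image tagged with the
Mercurial changeset of the commit being built."""
import subprocess

REGISTRY = "registry.example.com/skytap"  # placeholder registry/namespace
SERVICE = "my-service"                    # placeholder service name

def run(*args):
    return subprocess.check_output(args).decode().strip()

# `hg id -i` prints the short changeset hash of the working directory.
changeset = run("hg", "id", "-i")
image = "{}/{}:{}".format(REGISTRY, SERVICE, changeset)

subprocess.check_call(["docker", "build", "-t", image, "."])
subprocess.check_call(["docker", "push", image])
print("published", image)
```

A deployment pinned to a given Hg revision then only has to pull the image whose tag matches that changeset hash.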



We reuse container images across multiple environments. This requires environment-specific configuration to be injected into each container instance. Until recently, we used similar source-based principles to inject these configuration values: each container would copy relevant configuration files from Hg by cURL-ing raw files from the repo at run time. Network availability and variability are a challenge best avoided, though, so we now load the configuration into Kubernetes’ [**ConfigMap**](https://kubernetes.io/blog/2016/04/configuration-management-with-containers) feature. This not only simplifies our Docker images, but it also makes pod startup faster and more predictable (because containers don’t have to download files from Hg).   
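To sketch the difference at startup, assume the ConfigMap is mounted as a volume at `/etc/config` (the path and file name below are illustrative):

```python
"""Hypothetical service startup: read configuration from a mounted
ConfigMap volume instead of downloading it from the Hg repo."""
import json
from pathlib import Path

CONFIG_PATH = Path("/etc/config/settings.json")  # illustrative mount point

def load_config():
    # Previously: cURL the raw file from the Hg repo at container start,
    # with all the network availability risk that implies.
    # Now: the kubelet projects the ConfigMap into the filesystem before
    # the container starts, so this is a local read with no network call.
    return json.loads(CONFIG_PATH.read_text())

if __name__ == "__main__":
    config = load_config()
    print("loaded config keys:", sorted(config))
```

The ConfigMap itself can be created from the same files that used to live in Hg, for example with `kubectl create configmap my-service-config --from-file=config/`.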



**Inter-Service Communication**



Our services communicate using two primary methods. The first, message brokering, is typical for process-to-process communication within the Skytap platform. The second, direct point-to-point TCP connections, is typical for services that communicate with the outside world (such as web services). We’ll discuss the TCP method in the next section, as a component of infrastructure integration.



Managing direct connections between pods in a way that services can understand is complicated. Additionally, our containerized services need to communicate with classic VM-based services. To mitigate this complexity, we primarily use our existing message queueing system. This helped us avoid writing a TCP-based service discovery and load balancing system for handling traffic between pods and non-Kubernetes services. 
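The post doesn’t name the broker, but to make the idea concrete, here is a minimal producer-side sketch assuming a RabbitMQ-style broker reached through the `pika` client; the host, queue, and message contents are hypothetical:

```python
"""Hypothetical producer: publish work to a named queue rather than
connecting to the consuming service directly. The sender only needs the
broker's address, not the addresses of individual pods or VMs."""
import json
import pika  # RabbitMQ client; the broker choice is an assumption

connection = pika.BlockingConnection(pika.ConnectionParameters("mq.example.com"))
channel = connection.channel()
channel.queue_declare(queue="vm-provisioning", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="vm-provisioning",
    body=json.dumps({"action": "create_vm", "template": "ubuntu-16.04"}),
    properties=pika.BasicProperties(delivery_mode=2),  # survive broker restarts
)
connection.close()
```

Whether the consumer is a pod or a classic VM-based service is invisible to this code, which is what makes the migration path below workable.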



This reduces our configuration load: services only need to know how to talk to the message queues, rather than to every other service they need to interact with. We also gain flexibility for things like managing the run-state of pods; messages buffer in the queue while nodes are restarting, and we avoid the overhead of re-configuring TCP endpoints each time a pod is added to or removed from the cluster. Furthermore, the MQ model allows us to manage load balancing with a more accurate ‘pull’-based approach, in which recipients determine when they are ready to process a new message, instead of using heuristics like ‘least connections’ that simply count the number of open sockets to estimate load.
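A consumer-side sketch of that pull model, again assuming a RabbitMQ-style broker via `pika` with hypothetical names: setting `prefetch_count=1` tells the broker not to deliver another message until the worker has acknowledged the current one, so each recipient signals when it is actually ready for more work.

```python
"""Hypothetical consumer illustrating pull-based load balancing: the broker
holds messages until this worker acknowledges the one in flight."""
import pika  # RabbitMQ-style broker assumed for illustration

def process(body):
    print("processing", body)  # placeholder for the service's real work

def handle(channel, method, properties, body):
    process(body)
    # Acknowledging is the "ready for the next message" signal.
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("mq.example.com"))
channel = connection.channel()
channel.queue_declare(queue="vm-provisioning", durable=True)
channel.basic_qos(prefetch_count=1)  # at most one unacknowledged message per worker
channel.basic_consume(queue="vm-provisioning", on_message_callback=handle)
channel.start_consuming()
```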



Migrating MQ-enabled services to Kubernetes is relatively straightforward compared to migrating services that use complex TCP-based direct or load-balanced connections. Additionally, the isolation provided by the message broker means that the switchover from a classic service to a container-based service is essentially transparent to any other MQ-enabled service.



**Infrastructure Integration**



As an infrastructure provider, we face some unique challenges in configuring Kubernetes for use with our platform. [AWS](https://aws.amazon.com/) & [GCP](https://cloud.google.com/) provide out-of-the-box solutions that simplify Kubernetes provisioning, but they make assumptions about the underlying infrastructure that do not match our reality. Some organizations run Kubernetes in purpose-built data centers; going that route would have required us to abandon our existing load balancing infrastructure, our Puppet-based provisioning system, and the expertise we’d built up around these tools. We weren’t interested in abandoning the tools or our vested experience, so we needed a way to manage Kubernetes that could integrate with our world instead of rebuilding it.
