![kubernetes_f5_messaging.png](https://lh4.googleusercontent.com/2wfBbW3zxYLPg8Xgl6GIAE9Xt9afjZfTAyfR0H6EzfdHAJyDjg7N1RCpZLoLG9N9wVAnsczXUBicJ4QUydCOJ1uZ6A1SP44ki-XAnpDYTiL5cLaXFoi2YtKjKYxC5hFoCoOs7nWM)



With dynamic reconfiguration of the network, the replication mechanics of Kubernetes make horizontal scaling and (most) failover/recovery very straightforward. We haven't yet reached the reactive scaling milestone, but we've laid the groundwork with the Kubernetes and Calico infrastructure, which makes one avenue for implementing it straightforward:

- Configure upper and lower bounds for service replication
- Build a load analysis and scaling service (easy, right?)
- If load patterns match the configured triggers in the scaling service (for example, request rate or volume above certain bounds), issue: `kubectl scale --replicas=COUNT rc NAME`
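The scaling step above could be sketched as a simple threshold check; the service name, metric source, and thresholds here are illustrative placeholders, not Skytap's actual configuration:

```shell
#!/bin/sh
# Hypothetical reactive-scaling check. In a real deployment the request
# rate would come from the load analysis service, not a hardcoded value.
REQUEST_RATE=1200   # req/s reported for the service (placeholder)
UPPER_BOUND=1000    # configured upper trigger
MAX_REPLICAS=10     # configured upper bound for replication
CURRENT_REPLICAS=4  # current replica count of the replication controller

if [ "$REQUEST_RATE" -gt "$UPPER_BOUND" ] && [ "$CURRENT_REPLICAS" -lt "$MAX_REPLICAS" ]; then
  TARGET=$((CURRENT_REPLICAS + 1))
  # In production this would execute the kubectl command directly;
  # echoed here so the sketch is self-contained.
  echo "kubectl scale --replicas=$TARGET rc my-service"
fi
```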

This would allow us fine-grained control of autoscaling at the platform level, instead of from the applications themselves – but we'll also evaluate [**Horizontal Pod Autoscaling**](/docs/user-guide/horizontal-pod-autoscaling/) in Kubernetes, which may suit our needs without a custom service.
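For comparison, the built-in alternative takes only a short manifest. A minimal Horizontal Pod Autoscaler targeting a replication controller might look like this (the service name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: my-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Note that the stock autoscaler keys off CPU utilization rather than request rate, which is one reason a custom scaling service could still be attractive.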

Title: Kubernetes Autoscaling Strategies: Custom Service vs. Horizontal Pod Autoscaling
Summary
Dynamic network reconfiguration with Kubernetes simplifies horizontal scaling and failover. Skytap plans to implement reactive scaling by configuring replication bounds, building a load analysis service, and using `kubectl scale` to adjust pod replicas based on load patterns. They are also considering Kubernetes' Horizontal Pod Autoscaling as an alternative to a custom service for more automated scaling.