There are only a few policy-capable networking backends available for Kubernetes today: Romana, [Calico](http://projectcalico.org/), and [Canal](https://github.com/tigera/canal), with [Weave](http://www.weave.works/) indicating support in the near future. Red Hat’s OpenShift includes network policy features as well.
We chose Romana as the backend for these tests because it configures pods to use natively routable IP addresses in a full L3 configuration. Network policies can therefore be applied directly by the host in the Linux kernel using iptables rules. This results in a high-performance, easy-to-manage network.
**Testing performance impact of network policies**
After network policies have been applied, every network packet needs to be checked against those policies to verify that the traffic is permitted. But what is the performance penalty of performing this check on every packet? Can we use all the great policy features without impacting application performance? We decided to find out by running some tests.
Before we dive deeper into these tests, it is worth mentioning that ‘performance’ is a tricky thing to measure, network performance especially so.
_Throughput_ (i.e. data transfer speed measured in Gbps) and _latency_ (time to complete a request) are common measures of network performance. The performance impact of running an overlay network on throughput and latency has been examined previously [here](https://smana.kubespray.io/index.php/posts/kubernetes-net-bench) and [here](http://machinezone.github.io/research/networking-solutions-for-kubernetes/). What we learned from these tests is that Kubernetes networks are generally pretty fast, and servers have no trouble saturating a 1G link, with or without an overlay. It's only when you have 10G networks that you need to start thinking about the overhead of encapsulation.
This is because during a typical network performance benchmark, there’s no application logic for the host CPU to perform, leaving it available for whatever network processing is required. **_For this reason we ran our tests in an operating range that did not saturate the link, or the CPU. This has the effect of isolating the impact of processing network policy rules on the host_**. For these tests we decided to measure latency as the average time required to complete an HTTP request, across a range of response sizes.
**Test setup**
- Hardware: Two servers with Intel Core i5-5250U CPUs (2 cores, 2 threads per core) running at 1.60GHz, 16GB RAM and 512GB SSD. NIC: Intel Ethernet Connection I218-V (rev 03)
- Ubuntu 14.04.5
- Kubernetes 1.3 for data collection (verified samples on v1.4.0-beta.5)
- Romana v0.9.3.1
- Client and server load test [software](https://github.com/paninetworks/testing-tools)
For the tests we had a client pod send 2,000 HTTP requests to a server pod. HTTP requests were sent by the client pod at a rate that ensured that neither the server nor the network was ever saturated. We also made sure each request started a new TCP session by disabling persistent connections (i.e. HTTP [keep-alive](https://en.wikipedia.org/wiki/HTTP_persistent_connection)). We ran each test with different response sizes and measured the average request duration (i.e. how long it takes to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.
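The actual load-test software is linked above; purely as an illustrative sketch of the measurement itself, the Python snippet below sends a fixed number of requests over fresh TCP connections and averages their duration. The server URL, the `size` query parameter, the request count, and the response sizes are all hypothetical, not the real test configuration.

```python
import time
import requests

SERVER_URL = "http://10.0.0.2:8080"        # hypothetical server pod address
NUM_REQUESTS = 2000                        # requests per test run
RESPONSE_SIZES = [256, 1024, 8192, 65536]  # example response sizes in bytes

def run_test(size):
    """Send NUM_REQUESTS requests for a `size`-byte response and
    return the average request duration in milliseconds."""
    durations = []
    for _ in range(NUM_REQUESTS):
        start = time.time()
        # 'Connection: close' disables HTTP keep-alive, so every request
        # opens a new TCP session, as in the tests described above.
        resp = requests.get(SERVER_URL,
                            params={"size": size},
                            headers={"Connection": "close"})
        resp.raise_for_status()
        durations.append(time.time() - start)
    return 1000 * sum(durations) / len(durations)

if __name__ == "__main__":
    for size in RESPONSE_SIZES:
        print(f"{size} bytes: {run_test(size):.2f} ms average")
```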
Romana detects Kubernetes network policies when they’re created, translates them to Romana’s own policy format, and then applies them on all hosts. Currently, Kubernetes network policies only apply to ingress traffic. This means that outgoing traffic is not affected.
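Network policies are ordinarily written as YAML manifests; as a sketch of what an ingress-only "allow TCP on a given port" policy looks like, the snippet below creates one with the kubernetes Python client. It uses today's networking.k8s.io/v1 API (which postdates the Kubernetes 1.3/1.4 releases used in these tests), and the namespace, labels, and port are made up for illustration.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

# Hypothetical ingress-only policy: allow TCP traffic on port 8080
# to pods labeled app=test-server. Egress is left unrestricted,
# matching the ingress-only behavior described above.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-tcp-8080"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(
            match_labels={"app": "test-server"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)])],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy)
```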
First, we conducted the test without any policies to establish a baseline. We then ran the test again with an increasing number of policies applied to the test's network segment. The policies were of the common “allow traffic for a given protocol and port” format. To ensure packets had to traverse all the policies, we created a number of policies that did not match the packet, followed by a final policy that would accept it.
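A rough sketch of how such a rule set could be scripted is shown below: a batch of "allow" policies on ports the test traffic never uses, followed by one policy on the port the client actually connects to. The names, labels, namespace, ports, and policy count are hypothetical, and the manifests use the current networking.k8s.io/v1 schema rather than the API available at the time of these tests.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.NetworkingV1Api()

TEST_PORT = 8080          # port the client pod actually connects to
NUM_NON_MATCHING = 50     # illustrative count of rules that never match

def allow_tcp_policy(name, port):
    """Plain-dict manifest for an ingress rule allowing TCP on `port`
    to pods labeled app=test-server (labels/namespace are made up)."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name},
        "spec": {
            "podSelector": {"matchLabels": {"app": "test-server"}},
            "policyTypes": ["Ingress"],
            "ingress": [{"ports": [{"protocol": "TCP", "port": port}]}],
        },
    }

# Rules on ports the test traffic never uses, so every packet has to be
# checked against all of them...
for i in range(NUM_NON_MATCHING):
    api.create_namespaced_network_policy(
        namespace="default", body=allow_tcp_policy(f"no-match-{i}", 20000 + i))

# ...and finally one rule that accepts the test traffic.
api.create_namespaced_network_policy(
    namespace="default", body=allow_tcp_policy("allow-test-traffic", TEST_PORT))
```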