For the tests we had a client pod send 2,000 HTTP requests to a server pod. The client pod sent requests at a rate that ensured neither the server nor the network was ever saturated. We also made sure each request started a new TCP session by disabling persistent connections (i.e. HTTP [keep-alive](https://en.wikipedia.org/wiki/HTTP_persistent_connection)). We ran each test with different response sizes and measured the average request duration (how long it takes to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.
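
As a rough illustration, a client like this can be expressed as a one-off Kubernetes Job that loops over curl calls. The sketch below is not the actual harness used for these tests; the image, the `test-server` Service name, the port, and the pacing are all placeholders:

```yaml
# Illustrative client Job: each curl invocation opens a fresh TCP connection,
# and the explicit "Connection: close" header disables HTTP keep-alive.
apiVersion: batch/v1
kind: Job
metadata:
  name: policy-test-client
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: client
        image: curlimages/curl    # placeholder image with curl available
        command: ["/bin/sh", "-c"]
        args:
        - |
          for i in $(seq 1 2000); do
            curl -s -o /dev/null -H 'Connection: close' http://test-server:8080/
            sleep 0.01   # pace requests so neither server nor network saturates
          done
```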



Romana detects Kubernetes network policies when they’re created, translates them to Romana’s own policy format, and then applies them on all hosts. Currently, Kubernetes network policies only apply to ingress traffic. This means that outgoing traffic is not affected.

First, we conducted the test without any policies to establish a baseline. We then ran the test again, applying increasing numbers of policies to the test's network segment. The policies were of the common “allow traffic for a given protocol and port” format. To ensure packets had to traverse all the policies, we created a number of policies that did not match the packet, followed by a final policy that would result in acceptance of the packet.
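
For reference, an “allow traffic for a given protocol and port” policy of this kind looks roughly like the sketch below, written here in today's networking.k8s.io/v1 schema (the API was still in beta when these tests were run); the names, labels, and port number are illustrative rather than the exact policies used in the test:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-http
spec:
  # Applies to the server pods on the test's network segment
  podSelector:
    matchLabels:
      app: test-server
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: test-client
    ports:
    - protocol: TCP
      port: 8080
```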



The table below shows the results, measured in milliseconds, for different response sizes and numbers of policies:



Response Size

|Policies |0.5k |1k |10k |100k |1M |
|---|---|---|---|---|---|
|0 |0.732 |0.738 |1.077 |2.532 |10.487 |
|10 |0.744 |0.742 |1.084 |2.570 |10.556 |
|50 |0.745 |0.755 |1.086 |2.580 |10.566 |
|100 |0.762 |0.770 |1.104 |2.640 |10.597 |
|200 |0.783 |0.783 |1.147 |2.652 |10.677 |



What we see here is that, as the number of policies increases, processing them introduces only a very small delay, never more than 0.2ms even with 200 policies applied. For all practical purposes, no meaningful delay is introduced when network policy is applied. Also worth noting is that doubling the response size from 0.5k to 1k had virtually no effect. This is because for very small responses, the fixed overhead of creating a new connection dominates the overall response time (i.e. the same number of packets are transferred).



 ![](https://lh3.googleusercontent.com/2M6D3zIPSiBE1LUZ3I5oVlZtfVVGP-aK6P3Qsb_siG0Jy16zeE1pNIZGLxeRh4SLCNUKY53A0Qbcm-dwwqz6ResSLjdb1oosXywOK5oK_uU6inVWQTPtztj9cv_6JK-EESVeeoq9)
