---
title: " High performance network policies in Kubernetes clusters "
date: 2016-09-21
slug: high-performance-network-policies-kubernetes
url: /blog/2016/09/High-Performance-Network-Policies-Kubernetes
author: >
  Juergen Brendel (Pani Networks),
  Pritesh Kothari (Pani Networks),
  Chris Marin (Pani Networks)
---
**Network Policies**
Since the release of Kubernetes 1.3 back in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: You might identify your front-end and back-end pods by a specific “segment” label, for example. Policies control traffic between those segments and even traffic to or from external sources.
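In Kubernetes 1.3, that blocking behavior is requested per namespace through a beta annotation; once isolation is on, all inbound traffic to pods in the namespace is denied unless a policy explicitly allows it. A minimal sketch (the namespace name is illustrative):

```
kind: Namespace
apiVersion: v1
metadata:
  name: myns
  annotations:
    # Beta annotation that switches on DefaultDeny ingress isolation
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
```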
**Segmenting traffic**
What does this mean for the application developer? At last, Kubernetes has gained the necessary capabilities to provide "[defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))". Traffic can be segmented and different parts of your application can be secured independently. For example, you can very easily protect each of your services via specific network policies: All the pods identified by a [Replication Controller](/docs/user-guide/replication-controller/) behind a service are already identified by a specific label. Therefore, you can use this same label to apply a policy to those pods, as the sketch below illustrates.
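For instance, a service fronting the back-end pods might select them by a shared label; in this minimal sketch (the names are illustrative), the `role: backend` label is the same one that the policy example later in this post matches:

```
kind: Service
apiVersion: v1
metadata:
  name: backend-svc
spec:
  selector:
    role: backend    # the same label a NetworkPolicy podSelector can match
  ports:
  - protocol: TCP
    port: 80
```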
Defense in depth has long been recommended as a [best practice](https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment). This kind of isolation between the different parts or layers of an application is easily achieved on AWS and OpenStack by applying security groups to VMs.
However, prior to network policies, this kind of isolation for containers was not possible. VXLAN overlays can provide simple network isolation, but application developers need more fine-grained control over the traffic accessing pods. As you can see in this simple example, Kubernetes network policies can manage traffic based on source and destination, protocol, and port.
```
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: pol1
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
```
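Like any other Kubernetes object, a policy like this can be submitted with `kubectl create -f policy.yaml`. Keep in mind that in this beta API it only has an effect in namespaces where isolation has been switched on; in a non-isolated namespace, all traffic remains allowed.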
**Not all network backends support policies**
Network policies are an exciting feature, which the Kubernetes community has worked on for a long time. However, they require a networking backend that is capable of applying them. By themselves, simple routed networks or the commonly used [flannel](https://github.com/coreos/flannel) network driver, for example, cannot apply network policy.
There are only a few policy-capable networking backends available for Kubernetes today: Romana, [Calico](http://projectcalico.org/), and [Canal](https://github.com/tigera/canal); with [Weave](http://www.weave.works/) indicating support in the near future. Red Hat’s OpenShift includes network policy features as well.
We chose Romana as the backend for these tests because it configures pods to use natively routable IP addresses in a full L3 configuration. Network policies, therefore, can be applied directly by the host in the Linux kernel using iptables rules. This results in a high-performance, easy-to-manage network.
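To give a rough idea of what kernel-level enforcement looks like, the following is a hypothetical sketch in plain iptables syntax. The chains, addresses, and matching that Romana actually generates differ, and the CIDR blocks here are invented for illustration:

```
# Hypothetical sketch only; not the actual rules Romana emits.
# Allow TCP traffic to port 80 from frontend pod addresses to backend pod addresses...
iptables -A FORWARD -s 10.1.0.0/24 -d 10.2.0.0/24 -p tcp --dport 80 -j ACCEPT
# ...and drop all other traffic destined for the isolated backend pods.
iptables -A FORWARD -d 10.2.0.0/24 -j DROP
```

Because the check happens in the kernel on each host, no extra network hops or user-space proxies are required.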
**Testing performance impact of network policies**
After network policies have been applied, network packets need to be checked against those policies to verify that this type of traffic is permissible. But what is the performance penalty for applying a network policy to every packet? Can we use all the great policy features without impacting application performance? We decided to find out by running some tests.