---
title: Using eBPF in Kubernetes
date: 2017-12-07
slug: using-ebpf-in-kubernetes
url: /blog/2017/12/Using-Ebpf-In-Kubernetes
---
## Introduction
Kubernetes provides a high-level API and a set of components that hide almost all of the intricate and—to some of us—interesting details of what happens at the systems level. Application developers are not required to have knowledge of the machine's iptables, cgroups, namespaces, seccomp, or, nowadays, even the [container runtime](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes) that their application runs on top of. But underneath, Kubernetes and the technologies upon which it relies (for example, the container runtime) heavily leverage core Linux functionalities.
This article focuses on a core Linux functionality increasingly used in networking, security and auditing, and tracing and monitoring tools. This functionality is called [extended Berkeley Packet Filter](http://man7.org/linux/man-pages/man2/bpf.2.html) (eBPF).
**Note:** _In this article we use both acronyms: eBPF and BPF. The former is used for the extended BPF functionality, and the latter for "classic" BPF functionality._
## What is BPF?
BPF is a mini-VM residing in the Linux kernel that runs BPF programs. Before running, BPF programs are loaded with the [bpf()](http://man7.org/linux/man-pages/man2/bpf.2.html) syscall and validated for safety by the in-kernel verifier, which checks for loops, program size, and so on. BPF programs are attached to kernel objects and executed when events happen on those objects—for example, when a network interface emits a packet.
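To make the loading step concrete, here is a minimal sketch in C, assuming a Linux system with the kernel uapi headers available. It hand-assembles a trivial two-instruction program and loads it through the raw `bpf()` syscall; at load time the kernel verifier validates the program before handing back a file descriptor.

```c
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    // The smallest possible program: r0 = 0; exit ("return 0").
    struct bpf_insn prog[] = {
        { BPF_ALU64 | BPF_MOV | BPF_K, BPF_REG_0, 0, 0, 0 },
        { BPF_JMP | BPF_EXIT, 0, 0, 0, 0 },
    };

    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
    attr.insns     = (__u64)(unsigned long)prog;
    attr.insn_cnt  = sizeof(prog) / sizeof(prog[0]);
    attr.license   = (__u64)(unsigned long)"GPL";

    // glibc provides no bpf() wrapper, so invoke the syscall directly.
    // The kernel verifies the program here, before returning an fd.
    int fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    if (fd < 0) {
        perror("BPF_PROG_LOAD");  // typically requires root privileges
        return 1;
    }
    printf("program loaded and verified, fd=%d\n", fd);
    return 0;
}
```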
## BPF Superpowers
BPF programs are event-driven by definition, an incredibly powerful concept: the kernel executes their code whenever an event occurs on the object they are attached to. [Netflix's Brendan Gregg](http://www.brendangregg.com/bio.html) refers to BPF as a [Linux superpower](http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html).
## The 'e' in eBPF
Traditionally, BPF could only be attached to sockets for socket filtering. BPF’s first use case was in `tcpdump`. When you run `tcpdump`, the filter expression is compiled into a BPF program and attached to a raw `AF_PACKET` socket so that only the matching packets are printed out.
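The following sketch in C illustrates that mechanism (it is not `tcpdump`'s actual code): it attaches a precompiled classic BPF filter, the bytecode that `tcpdump -dd arp` emits, to a raw `AF_PACKET` socket with `SO_ATTACH_FILTER`, after which the socket only delivers ARP frames.

```c
#include <arpa/inet.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <sys/socket.h>

// Classic BPF bytecode for the filter "arp", as printed by `tcpdump -dd arp`:
// load the ethertype half-word, accept the frame if it equals 0x0806 (ARP).
struct sock_filter code[] = {
    { 0x28, 0, 0, 0x0000000c },  // ldh [12]
    { 0x15, 0, 1, 0x00000806 },  // jeq #0x806, accept, drop
    { 0x06, 0, 0, 0x00040000 },  // ret #262144 (accept)
    { 0x06, 0, 0, 0x00000000 },  // ret #0      (drop)
};

int main(void)
{
    struct sock_fprog filter = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    // The same kind of raw socket tcpdump opens; needs CAP_NET_RAW.
    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER,
                   &filter, sizeof(filter)) < 0) {
        perror("setsockopt");
        return 1;
    }
    // From here on, reads from the socket only return ARP frames.
    return 0;
}
```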
But over the years, eBPF added the ability to attach to [other kernel objects](https://github.com/torvalds/linux/blob/v4.14/include/uapi/linux/bpf.h#L117-L133). In addition to socket filtering, some supported attach points are:
- Kprobes (and userspace equivalents uprobes)
- Tracepoints
- Network schedulers or qdiscs for classification or action (tc)
- XDP (eXpress Data Path)
These attach points, together with other, newer features such as in-kernel helper functions and shared data structures (maps) that can be used to communicate with user space, extend BPF’s capabilities.
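To make those pieces concrete, here is a minimal sketch of an XDP program that counts every received packet in a shared map, which user space could then read through the same `bpf()` syscall. It uses present-day libbpf conventions, which postdate this post, and the map and function names are illustrative, not from any particular project.

```c
// Kernel-side eBPF program; compile with: clang -O2 -target bpf -c count.c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

// A shared data structure (map) that user space can read as well.
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0;
    // In-kernel helper function: look up our slot in the shared map.
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
    if (count)
        __sync_fetch_and_add(count, 1);
    return XDP_PASS;  // hand the packet on to the normal network stack
}

char _license[] SEC("license") = "GPL";
```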
## Existing Use Cases of eBPF with Kubernetes
Several open-source Kubernetes tools already use eBPF, and many of their use cases warrant a closer look, especially in areas such as networking, monitoring, and security.
## Dynamic Network Control and Visibility with Cilium
[Cilium](https://github.com/cilium/cilium) is a networking project that makes heavy use of eBPF superpowers to route and filter network traffic for container-based systems. By using eBPF, Cilium can dynamically generate and apply rules—even at the device level with XDP—without making changes to the Linux kernel itself.
The Cilium Agent runs on each host. Instead of managing iptables rules, it translates network policy definitions into BPF programs that are loaded into the kernel and attached to a container's virtual ethernet device. These programs are executed—rules applied—on each packet that is sent or received.
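As a generic illustration of that mechanism (not Cilium's actual implementation), the sketch below attaches an already-loaded BPF program to the tc ingress hook of a container's veth device. It uses libbpf's tc API, which is newer than this post, and the function name and interface name are hypothetical.

```c
#include <net/if.h>
#include <stdio.h>
#include <bpf/libbpf.h>

// Hypothetical helper: attach prog_fd to the ingress side of a veth.
int attach_to_veth(const char *ifname, int prog_fd)
{
    int ifindex = if_nametoindex(ifname);  // e.g. the container's "veth0"
    if (!ifindex)
        return -1;

    // Create a clsact qdisc on the device and hang the program off its
    // ingress hook; every packet the device receives now runs the program.
    LIBBPF_OPTS(bpf_tc_hook, hook,
                .ifindex = ifindex,
                .attach_point = BPF_TC_INGRESS);
    LIBBPF_OPTS(bpf_tc_opts, opts, .prog_fd = prog_fd);

    if (bpf_tc_hook_create(&hook) < 0)
        fprintf(stderr, "hook may already exist; continuing\n");
    return bpf_tc_attach(&hook, &opts);
}
```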
This diagram shows how the Cilium project works:
