For users who are willing and able to run the requisite infrastructure to satisfy Docker's global drivers and to configure Docker themselves, Docker networking should "just work." Kubernetes will not get in the way of such a setup, and no matter what direction the project goes, that option should remain available. For default installations, though, the practical conclusion is that this is an undue burden on users, and we therefore cannot use Docker's global drivers (including "overlay"), which eliminates a lot of the value of using Docker's plugins at all.  

Docker's networking model makes a lot of assumptions that aren't valid for Kubernetes. In Docker versions 1.8 and 1.9, it includes a fundamentally flawed implementation of "discovery" that results in corrupted `/etc/hosts` files in containers ([docker #17190](https://github.com/docker/docker/issues/17190)) — and this cannot be easily turned off. In version 1.10 Docker is planning to [bundle a new DNS server](https://github.com/docker/docker/issues/17195), and it's unclear whether it will be possible to turn that off either. Container-level naming is not the right abstraction for Kubernetes — we already have our own concepts of service naming, discovery, and binding, and we already have our own DNS schema and server (based on the well-established [SkyDNS](https://github.com/skynetservices/skydns)). The bundled solutions are not sufficient for our needs, yet they cannot be disabled.  
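To make the contrast concrete, here is a minimal sketch of what service discovery looks like on the Kubernetes side: a client resolves a DNS name that follows the cluster's schema, rather than relying on entries injected into `/etc/hosts`. The service name `my-service`, the `default` namespace, and the `cluster.local` suffix are illustrative assumptions, not anything mandated by this post.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical service "my-service" in the "default" namespace;
	// "cluster.local" is the conventional cluster DNS suffix, but it
	// is configurable per cluster.
	name := "my-service.default.svc.cluster.local"

	// The cluster DNS server (SkyDNS-based at the time of this post)
	// answers with the Service's stable cluster IP -- no /etc/hosts
	// rewriting inside the container is involved.
	addrs, err := net.LookupHost(name)
	if err != nil {
		fmt.Println("lookup failed (expected outside a cluster):", err)
		return
	}
	fmt.Println(name, "->", addrs)
}
```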

Orthogonal to the local/global split, Docker has both in-process and out-of-process ("remote") plugins. We investigated whether we could bypass libnetwork (and thereby skip the issues above) and drive Docker remote plugins directly. Unfortunately, this would mean that we could not use any of the Docker in-process plugins, "bridge" and "overlay" in particular, which again eliminates much of the utility of libnetwork.  

On the other hand, CNI is more philosophically aligned with Kubernetes. It's far simpler than CNM, doesn't require daemons, and is at least plausibly cross-platform (CoreOS’s [rkt](https://coreos.com/rkt/docs/) container runtime supports it). Being cross-platform means that there is a chance to enable network configurations which will work the same across runtimes (e.g. Docker, Rocket, Hyper). It follows the UNIX philosophy of doing one thing well.  
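For a sense of how small the CNI surface area is, here is a hedged sketch of the contract: the runtime executes a plugin binary, passes the network configuration as JSON on stdin, and selects the operation and target namespace via `CNI_*` environment variables. The plugin path, network name, bridge name, and subnet below are illustrative assumptions.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Network configuration is plain JSON handed to the plugin on stdin.
	// "mynet", "cni0", and the subnet are illustrative values only.
	netConf := []byte(`{
	    "name": "mynet",
	    "type": "bridge",
	    "bridge": "cni0",
	    "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
	}`)

	// A CNI plugin is just an executable; the runtime chooses the verb
	// (ADD/DEL) and the container's network namespace via environment
	// variables. /opt/cni/bin is the conventional install location.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Stdin = bytes.NewReader(netConf)
	cmd.Env = []string{
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example-container",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	}

	// On success the plugin prints a JSON result (assigned IP, routes)
	// to stdout; running this for real requires root and an existing
	// network namespace at CNI_NETNS.
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("plugin invocation failed:", err)
	}
	fmt.Println(string(out))
}
```

No daemon and no external database are involved in that exchange, which is what makes the interface plausible to implement consistently across container runtimes.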

Additionally, it's trivial to wrap a CNI plugin and produce a more customized CNI plugin — it can be done with a simple shell script. CNM is much more complex in this regard. This makes CNI an attractive option for rapid development and iteration. Early prototypes have proven that it's possible to eject almost 100% of the network logic currently hard-coded in the kubelet into a plugin.  
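As noted above, a simple shell script is enough for such a wrapper; purely as an illustration of the same idea, here is a sketch of a "wrapper plugin" in Go that patches the incoming configuration (a hypothetical MTU override) and delegates the real work to the stock `bridge` plugin, passing the runtime's `CNI_*` environment through unchanged.

```go
// Sketch of a wrapper CNI plugin: read the JSON config from stdin,
// apply a local customization, and delegate to the stock bridge plugin.
package main

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"os"
	"os/exec"
)

func main() {
	raw, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}

	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		os.Exit(1)
	}

	// Illustrative customization: force the delegate type and pin an MTU.
	// These field names are assumptions about the bridge plugin's config,
	// not something specified in the post.
	conf["type"] = "bridge"
	conf["mtu"] = 1460

	patched, err := json.Marshal(conf)
	if err != nil {
		os.Exit(1)
	}

	// Delegate to the real plugin, keeping CNI_COMMAND, CNI_NETNS, etc.
	// exactly as the runtime set them.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Stdin = bytes.NewReader(patched)
	cmd.Env = os.Environ()
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```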
