---
title: "Kubespray Ansible Playbooks foster Collaborative Kubernetes Ops"
date: 2017-05-19
slug: kubespray-ansible-collaborative-kubernetes-ops
url: /blog/2017/05/Kubespray-Ansible-Collaborative-Kubernetes-Ops
author: >
  Rob Hirschfeld (RackN)
---
**Why Kubespray?**
Making Kubernetes operationally strong is a widely held priority and I track many deployment efforts around the project. The [incubated Kubespray project](https://github.com/kubernetes-incubator/kubespray) is of particular interest for me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.
We’re excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the [OpenStack Helm charts](https://github.com/att-comdev/openstack-helm) ([demo video](https://www.youtube.com/watch?v=wZ0vMrdx4a4&list=PLXPBeIrpXjfjabMbwYyDULOX3kZmlxEXK&index=2)).
By working with the upstream source instead of creating different install scripts, we get the benefits of a larger community. This requires some extra development effort; however, we believe helping share operational practices makes the whole community stronger. That was also the motivation behind [SIG-Cluster-Ops](https://github.com/kubernetes/community/tree/master/sig-cluster-ops).
**With Kubespray delivering robust installs, we can focus on broader operational concerns.**
For example, we can now drive parallel deployments, so it’s possible to fully exercise the options enabled by Kubespray simultaneously for development and testing.
That helps us build, test, and destroy coordinated Kubernetes installs on CentOS, Red Hat and Ubuntu as part of an automation pipeline. We can also set up a full classroom environment from a single command using [Digital Rebar’s](https://github.com/digitalrebar/digitalrebar) providers, tenants and cluster definition JSON.
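As a rough sketch of that build-test-destroy loop, the script below fans out one pipeline per OS target and waits for all of them. The `pipeline` function here is a hypothetical stand-in for the actual Digital Rebar and Kubespray invocations, not a real command from either project:

```shell
#!/usr/bin/env bash
# Sketch only: run a build-test-destroy cycle per OS target in parallel.
# pipeline() is a hypothetical placeholder; a real run would invoke
# Digital Rebar to provision nodes and Kubespray to install Kubernetes.
pipeline() {
  local os="$1"
  echo "building kubespray cluster on ${os}"
  # ... provision, install, run conformance tests, then tear down ...
  echo "destroyed kubespray cluster on ${os}"
}

# Exercise all three OS variants simultaneously.
for os in centos redhat ubuntu; do
  pipeline "$os" &
done
wait  # block until every parallel run completes
```

Running the cycles in the background with `&` and joining on `wait` is what lets the options Kubespray enables be exercised simultaneously rather than serially.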
**Let’s explore the classroom example:**
First, we define a [student cluster in JSON](https://github.com/digitalrebar/digitalrebar/blob/master/deploy/workloads/cluster/deploy-001.json) like the snippet below:
    {
      "attribs": {
        "k8s-version": "v1.6.0",
        "k8s-kube_network_plugin": "calico",
        "k8s-docker_version": "1.12"
      },
      "name": "cluster01",
      "tenant": "cluster01",
      "public_keys": {
        "cluster01": "ssh-rsa AAAAB..... user@example.com"
      },
      "provider": {
        "name": "google-provider"
      },
      "nodes": [
        {
          "roles": ["etcd", "k8s-addons", "k8s-master"],