By design, the StatefulSet controller does not delete any persistent volume claims (PVCs): the PVCs created for the ZooKeeper ensemble and the Kafka cluster must be deleted manually. Depending on the reclaim policy of your cluster's PersistentVolumes, you may also need to delete the backing PVs manually.
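If the PVCs carry app labels from the original manifests (an assumption; adjust the selectors to whatever labels your ZooKeeper and Kafka manifests actually use), the cleanup might look like this:

```
# Delete the PVCs left behind by the StatefulSets.
# The label selectors below are assumptions; match them to your manifests.
$ kubectl delete pvc -l app=zk
$ kubectl delete pvc -l app=kafka

# If the backing PVs use the Retain reclaim policy, remove them as well.
$ kubectl get pv
$ kubectl delete pv <pv-name>
```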
# DaemonSet rolling update, history, and rollback
In this section, we’re going to show you how to perform a rolling update on a DaemonSet, inspect its rollout history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a [Prometheus node exporter](https://github.com/prometheus/node_exporter) on each Kubernetes node in the cluster. The node exporter exposes node metrics that the Prometheus monitoring system can scrape. For the sake of simplicity, we’ve omitted the installation of the [Prometheus server](https://github.com/prometheus/prometheus) and the service for [communication with DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods) from this blog post.
## Prerequisites
To follow along with this section of the blog, you need a working Kubernetes 1.7 cluster and kubectl version 1.7 or later. If you followed along with the first section, you can use the same cluster.
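If you are unsure which versions you are running, a quick check of both the client and the server looks like this:

```
$ kubectl version --short
```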
## DaemonSet rolling update

First, prepare the node exporter DaemonSet manifest to run a v0.13 Prometheus node exporter on every node in the cluster:
```
$ cat >> node-exporter-v0.13.yaml <<EOF
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: node-exporter
      name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.13.0
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true
EOF
```
Note that you need to enable the DaemonSet rolling update feature by explicitly setting the DaemonSet's .spec.updateStrategy.type to RollingUpdate.
Apply the manifest to create the node exporter DaemonSet:
```
$ kubectl apply -f node-exporter-v0.13.yaml --record
daemonset "node-exporter" created
```
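If you want to double-check that the live object picked up the RollingUpdate strategy, you can read it back with jsonpath (a sanity check, not a required step):

```
$ kubectl get ds node-exporter -o jsonpath='{.spec.updateStrategy.type}'
```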
Wait for the first DaemonSet rollout to complete:
```
$ kubectl rollout status ds node-exporter
daemon set "node-exporter" successfully rolled out
```
You should see that each of your nodes runs one copy of the node exporter pod:
```
$ kubectl get pods -l app=node-exporter -o wide
```
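As a quick cross-check, the DaemonSet status should report one desired and one current pod per node (optional; the node count is whatever your cluster has):

```
$ kubectl get nodes
$ kubectl get ds node-exporter
```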
To perform a rolling update on the node exporter DaemonSet, prepare a manifest that includes the v0.14 Prometheus node exporter:
```
$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/v0.14.0/g" > node-exporter-v0.14.yaml
```
Then apply the v0.14 node exporter DaemonSet:
```
$ kubectl apply -f node-exporter-v0.14.yaml --record
daemonset "node-exporter" configured
```
Wait for the DaemonSet rolling update to complete:
```
$ kubectl rollout status ds node-exporter
...
Waiting for rollout to finish: 3 out of 4 new pods have been updated...
Waiting for rollout to finish: 3 of 4 updated pods are available...
daemon set "node-exporter" successfully rolled out
```
We just triggered a DaemonSet rolling update by updating the DaemonSet template. By default, one old DaemonSet pod will be killed and one new DaemonSet pod will be created at a time.
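The pace of the rollout is governed by .spec.updateStrategy.rollingUpdate.maxUnavailable, which defaults to 1. If you wanted to replace, say, two pods at a time, the updateStrategy portion of the manifest could be written like this (a sketch; 2 is only an illustrative value):

```
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 2
```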
Now we’ll cause a rollout to fail by updating the image to an invalid value:
```
$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/bad/g" > node-exporter-bad.yaml
$ kubectl apply -f node-exporter-bad.yaml --record
daemonset "node-exporter" configured
```
Notice that the rollout never finishes:
```
$ kubectl rollout status ds node-exporter
Waiting for rollout to finish: 0 out of 4 new pods have been updated...
Waiting for rollout to finish: 1 out of 4 new pods have been updated…