Wait for the DaemonSet rolling update to complete:
```
$ kubectl rollout status ds node-exporter
...
Waiting for rollout to finish: 3 out of 4 new pods have been updated...
Waiting for rollout to finish: 3 of 4 updated pods are available...
daemon set "node-exporter" successfully rolled out
```
We just triggered a DaemonSet rolling update by updating the DaemonSet's pod template. By default, one old DaemonSet pod is killed and one new pod is created at a time.
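This one-pod-at-a-time behavior is governed by the DaemonSet's update strategy. A minimal sketch of the relevant stanza, with the default spelled out explicitly (under the RollingUpdate strategy, maxUnavailable defaults to 1):
```
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # default: at most one node's pod is down during the update
```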
Now we’ll cause a rollout to fail by updating the image to an invalid value:
```
$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/bad/g" > node-exporter-bad.yaml
$ kubectl apply -f node-exporter-bad.yaml --record
daemonset "node-exporter" configured
```
Notice that the rollout never finishes:
```
$ kubectl rollout status ds node-exporter
Waiting for rollout to finish: 0 out of 4 new pods have been updated...
Waiting for rollout to finish: 1 out of 4 new pods have been updated...
# Use ^C to exit
```
This behavior is expected. As mentioned earlier, a DaemonSet rolling update kills and creates one pod at a time. Because the new pod never becomes available, the rollout halts, preventing the invalid specification from propagating to more than one node. StatefulSet rolling updates behave the same way when an update fails: the rollout stays blocked until it is corrected, either by rolling back or by rolling forward with a fixed specification.
```
$ kubectl get pods -l app=node-exporter
NAME                  READY     STATUS         RESTARTS   AGE
node-exporter-f2n14   0/1       ErrImagePull   0          3m
...
# N = number of nodes
$ kubectl get ds node-exporter
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-exporter   N         N         N-1     1            N           <none>          46m
```
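To see why the new pod never becomes available, inspect it directly, using the pod name from the listing above:
```
$ kubectl describe pod node-exporter-f2n14
```
The Events section at the end of the output reports the failed image pull.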
## DaemonSet history, rollbacks, and rolling forward
Next, perform a rollback. Take a look at the node exporter DaemonSet rollout history:
```
$ kubectl rollout history ds node-exporter
daemonsets "node-exporter"
REVISION        CHANGE-CAUSE
1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
2               kubectl apply --filename=node-exporter-v0.14.yaml --record=true
3               kubectl apply --filename=node-exporter-bad.yaml --record=true
```
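The CHANGE-CAUSE column is populated from the kubernetes.io/change-cause annotation that --record writes onto the DaemonSet. You can read the most recent one back directly; note the escaped dots in the jsonpath expression:
```
$ kubectl get ds node-exporter \
    -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'
```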
Check the details of the revision you want to roll back to:
```
$ kubectl rollout history ds node-exporter --revision=2
daemonsets "node-exporter" with revision #2
Pod Template:
  Labels:       app=node-exporter
  Containers:
   node-exporter:
    Image:      prom/node-exporter:v0.14.0
    Port:       9100/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
```
You can quickly roll back to any DaemonSet revision you found through kubectl rollout history:
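```
# kubectl rollout undo is the standard command for this; --to-revision selects the target
$ kubectl rollout undo ds node-exporter --to-revision=2
```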