Wait for the DaemonSet rolling update to complete:



```  
$ kubectl rollout status ds node-exporter

...

Waiting for rollout to finish: 3 out of 4 new pods have been updated...  
Waiting for rollout to finish: 3 of 4 updated pods are available...  
daemon set "node-exporter" successfully rolled out
 ```




We just triggered a DaemonSet rolling update by updating the DaemonSet template. By default, one old DaemonSet pod will be killed and one new DaemonSet pod will be created at a time.
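
To confirm the strategy in effect, you can read the updateStrategy field directly from the live DaemonSet object; this is a quick check rather than part of the walkthrough, and it assumes the node exporter DaemonSet created earlier:

```
# Print the DaemonSet's update strategy. For this walkthrough it should
# report type RollingUpdate, with maxUnavailable defaulting to 1, which
# is why pods are replaced one at a time.
$ kubectl get ds node-exporter -o jsonpath='{.spec.updateStrategy}'
```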



Now we’ll cause a rollout to fail by updating the image to an invalid value:



```  
$ cat node-exporter-v0.13.yaml | sed "s/v0.13.0/bad/g" > node-exporter-bad.yaml


$ kubectl apply -f node-exporter-bad.yaml --record

daemonset "node-exporter" configured
 ```




Notice that the rollout never finishes:



```  
$ kubectl rollout status ds node-exporter   
Waiting for rollout to finish: 0 out of 4 new pods have been updated...  
Waiting for rollout to finish: 1 out of 4 new pods have been updated…

# Use ^C to exit
 ```




This behavior is expected. We mentioned earlier that a DaemonSet rolling update kills and creates one pod at a time. Because the new pod never becomes available, the rollout is halted, preventing the invalid specification from propagating to more than one node. StatefulSet rolling updates behave the same way when an update fails: unsuccessful updates are blocked until they are corrected, either by rolling back or by rolling forward with a fixed specification.



```  
$ kubectl get pods -l app=node-exporter
NAME                  READY     STATUS         RESTARTS   AGE
node-exporter-f2n14   0/1       ErrImagePull   0          3m
...

# N = number of nodes
$ kubectl get ds node-exporter
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-exporter   N         N         N-1     1            N           <none>          46m
 ```
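
To see why the new pod never becomes available, describe it and check its events; the pod name below is taken from the listing above and will differ in your cluster:

```
# Show events for the stuck pod; expect image pull failures, since the
# "bad" image tag does not exist.
$ kubectl describe pod node-exporter-f2n14
```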




## DaemonSet history, rollbacks, and rolling forward

Next, perform a rollback. Take a look at the node exporter DaemonSet rollout history:




```  
$ kubectl rollout history ds node-exporter
daemonsets "node-exporter"
REVISION        CHANGE-CAUSE
1               kubectl apply --filename=node-exporter-v0.13.yaml --record=true
2               kubectl apply --filename=node-exporter-v0.14.yaml --record=true
3               kubectl apply --filename=node-exporter-bad.yaml --record=true
 ```




Check the details of the revision you want to roll back to:



```  
$ kubectl rollout history ds node-exporter --revision=2
daemonsets "node-exporter" with revision #2
Pod Template:
  Labels:       app=node-exporter
  Containers:
   node-exporter:
    Image:      prom/node-exporter:v0.14.0
    Port:       9100/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
 ```




You can quickly roll back to any DaemonSet revision listed by kubectl rollout history.
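
For example, to return to revision 2, the last good node exporter specification, use kubectl rollout undo (a sketch of the command; adjust the revision number to match your own history):

```
# Roll the DaemonSet back to revision 2 (prom/node-exporter:v0.14.0).
$ kubectl rollout undo ds node-exporter --to-revision=2
```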
