```
               cpu: 250m
               memory: 1Gi
```

## Rolling out a canary

Often, we want to verify an image update or configuration change on a single instance of an application before rolling it out globally. If you set the partition of the update strategy created above to 2, the StatefulSet controller will roll out a [canary](http://whatis.techtarget.com/definition/canary-canary-testing) that can be used to verify that the update is working as intended.
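If you prefer to set the strategy in the manifest rather than via kubectl patch, the equivalent stanza in the StatefulSet spec looks like this (sketching only the relevant fields):

```
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Only Pods with an ordinal greater than or equal to the partition are updated.
      partition: 2
```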



```
$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
statefulset "kafka" patched
```
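If you want to confirm that the patch took effect, you can read the partition back from the StatefulSet's spec (a quick check using kubectl's jsonpath output):

```
$ kubectl get sts kafka -o jsonpath='{.spec.updateStrategy.rollingUpdate.partition}'
2
```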




You can watch the StatefulSet controller update the kafka-2 Pod and then pause: because the partition is 2, only Pods with an ordinal greater than or equal to 2 are updated.



```
$ kubectl get po -lapp=kafka -w
NAME      READY     STATUS              RESTARTS   AGE
kafka-0   1/1       Running             0          50m
kafka-1   1/1       Running             0          10m
kafka-2   1/1       Running             0          29s
kafka-2   1/1       Terminating         0          34s
kafka-2   0/1       Terminating         0          38s
kafka-2   0/1       Terminating         0          39s
kafka-2   0/1       Terminating         0          39s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Terminating         0          20s
kafka-2   0/1       Terminating         0          20s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       Pending             0          0s
kafka-2   0/1       ContainerCreating   0          0s
kafka-2   0/1       Running             0          19s
kafka-2   1/1       Running             0          22s
```
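One way to confirm that only kafka-2 is running the updated revision is to compare each Pod's controller-revision-hash label, which the StatefulSet controller stamps on the Pods it manages. The `-L` flag adds the label as an output column (the hash values below are illustrative; yours will differ):

```
$ kubectl get po -lapp=kafka -L controller-revision-hash
NAME      READY     STATUS    RESTARTS   AGE       CONTROLLER-REVISION-HASH
kafka-0   1/1       Running   0          55m       kafka-7c5d8c57b7
kafka-1   1/1       Running   0          15m       kafka-7c5d8c57b7
kafka-2   1/1       Running   0          5m        kafka-6f8b49d7f4
```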

## Phased roll outs

Similar to rolling out a canary, you can roll out updates based on a phased progression (e.g. linear, geometric, or exponential roll outs).



If you patch the kafka StatefulSet to set the partition to 1, the StatefulSet controller updates one more broker.



```
$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
statefulset "kafka" patched
```




If you set it to 0, the StatefulSet controller updates the final broker and completes the update.



```
$ kubectl patch sts kafka -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
statefulset "kafka" patched
```
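Once the partition reaches 0 and all Pods have been updated, the StatefulSet's current revision converges to its update revision. You can check for this with jsonpath (the revision names shown are illustrative):

```
$ kubectl get sts kafka -o jsonpath='{.status.currentRevision} {.status.updateRevision}'
kafka-6f8b49d7f4 kafka-6f8b49d7f4
```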




Note that you don’t have to decrement the partition by one. For a larger StatefulSet--for example, one with 100 replicas--you might use a progression more like 100, 99, 90, 50, 0. In this case, you would stage your update, deploy a canary, roll out to 10 instances, update fifty percent of the Pods, and then complete the update.
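A minimal sketch of what such a staged progression might look like as a shell loop, pausing for manual verification between stages (the partition values follow the example progression above; the verification step is just a placeholder):

```
for p in 100 99 90 50 0; do
  kubectl patch sts kafka -p \
    "{\"spec\":{\"updateStrategy\":{\"type\":\"RollingUpdate\",\"rollingUpdate\":{\"partition\":$p}}}}"
  # Pause so you can verify the newly updated brokers are healthy before the next stage.
  read -p "Partition set to $p. Press enter to continue to the next stage..."
done
```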

## Cleaning up

To delete the API objects created above, you can use kubectl delete on the two manifests you used to create the ZooKeeper ensemble and the Kafka cluster.



```
$ kubectl delete -f kafka_mini.yaml
service "kafka-hs" deleted
poddisruptionbudget "kafka-pdb" deleted
statefulset "kafka" deleted

$ kubectl delete -f zookeeper_mini.yaml
service "zk-hs" deleted
service "zk-cs" deleted
poddisruptionbudget "zk-pdb" deleted
statefulset "zk" deleted
```




By design, the StatefulSet controller does not delete any persistent volume claims (PVCs): the PVCs created for the ZooKeeper ensemble and the Kafka cluster must be manually deleted. Depending on the storage reclamation policy of your cluster, you may also need to manually delete the backing PVs.
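Assuming the PVCs carry app labels matching the Pods they were created for (this depends on the volumeClaimTemplates in your manifests, so it is worth confirming first), you could remove them like this:

```
# Verify the labels before deleting; the app=kafka / app=zk selectors are assumptions.
$ kubectl get pvc --show-labels
$ kubectl delete pvc -lapp=kafka
$ kubectl delete pvc -lapp=zk
```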

# DaemonSet rolling update, history, and rollback

In this section, we’re going to show you how to perform a rolling update on a DaemonSet, look at its history, and then perform a rollback after a bad rollout. We will use a DaemonSet to deploy a [Prometheus node exporter](https://github.com/prometheus/node_exporter) on each Kubernetes node in the cluster. These node exporters expose node metrics for the Prometheus monitoring system to scrape. For the sake of simplicity, we’ve omitted the installation of the [Prometheus server](https://github.com/prometheus/prometheus) and the service for [communication with DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods) from this blog post.
