The idea is that when a cluster is using nodes that have many cores,
`coresPerReplica` dominates. When a cluster is using nodes that have fewer
cores, `nodesPerReplica` dominates.
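The `linear` pattern described above can be sketched as a short function. This is an illustrative Python rendition of the formula documented by cluster-proportional-autoscaler (take the larger of the cores-based and nodes-based ceilings, clamped by optional min/max), not the project's actual Go implementation; the parameter names mirror the ConfigMap keys.

```python
import math

def linear_replicas(cores, nodes, cores_per_replica, nodes_per_replica,
                    min_replicas=1, max_replicas=0):
    """Sketch of the "linear" control pattern:
    replicas = max(ceil(cores / coresPerReplica),
                   ceil(nodes / nodesPerReplica)),
    clamped to [min, max]; a max of 0 means "no upper bound"."""
    replicas = max(math.ceil(cores / cores_per_replica),
                   math.ceil(nodes / nodes_per_replica))
    replicas = max(replicas, min_replicas)
    if max_replicas > 0:
        replicas = min(replicas, max_replicas)
    return replicas

# Many-core nodes: the cores term dominates (512/256 = 2 > ceil(8/16) = 1)
print(linear_replicas(cores=512, nodes=8,
                      cores_per_replica=256, nodes_per_replica=16))   # 2
# Few-core nodes: the nodes term dominates (32/16 = 2 > ceil(64/256) = 1)
print(linear_replicas(cores=64, nodes=32,
                      cores_per_replica=256, nodes_per_replica=16))   # 2
```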

There are other supported scaling patterns. For details, see
[cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).

## Disable DNS horizontal autoscaling

There are a few options for disabling DNS horizontal autoscaling. Which option
to use depends on your circumstances.

### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas

This option works for all situations. Enter this command:

```shell
kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
```

The output is:

    deployment.apps/kube-dns-autoscaler scaled

Verify that the replica count is zero:

```shell
kubectl get rs --namespace=kube-system
```

The output displays 0 in the DESIRED and CURRENT columns:

    NAME                                  DESIRED   CURRENT   READY   AGE
    ...
    kube-dns-autoscaler-6b59789fc8        0         0         0       ...
    ...

### Option 2: Delete the kube-dns-autoscaler deployment

This option works if kube-dns-autoscaler is under your own control, meaning
that no other process will re-create it:

```shell
kubectl delete deployment kube-dns-autoscaler --namespace=kube-system
```

The output is:

    deployment.apps "kube-dns-autoscaler" deleted

### Option 3: Delete the kube-dns-autoscaler manifest file from the master node

This option works if kube-dns-autoscaler is under control of the (deprecated)
[Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md),
and you have write access to the master node.

Sign in to the master node and delete the corresponding manifest file.
The common path for the kube-dns-autoscaler manifest is:

    /etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

After the manifest file is deleted, the Addon Manager will delete the
kube-dns-autoscaler Deployment.



<!-- discussion -->

## Understanding how DNS horizontal autoscaling works

* The cluster-proportional-autoscaler application is deployed separately from
the DNS service.

* An autoscaler Pod runs a client that polls the Kubernetes API server for the
number of nodes and cores in the cluster.

* A desired replica count is calculated and applied to the DNS backends based on
the current schedulable nodes and cores and the given scaling parameters.

* The scaling parameters and data points are provided to the autoscaler via a
ConfigMap, and it refreshes its parameters table every poll interval so that it
stays up to date with the latest desired scaling parameters.

* Changes to the scaling parameters are allowed without rebuilding or restarting
the autoscaler Pod.

* The autoscaler provides a controller interface to support two control
patterns: *linear* and *ladder*.
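Where *linear* scales smoothly with cluster size, *ladder* uses step-function lookup tables. The sketch below illustrates how a ladder lookup could work: each table maps a threshold (cores or nodes) to a replica count, the entry with the largest threshold not exceeding the current value wins, and the larger of the two lookups is applied. The function name, table shapes, and the floor of one replica are illustrative assumptions, not the autoscaler's actual Go implementation.

```python
def ladder_replicas(cores, nodes, cores_to_replicas, nodes_to_replicas):
    """Sketch of the "ladder" control pattern: step-function lookup
    on cores and nodes, taking the max of the two results."""
    def lookup(value, table):
        # Walk the steps in threshold order; the last step whose
        # threshold the value meets determines the replica count.
        replicas = 1  # assumed floor when no step matches
        for threshold, count in sorted(table):
            if value >= threshold:
                replicas = count
        return replicas
    return max(lookup(cores, cores_to_replicas),
               lookup(nodes, nodes_to_replicas))

# 320 cores falls on the [64, 3] step; 11 nodes falls on the [2, 2] step.
print(ladder_replicas(
    cores=320, nodes=11,
    cores_to_replicas=[[1, 1], [64, 3], [512, 5]],
    nodes_to_replicas=[[1, 1], [2, 2]],
))  # 3
```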



## {{% heading "whatsnext" %}}

* Read about [Guaranteed Scheduling For Critical Add-On Pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
* Learn more about the
[implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).

