---
title: Control Topology Management Policies on a node
reviewers:
- ConnorDoyle
- klueska
- lmdaly
- nolancon
- bg-chun
content_type: task
min-kubernetes-server-version: v1.18
weight: 150
---

<!-- overview -->

{{< feature-state state="stable" for_k8s_version="v1.27" >}}

An increasing number of systems leverage a combination of CPUs and hardware accelerators to
support latency-critical execution and high-throughput parallel computation. These include
workloads in fields such as telecommunications, scientific computing, machine learning, financial
services and data analytics. Such hybrid systems comprise a high performance environment.

In order to extract the best performance, optimizations related to CPU isolation, memory and
device locality are required. However, in Kubernetes, these optimizations are handled by a
disjoint set of components.

_Topology Manager_ is a kubelet component that aims to coordinate the set of components that are
responsible for these optimizations.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## How topology manager works

Prior to the introduction of Topology Manager, the CPU and Device Manager in Kubernetes made
resource allocation decisions independently of each other. This could result in undesirable
allocations on multi-socket systems, causing performance- and latency-sensitive applications to
suffer. Undesirable in this case means, for example, CPUs and devices being allocated from
different NUMA nodes, thus incurring additional latency.

The Topology Manager is a kubelet component that acts as a source of truth so that other kubelet
components can make topology-aligned resource allocation choices.

The Topology Manager provides an interface for components, called *Hint Providers*, to send and
receive topology information. The Topology Manager has a set of node level policies which are
explained below.

The Topology Manager receives topology information from the *Hint Providers* as a bitmask
denoting the NUMA nodes available and a preferred-allocation indication. The Topology Manager
policies perform a set of operations on the hints provided and converge on the single hint
determined by the policy to give the optimal result. If an undesirable hint is produced, the
preferred field for the hint is set to false. Under the current policies, the preferred hint is
the one with the narrowest preferred mask. Depending on the policy configured, the pod can be
accepted or rejected from the node based on the selected hint. The hint is then stored in the
Topology Manager for use by the *Hint Providers* when making their resource allocation decisions.
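The merge step above can be sketched in simplified form. This is an illustrative Python sketch, not the kubelet's actual implementation (which is written in Go): it assumes a hypothetical `Hint` class holding a NUMA bitmask and a preferred flag, takes the cross-product of each provider's hints, bitwise-ANDs the masks, and picks the narrowest preferred result.

```python
from itertools import product


class Hint:
    """A topology hint: a bitmask of NUMA nodes plus a preferred flag."""

    def __init__(self, numa_mask, preferred):
        self.numa_mask = numa_mask  # e.g. 0b01 = NUMA node 0, 0b10 = node 1
        self.preferred = preferred


def merge_hints(provider_hints):
    """Merge one hint list per provider into a single best hint.

    For every combination of hints (one per provider), AND the NUMA masks;
    the combination is preferred only if all its hints are preferred.
    The best candidate is a preferred hint with the fewest NUMA nodes set.
    """
    best = None
    for combo in product(*provider_hints):
        mask = ~0  # all bits set; narrowed by each provider's mask
        preferred = True
        for hint in combo:
            mask &= hint.numa_mask
            preferred = preferred and hint.preferred
        if mask == 0:
            continue  # providers share no common NUMA node
        candidate = Hint(mask, preferred)
        if best is None or (
            (candidate.preferred, -bin(candidate.numa_mask).count("1"))
            > (best.preferred, -bin(best.numa_mask).count("1"))
        ):
            best = candidate
    # No viable combination: an undesirable hint with preferred=False.
    return best if best is not None else Hint(0, False)
```

For example, if the CPU Manager can satisfy a request on NUMA node 0 alone (preferred) or spread over both nodes (not preferred), and a device plugin offers its device only on node 0, the merged hint lands on node 0 with preferred set to true.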

## Windows Support

{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}

Topology Manager support can be enabled on Windows by using the `WindowsCPUAndMemoryAffinity`
feature gate, and it requires support in the container runtime.

## Topology manager scopes and policies

The Topology Manager currently:

- aligns Pods of all QoS classes.
- aligns the requested resources that Hint Providers provide topology hints for.

If these conditions are met, the Topology Manager will align the requested resources.

In order to customize how this alignment is carried out, the Topology Manager provides two
distinct options: `scope` and `policy`.

The `scope` defines the granularity at which you would like resource alignment to be performed,
for example, at the `pod` or `container` level. The `policy` defines the actual policy used to
carry out the alignment, for example, `best-effort`, `restricted`, or `single-numa-node`.
Details on the various `scopes` and `policies` available today can be found below.
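As an illustration, both options are set in the kubelet configuration file. The fragment below shows one possible combination (field names as in the `KubeletConfiguration` API; `container` scope and the `none` policy are the defaults):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerScope: pod          # or "container" (the default)
topologyManagerPolicy: best-effort # or "none", "restricted", "single-numa-node"
```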

{{< note >}}
To align CPU resources with other requested resources in a Pod spec, the CPU Manager should be
enabled and a proper CPU Manager policy should be configured on a Node.
See [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/).
{{< /note >}}

Title: Control Topology Management Policies on a Node
Summary
This document describes how to manage topology management policies on a Kubernetes node to optimize performance for latency-critical and high-throughput workloads. It explains how the Topology Manager coordinates CPU isolation, memory, and device locality to improve resource allocation and reduce latency. It details the operation of the Topology Manager, its interaction with Hint Providers, and the configuration options for scope and policy to customize resource alignment.