---
title: Utilizing the NUMA-aware Memory Manager

reviewers:
- klueska
- derekwaynecarr

content_type: task
min-kubernetes-server-version: v1.32
weight: 410
---

<!-- overview -->

{{< feature-state feature_gate_name="MemoryManager" >}}

The Kubernetes *Memory Manager* enables the feature of guaranteed memory (and hugepages)
allocation for pods in the `Guaranteed` {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}.

The Memory Manager employs a hint generation protocol to yield the most suitable NUMA affinity for a pod.
The Memory Manager feeds the central manager (*Topology Manager*) with these affinity hints.
Based on both the hints and the Topology Manager policy, the pod is either admitted to or rejected from the node.

Moreover, the Memory Manager ensures that the memory which a pod requests
is allocated from a minimum number of NUMA nodes.

The Memory Manager is only pertinent to Linux-based hosts.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

To align memory resources with other requested resources in a Pod spec:

- the CPU Manager should be enabled and a proper CPU Manager policy should be configured on a Node.
  See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/);
- the Topology Manager should be enabled and a proper Topology Manager policy should be configured on a Node.
  See [control Topology Management Policies](/docs/tasks/administer-cluster/topology-manager/).
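
Putting these prerequisites together, a minimal `KubeletConfiguration` sketch that enables all three managers on a node might look like the following; the policy values are illustrative, and the right choices depend on your workload and on the pages linked above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Illustrative policy choices; pick the policies that match your workload.
cpuManagerPolicy: static
topologyManagerPolicy: single-numa-node
memoryManagerPolicy: Static
# Note: the static CPU Manager policy also requires some CPU to be reserved
# (for example via kubeReserved, systemReserved, or reservedSystemCPUs),
# and the Static Memory Manager policy requires a memory reservation,
# configured with the --reserved-memory flag or the reservedMemory field
# (see the [Reserved memory flag](#reserved-memory-flag) section).
```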

Starting from v1.22, the Memory Manager is enabled by default through the `MemoryManager`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

Prior to v1.22, the `kubelet` must be started with the following flag
in order to enable the Memory Manager feature:

`--feature-gates=MemoryManager=true`

## How does the Memory Manager operate?

The Memory Manager currently offers guaranteed memory (and hugepages) allocation
for pods in the `Guaranteed` QoS class.
To immediately put the Memory Manager into operation follow the guidelines in the section
[Memory Manager configuration](#memory-manager-configuration), and subsequently,
prepare and deploy a `Guaranteed` pod as illustrated in the section
[Placing a Pod in the Guaranteed QoS class](#placing-a-pod-in-the-guaranteed-qos-class).
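
As a minimal sketch (the name, image, and sizes below are placeholders), a pod lands in the `Guaranteed` QoS class when every container specifies CPU and memory limits and the requests equal those limits:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example   # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      # requests equal to limits for every resource => Guaranteed QoS class
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: "2"
        memory: 1Gi
```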

The Memory Manager is a Hint Provider: it supplies topology hints to
the Topology Manager, which then aligns the requested resources according to these hints.
On Linux, it also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
The complete flow diagram concerning pod admission and deployment process is illustrated in
[Memory Manager KEP: Design Overview][4] and below:

![Memory Manager in the pod admission and deployment process](/images/docs/memory-manager-diagram.svg)

During this process, the Memory Manager updates its internal counters stored in
[Node Map and Memory Maps][2] to manage guaranteed memory allocation.

The Memory Manager updates the Node Map during the startup and runtime as follows.

### Startup

This occurs once a node administrator employs the `--reserved-memory` flag (see the
[Reserved memory flag](#reserved-memory-flag) section).
In this case, the Node Map is updated to reflect this reservation, as illustrated in
[Memory Manager KEP: Memory Maps at start-up (with examples)][5].

The administrator must provide the `--reserved-memory` flag when the `Static` policy is configured.
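
For illustration only (the NUMA node ID and the amounts depend on the actual machine and on the node's other memory reservations), such a reservation can be expressed in the kubelet configuration file as follows:

```yaml
# Roughly equivalent to the command-line form:
#   --reserved-memory 0:memory=1Gi,hugepages-1Gi=2Gi
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi
    hugepages-1Gi: 2Gi
```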

### Runtime

The reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6] illustrates
how a successful pod deployment affects the Node Map, and it also relates to
how potential Out-of-Memory (OOM) situations are handled further by Kubernetes or the operating system.

An important topic in the context of Memory Manager operation is the management of NUMA groups.
Whenever a pod's memory request exceeds the capacity of a single NUMA node, the Memory Manager
attempts to create a group that comprises several NUMA nodes and thus offers extended memory capacity.
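
As a hypothetical illustration, assume a machine with two NUMA nodes that each offer 10Gi of allocatable memory; a container with the resources below cannot be satisfied from a single NUMA node, so the Memory Manager would form a group spanning NUMA nodes 0 and 1 and pin the pod's memory to it:

```yaml
# Hypothetical request: 15Gi exceeds the 10Gi capacity of a single NUMA node,
# so the memory is served from a group of two NUMA nodes (20Gi combined).
resources:
  limits:
    cpu: "2"
    memory: 15Gi
  requests:
    cpu: "2"
    memory: 15Gi
```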
