The Memory Manager updates the Node Map during startup and at runtime, as follows.
### Startup
This occurs when a node administrator provides the `--reserved-memory` flag (see the
[Reserved memory flag](#reserved-memory-flag) section).
The Node Map is then updated to reflect this reservation, as illustrated in
[Memory Manager KEP: Memory Maps at start-up (with examples)][5].
The administrator must provide the `--reserved-memory` flag whenever the `Static` policy is configured.
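As a sketch only, the reservation below (expressed here through the `reservedMemory` field of the
kubelet configuration file, which corresponds to the `--reserved-memory` command-line flag; the NUMA
node index and the amount are purely illustrative) is the kind of entry recorded in the Node Map at startup:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Illustrative values: reserve 1Gi of memory on NUMA node 0.
# Equivalent to the command-line flag: --reserved-memory '0:memory=1Gi'
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi
```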
### Runtime
[Memory Manager KEP: Memory Maps at runtime (with examples)][6] illustrates
how a successful pod deployment affects the Node Map, and how potential
Out-of-Memory (OOM) situations are subsequently handled by Kubernetes or by the operating system.
An important topic in the context of Memory Manager operation is the management of NUMA groups.
Whenever a pod's memory request exceeds the capacity of a single NUMA node, the Memory Manager
attempts to create a group that comprises several NUMA nodes and features extended memory capacity.
How this problem is solved is elaborated in
[Memory Manager KEP: How to enable the guaranteed memory allocation over many NUMA nodes?][3].
Also, [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
illustrates how the management of groups occurs.
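For example, on a hypothetical node with two NUMA nodes of 10Gi of allocatable memory each (the node
layout, pod name, and image below are illustrative), a `Guaranteed` pod requesting more memory than a
single NUMA node can provide would lead the Memory Manager to form a group spanning both NUMA nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-numa-example   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # illustrative image
    resources:
      requests:
        cpu: "2"
        memory: 15Gi   # exceeds the 10Gi assumed for a single NUMA node
      limits:
        cpu: "2"
        memory: 15Gi
```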
### Windows Support
{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}
Windows support can be enabled via the `WindowsCPUAndMemoryAffinity` feature gate
and it requires support in the container runtime.
Only the [BestEffort Policy](#policy-best-effort) is supported on Windows.
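A minimal sketch of the kubelet configuration for a Windows node, assuming the feature gate is
available in your Kubernetes version and supported by your container runtime (values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Enables CPU and memory affinity support on Windows nodes.
  WindowsCPUAndMemoryAffinity: true
# Only the BestEffort policy is supported on Windows.
memoryManagerPolicy: BestEffort
```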
## Memory Manager configuration
Other Managers should be pre-configured first. Next, the Memory Manager feature should be enabled
and run with the `Static` policy (see the [Static policy](#policy-static) section).
Optionally, some amount of memory can be reserved for system or kubelet processes to increase
node stability (see the [Reserved memory flag](#reserved-memory-flag) section).
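Putting these pieces together, a kubelet configuration for a Linux node might look like the sketch
below. The reservation amounts are illustrative, and the total listed under `reservedMemory` is chosen
to match the sum of `kubeReserved`, `systemReserved`, and the hard eviction threshold for memory
(1Gi + 1Gi + 100Mi = 2148Mi):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pre-configure the other managers first.
cpuManagerPolicy: static
topologyManagerPolicy: single-numa-node
# Enable the Memory Manager with the Static policy.
memoryManagerPolicy: Static
# Reserve resources for kubelet and system processes (illustrative amounts;
# the CPU reservation is required by the static CPU Manager policy).
kubeReserved:
  cpu: "1"
  memory: 1Gi
systemReserved:
  memory: 1Gi
evictionHard:
  memory.available: "100Mi"
# The per-NUMA-node reservations must add up to the memory reservations above.
reservedMemory:
- numaNode: 0
  limits:
    memory: 2148Mi
```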
### Policies
Memory Manager supports three policies. You can select a policy via the kubelet flag `--memory-manager-policy`:
* `None` (default)
* `Static` (Linux only)
* `BestEffort` (Windows only)
#### None policy {#policy-none}
This is the default policy and does not affect the memory allocation in any way.
It behaves the same as if the Memory Manager were not present at all.
The `None` policy returns a default topology hint. This special hint denotes that the Hint Provider
(the Memory Manager in this case) has no preference for NUMA affinity with any resource.
#### Static policy {#policy-static}
In the case of a `Guaranteed` pod, the `Static` Memory Manager policy returns topology hints
relating to the set of NUMA nodes where the memory can be guaranteed,
and reserves the memory by updating the internal [NodeMap][2] object.
In the case of a `BestEffort` or `Burstable` pod, the `Static` Memory Manager policy sends back
the default topology hint, as there is no request for guaranteed memory,
and does not reserve any memory in the internal [NodeMap][2] object.
This policy is only supported on Linux.
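For instance, a pod like the following sketch (the name and image are illustrative) falls into the
`Guaranteed` QoS class because its requests equal its limits, so the `Static` policy generates NUMA
affinity hints and reserves memory for it. If the memory request were lower than the limit, the pod
would be `Burstable` and would receive only the default hint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # illustrative image
    resources:
      # requests equal limits, so the pod is in the Guaranteed QoS class
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 2Gi
```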
#### BestEffort policy {#policy-best-effort}
{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}
This policy is only supported on Windows.
On Windows, NUMA node assignment works differently than on Linux.
There is no mechanism to ensure that memory access only comes from a specific NUMA node.
Instead, the Windows scheduler selects the optimal NUMA node based on the CPU(s) assigned,
and Windows might use other NUMA nodes if the scheduler deems that optimal.
The policy does track the amount of memory available and requested through the internal [NodeMap][2].
The Memory Manager makes a best effort to ensure that enough memory is available on
a NUMA node before making the assignment.
This means that in most cases memory assignment should function as expected.
### Reserved memory flag
The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) mechanism