and does not reserve the memory in the internal [NodeMap][2] object.
This policy is only supported on Linux.
#### BestEffort policy {#policy-best-effort}
{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}
This policy is only supported on Windows.
On Windows, NUMA node assignment works differently from Linux.
There is no mechanism to ensure that memory access only comes from a specific NUMA node.
Instead, the Windows scheduler selects the most suitable NUMA node based on the assigned CPU(s),
and it may use other NUMA nodes if it deems that optimal.
The policy does track the amount of memory available and requested through the internal [NodeMap][2].
The Memory Manager makes a best effort to ensure that enough memory is available on
a NUMA node before making the assignment.
This means that in most cases memory assignment should function as expected.
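Shown below is a minimal sketch of a kubelet configuration that opts in to this policy. The combination of fields (enabling the `WindowsCPUAndMemoryAffinity` feature gate named above together with `memoryManagerPolicy: BestEffort`) is an illustrative assumption, not a verified recipe:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumption: the Windows-only policy is gated by this feature gate
featureGates:
  WindowsCPUAndMemoryAffinity: true
# assumption: the policy value matches the heading above
memoryManagerPolicy: BestEffort
```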
### Reserved memory flag
The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) mechanism
is commonly used by node administrators to reserve Kubernetes node system resources for the kubelet
or operating system processes in order to enhance node stability.
A dedicated set of flags can be used to set the total amount of reserved memory
for a node. This pre-configured value is subsequently used to calculate
the real amount of a node's "allocatable" memory available to pods.
The Kubernetes scheduler incorporates "allocatable" to optimize the pod scheduling process.
These flags are `--kube-reserved`, `--system-reserved` and `--eviction-hard`.
The sum of their values accounts for the total amount of reserved memory.
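As a quick worked example (the quantities below are assumptions chosen purely for illustration), a kubelet started with these values reserves `3172Mi` of memory in total:

```shell
# illustrative values only
--kube-reserved=memory=2Gi              # 2048Mi
--system-reserved=memory=1Gi            # 1024Mi
--eviction-hard=memory.available<100Mi  # 100Mi
# total reserved memory: 2048Mi + 1024Mi + 100Mi = 3172Mi
# node "allocatable" memory = node memory capacity - 3172Mi
```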
The kubelet's `--reserved-memory` flag allows this total reserved memory
to be split (by a node administrator) and reserved across multiple NUMA nodes accordingly.
The flag specifies a comma-separated list of memory reservations of different memory types per NUMA node.
Memory reservations across multiple NUMA nodes can be specified using a semicolon as the separator.
This parameter is only useful in the context of the Memory Manager feature.
The Memory Manager will not use this reserved memory for the allocation of container workloads.
For example, if you have a NUMA node "NUMA0" with `10Gi` of memory available, and
`--reserved-memory` is specified to reserve `1Gi` of memory at "NUMA0",
the Memory Manager assumes that only `9Gi` is available for containers.
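Expressed as a kubelet flag, that reservation would look like the following ("NUMA0" corresponds to NUMA node index `0`; the quantities mirror the example above):

```shell
--reserved-memory 0:memory=1Gi
# NUMA node 0 has 10Gi; the Memory Manager treats only 9Gi as available for containers
```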
You can omit this parameter; however, you should be aware that the quantity of reserved memory
from all NUMA nodes should be equal to the quantity of memory specified by the
[Node Allocatable feature](/docs/tasks/administer-cluster/reserve-compute-resources/).
If at least one node allocatable parameter is non-zero, you will need to specify
`--reserved-memory` for at least one NUMA node.
In fact, the `eviction-hard` threshold is `100Mi` by default, so
if the `Static` policy is used, `--reserved-memory` is obligatory.
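To make the sum requirement concrete, here is a sketch with assumed values: the per-NUMA-node reservations must add up to exactly the total reserved through the node allocatable flags.

```shell
# illustrative values only
--kube-reserved=memory=2Gi
--system-reserved=memory=1Gi
--eviction-hard=memory.available<100Mi
# total to account for: 2048Mi + 1024Mi + 100Mi = 3172Mi
--reserved-memory '0:memory=2048Mi;1:memory=1124Mi'
# 2048Mi + 1124Mi = 3172Mi, matching the total above
```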
Also, avoid the following configurations:
1. duplicates, i.e. the same NUMA node or memory type, but with a different value;
1. setting a zero limit for any memory type;
1. NUMA node IDs that do not exist in the machine hardware;
1. memory type names other than `memory` or `hugepages-<size>`
   (hugepages of a particular `<size>` should also exist).
Syntax:
`--reserved-memory N:memory-type1=value1,memory-type2=value2,...`
* `N` (integer) - NUMA node index, e.g. `0`
* `memory-type` (string) - represents memory type:
* `memory` - conventional memory
* `hugepages-2Mi` or `hugepages-1Gi` - hugepages
* `value` (string) - the quantity of reserved memory, e.g. `1Gi`
Example usage:
`--reserved-memory 0:memory=1Gi,hugepages-1Gi=2Gi`
or
`--reserved-memory 0:memory=1Gi --reserved-memory 1:memory=2Gi`
or
`--reserved-memory '0:memory=1Gi;1:memory=2Gi'`
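If you manage the kubelet through a configuration file rather than command-line flags, the same reservations can be expressed with the `reservedMemory` field of `KubeletConfiguration`; the quantities below mirror the last flag example and are illustrative only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi
- numaNode: 1
  limits:
    memory: 2Gi
```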
When you specify values for the `--reserved-memory` flag, you must comply with the setting that