### Control service placement
Swarm services provide a few different ways for you to control scale and
placement of services on different nodes.
- You can specify whether the service needs to run a specific number of replicas
or should run globally on every worker node. See
[Replicated or global services](#replicated-or-global-services).
- You can configure the service's
[CPU or memory requirements](#reserve-memory-or-cpus-for-a-service), and the
service only runs on nodes which can meet those requirements.
- [Placement constraints](#placement-constraints) let you configure the service
to run only on nodes with specific (arbitrary) metadata set, and cause the
deployment to fail if appropriate nodes do not exist. For instance, you can
specify that your service should only run on nodes where an arbitrary label
`pci_compliant` is set to `true`.
- [Placement preferences](#placement-preferences) let you apply an arbitrary
label with a range of values to each node, and spread your service's tasks
across those nodes using an algorithm. Currently, the only supported algorithm
is `spread`, which distributes tasks as evenly as possible. For instance, if you
label each node with a label `rack` which has a value from 1-10, then specify
a placement preference keyed on `rack`, then service tasks are placed as
evenly as possible across all nodes with the label `rack`, after taking other
placement constraints, placement preferences, and other node-specific
limitations into account.
Unlike constraints, placement preferences are best-effort, and a service does
not fail to deploy if no nodes can satisfy the preference. If you specify a
placement preference for a service, nodes that match that preference are
ranked higher when the swarm managers decide which nodes should run the
service tasks. Other factors, such as high availability of the service,
also factor into which nodes are scheduled to run service tasks. For
example, if you have N nodes with the rack label (and then some others), and
your service is configured to run N+1 replicas, the +1 is scheduled on a
node that doesn't already have the service on it if there is one, regardless
of whether that node has the `rack` label or not.
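As a sketch of the `rack` scheme described above, you could first attach the label to each node and then create a service with a spread preference. The node names, replica count, and image here are illustrative assumptions, not values from this guide:

```console
$ docker node update --label-add rack=1 node1
$ docker node update --label-add rack=2 node2

$ docker service create \
  --replicas 6 \
  --name my_redis \
  --placement-pref 'spread=node.labels.rack' \
  redis:7
```

With two distinct `rack` values, the scheduler tries to divide the six tasks evenly between the two groups of nodes, subject to any other constraints and resource limits.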
#### Replicated or global services
Swarm mode has two types of services: replicated and global. For replicated
services, you specify the number of replica tasks for the swarm manager to
schedule onto available nodes. For global services, the scheduler places one
task on each available node that meets the service's
[placement constraints](#placement-constraints) and
[resource requirements](#reserve-memory-or-cpus-for-a-service).
You control the type of service using the `--mode` flag. If you don't specify a
mode, the service defaults to `replicated`. For replicated services, you specify
the number of replica tasks you want to start using the `--replicas` flag. For
example, to start a replicated nginx service with 3 replica tasks:
```console
$ docker service create \
--name my_web \
--replicas 3 \
nginx
```
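Once a replicated service is running, you can change the replica count without recreating the service by using `docker service scale` (or, equivalently, `docker service update --replicas`). The target of 5 below is just an example:

```console
$ docker service scale my_web=5
```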
To start a global service on each available node, pass `--mode global` to
`docker service create`. Every time a new node becomes available, the scheduler
places a task for the global service on the new node. For example, to start a
service that runs alpine on every node in the swarm:
```console
$ docker service create \
--name myservice \
--mode global \
alpine top
```
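To confirm where the scheduler placed the tasks, you can list them with `docker service ps`; for a global service this shows one task per node that meets the service's constraints:

```console
$ docker service ps myservice
```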
#### Placement constraints
Service constraints let you set criteria for a node to meet before the scheduler
deploys a service to the node. You can apply constraints to the
service based on node attributes and metadata or engine metadata. For more
information on constraints, refer to the `docker service create`
[CLI reference](/reference/cli/docker/service/create.md).
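As a sketch of the `pci_compliant` example from earlier, you could label a node and then constrain the service to nodes carrying that label. The node name and image are assumptions for illustration:

```console
$ docker node update --label-add pci_compliant=true node3

$ docker service create \
  --name my_db \
  --constraint 'node.labels.pci_compliant==true' \
  postgres:16
```

If no node has the `pci_compliant=true` label, the tasks cannot be scheduled and the deployment does not proceed, unlike a placement preference, which is best-effort.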
#### Reserve memory or CPUs for a service
To reserve a given amount of memory or number of CPUs for a service, use the
`--reserve-memory` or `--reserve-cpu` flags. If no available nodes can satisfy