| `--ulimit rtprio=<value>` | The maximum real-time priority allowed for the container. You also need the `--cap-add=sys_nice` flag. |
The following example command sets each of these three flags on a `debian:jessie`
container.
```console
$ docker run -it \
    --cpu-rt-runtime=950000 \
    --ulimit rtprio=99 \
    --cap-add=sys_nice \
    debian:jessie
```
If the kernel or Docker daemon isn't configured correctly, an error occurs.
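To confirm that real-time scheduling actually works inside the container, one option is to run a command under a real-time policy with `chrt` from util-linux. The following is a sketch; it assumes the `chrt` binary is available in the image:
```console
$ docker run -it --rm \
    --cpu-rt-runtime=950000 \
    --ulimit rtprio=99 \
    --cap-add=sys_nice \
    debian:jessie \
    chrt --fifo 50 echo "real-time scheduling works"
```
If the flags are accepted and the kernel permits it, the command prints the message; otherwise `chrt` reports a permission error.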
## GPU
### Access an NVIDIA GPU
#### Prerequisites
Visit the official [NVIDIA drivers page](https://www.nvidia.com/Download/index.aspx)
to download and install the proper drivers. Reboot your system once you have
done so.
Verify that your GPU is running and accessible.
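For example, running `nvidia-smi` on the host is a common way to confirm that the driver can see the GPU. This assumes the driver installed successfully and the tool is on your `PATH`:
```console
$ lspci | grep -i nvidia
$ nvidia-smi
```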
#### Install nvidia-container-toolkit
Follow the official NVIDIA Container Toolkit [installation instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
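After installing the toolkit, NVIDIA's guide has you register the NVIDIA runtime with Docker and restart the daemon. The following is a sketch of that step using the `nvidia-ctk` tool shipped with the toolkit; refer to the linked guide for the authoritative commands:
```console
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
```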
#### Expose GPUs for use
Include the `--gpus` flag when you start a container to access GPU resources.
Specify how many GPUs to use. For example:
```console
$ docker run -it --rm --gpus all ubuntu nvidia-smi
```
Exposes all available GPUs and returns a result akin to the following:
```bash
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 00000000:00:03.0 Off |                  N/A |
| N/A   36C    P0    39W / 125W |      0MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
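You can also pass a number instead of `all` to limit the container to that many devices. For example, the following should expose two GPUs, assuming the host has at least two:
```console
$ docker run -it --rm --gpus 2 ubuntu nvidia-smi
```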
Use the `device` option to specify GPUs. For example:
```console
$ docker run -it --rm --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ubuntu nvidia-smi
```
Exposes that specific GPU.
```console
$ docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi
```
Exposes the first and third GPUs.
> [!NOTE]
>
> NVIDIA GPUs can only be accessed by systems running a single engine.
#### Set NVIDIA capabilities
You can set capabilities manually. For example, on Ubuntu you can run the
following:
```console
$ docker run --gpus 'all,capabilities=utility' --rm ubuntu nvidia-smi
```
This enables the `utility` driver capability, which adds the `nvidia-smi` tool to
the container.
Capabilities as well as other configurations can be set in images via
environment variables. More information on valid variables can be found in the
[nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html)
documentation. These variables can be set in a Dockerfile.
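For example, a Dockerfile along the following lines requests the `compute` and `utility` capabilities for containers built from the image. The base image and values here are illustrative; see the nvidia-container-toolkit documentation for the full list of variables:
```dockerfile
# Base image and values are illustrative; adjust for your use case.
FROM ubuntu:22.04

# Expose all GPUs to containers started from this image.
ENV NVIDIA_VISIBLE_DEVICES=all

# Request the compute (CUDA) and utility (nvidia-smi) driver capabilities.
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
```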
You can also use CUDA images, which set these variables automatically. See the
official [CUDA images](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda)
NGC catalog page.
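As a quick check with one of those images (the tag below is only an example; pick a current one from the catalog):
```console
$ docker run -it --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```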