              capabilities: [gpu]
```

Run with Docker Compose:

```console
$ docker compose up
Creating network "gpu_default" with the default driver
Creating gpu_test_1 ... done
Attaching to gpu_test_1    
test_1  | +-----------------------------------------------------------------------------+
test_1  | | NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.1     |
test_1  | |-------------------------------+----------------------+----------------------+
test_1  | | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
test_1  | | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
test_1  | |                               |                      |               MIG M. |
test_1  | |===============================+======================+======================|
test_1  | |   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
test_1  | | N/A   23C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
test_1  | |                               |                      |                  N/A |
test_1  | +-------------------------------+----------------------+----------------------+
test_1  |                                                                                
test_1  | +-----------------------------------------------------------------------------+
test_1  | | Processes:                                                                  |
test_1  | |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
test_1  | |        ID   ID                                                   Usage      |
test_1  | |=============================================================================|
test_1  | |  No running processes found                                                 |
test_1  | +-----------------------------------------------------------------------------+
gpu_test_1 exited with code 0

```

On machines hosting multiple GPUs, set the `device_ids` field to target specific GPU devices, or use `count` to limit the number of GPU devices assigned to a service container.
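
For example, a minimal sketch of the `count` form (the `test` service name and the CUDA image tag are illustrative placeholders; substitute any CUDA-enabled image) reserves two of the host's GPUs:

```yaml
services:
  test:
    image: nvidia/cuda:12.3.1-base-ubuntu22.04   # placeholder image; any CUDA-enabled image works
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2               # reserve two GPUs; `count: all` reserves every available GPU
              capabilities: [gpu]
```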

You can use either `count` or `device_ids` in each of your service definitions. An error is returned if you try to combine both, specify an invalid device ID, or use a value of `count` that's higher than the number of GPUs in your system.
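
As a sketch of the `device_ids` form (the device IDs `0` and `3` are hypothetical and depend on your host), the following reservation pins the container to two specific GPUs:

```yaml
services:
  test:
    image: nvidia/cuda:12.3.1-base-ubuntu22.04   # placeholder image
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0", "3"]   # GPU indices as reported by nvidia-smi; don't combine with `count`
              capabilities: [gpu]
```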
