---
title: Docker Model Runner
params:
  sidebar:
    badge:
      color: blue
      text: Beta
    group: AI
weight: 20
description: Learn how to use Docker Model Runner to manage and run AI models.
keywords: Docker, ai, model runner, docker desktop, docker engine, llm
aliases:
  - /desktop/features/model-runner/
  - /model-runner/
---

{{< summary-bar feature_name="Docker Model Runner" >}}

## Key features

- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
- Run and interact with AI models directly from the command line or from the Docker Desktop GUI
- Manage local models and display logs

## How it works

Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).
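
For example, once a model has been pulled you can call the chat completions endpoint with any OpenAI-compatible client. The following sketch uses `curl` and assumes host-side TCP access is enabled on its default port (`12434`) and that the `ai/smollm2` model is available locally; adjust the base URL and path to match how you access the API in your setup (see the endpoint details linked above).

```console
$ curl http://localhost:12434/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "ai/smollm2",
        "messages": [
            {"role": "user", "content": "Say hello from Docker Model Runner."}
        ]
    }'
```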

> [!TIP]
>
> Using Testcontainers or Docker Compose?
> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
> [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.

## Enable Docker Model Runner

### Enable DMR in Docker Desktop

1. Navigate to the **Beta features** tab in settings.
2. Tick the **Enable Docker Model Runner** setting.
3. If you are running on Windows with a supported NVIDIA GPU, you can also tick the **Enable GPU-backed inference** setting.

You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
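
For example, to confirm that the CLI plugin is active, you can check the Model Runner status and list your local models (run `docker model --help` to see the commands available in your version):

```console
$ docker model status
$ docker model list
```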

> [!IMPORTANT]
>
> For Docker Desktop versions 4.41 and earlier, this setting lived under the **Experimental features** tab on the **Features in development** page.

### Enable DMR in Docker Engine

1. Ensure you have installed [Docker Engine](/engine/install/).
2. DMR is available as a package. To install it, run:

   {{< tabs >}}
   {{< tab name="Ubuntu/Debian">}}

   ```console
   $ sudo apt-get update
   $ sudo apt-get install docker-model-plugin
   ```

   {{< /tab >}}
   {{< tab name="RPM-based distributions">}}

   ```console
   $ sudo dnf update
   $ sudo dnf install docker-model-plugin
   ```

   {{< /tab >}}
   {{< /tabs >}}

3. Test the installation:

   ```console
   $ docker model version
   $ docker model run ai/smollm2
   ```

## Pull a model

Models are cached locally.

{{< tabs group="release" >}}
{{< tab name="From Docker Desktop">}}

1. Select **Models** and select the **Docker Hub** tab.
2. Find the model of your choice and select **Pull**.

{{< /tab >}}
{{< tab name="From the Docker CLI">}}

Use the [`docker model pull` command](/reference/cli/docker/), as shown in the example below.

{{< /tab >}}
{{< /tabs >}}
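
For example, to pull from the CLI the `ai/smollm2` model used in the installation test earlier on this page:

```console
$ docker model pull ai/smollm2
```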

## Run a model

{{< tabs group="release" >}}
{{< tab name="From Docker Desktop">}}

Select **Models**, select the **Local** tab, and then select the play button.
The interactive chat screen opens.

{{< /tab >}}
{{< tab name="From the Docker CLI" >}}

Use the [`docker model run` command](/reference/cli/docker/), as shown in the example below.

{{< /tab >}}
{{< /tabs >}}
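
For example, running a model from the CLI without a prompt opens an interactive chat session, while passing a prompt returns a single response. The one-shot form below assumes a recent version of the CLI that accepts a prompt argument:

```console
$ docker model run ai/smollm2 "Give me a fun fact about whales."
```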

## Troubleshooting

To troubleshoot potential issues, display the logs:

{{< tabs group="release" >}}
{{< tab name="From Docker Desktop">}}

Select **Models** and select the **Logs** tab.

{{< /tab >}}
{{< tab name="From the Docker CLI">}}

Use the [`docker model logs` command](/reference/cli/docker/).

{{< /tab >}}
{{< /tabs >}}

## Example: Integrate Docker Model Runner into your software development lifecycle

You can now start building your generative AI application powered by Docker Model Runner.

If you want to try an existing GenAI application, follow these instructions.

1. Set up the sample app. Clone and run the following repository:

   ```console
   $ git clone https://github.com/docker/hello-genai.git
   ```