{{< tab name="From Docker Desktop">}}

Select **Models**, select the **Local** tab, and click the play button.
The interactive chat screen opens.

{{< /tab >}}
{{< tab name="From the Docker CLI" >}}

Use the [`docker model run` command](/reference/cli/docker/model/run/).

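For example, to run a model and send it a single prompt (a minimal sketch, assuming the `ai/smollm2` model):

```console
$ docker model run ai/smollm2 "Give me a fact about whales."
```
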
{{< /tab >}}
{{< /tabs >}}

## Troubleshooting

To troubleshoot potential issues, display the logs:

{{< tabs group="release" >}}
{{< tab name="From Docker Desktop">}}

Select **Models** and select the **Logs** tab.

{{< /tab >}}
{{< tab name="From the Docker CLI">}}

Use the [`docker model logs` command](/reference/cli/docker/model/logs/).

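For example:

```console
$ docker model logs
```
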
{{< /tab >}}
{{< /tabs >}}

## Example: Integrate Docker Model Runner into your software development lifecycle

You can now start building your Generative AI application powered by the Docker Model Runner.

If you want to try an existing GenAI application, follow these steps:

1. Set up the sample app by cloning the following repository:

   ```console
   $ git clone https://github.com/docker/hello-genai.git
   ```

2. In your terminal, navigate to the `hello-genai` directory.

3. Run `run.sh` to pull the chosen model and start the app(s), as shown in the sketch after this list.

4. Open your app in the browser at the addresses specified in the repository [README](https://github.com/docker/hello-genai).
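
For reference, steps 2 and 3 from the command line (a minimal sketch; the script decides which model to pull):

```console
$ cd hello-genai
$ ./run.sh
```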

You'll see the GenAI app's interface where you can start typing your prompts.

You can now interact with your own GenAI app, powered by a local model. Try a few prompts and notice how fast the responses are — all running on your machine with Docker.

## FAQs

### What models are available?

All the available models are hosted in the [public Docker Hub namespace of `ai`](https://hub.docker.com/u/ai).
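For example, to pull one of them locally (a minimal sketch, assuming the `ai/smollm2` model):

```console
$ docker model pull ai/smollm2
```
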

### What CLI commands are available?

See [the reference docs](/reference/cli/docker/model/).

### What API endpoints are available?

Once the feature is enabled, new API endpoints are available under the following base URLs:

{{< tabs >}}
{{< tab name="Docker Desktop">}}

- From containers: `http://model-runner.docker.internal/`
- From host processes: `http://localhost:12434/`, assuming TCP host access is
  enabled on the default port (12434).

{{< /tab >}}
{{< tab name="Docker Engine">}}

- From containers: `http://172.17.0.1:12434/` (with `172.17.0.1` representing the host gateway address)
- From host processes: `http://localhost:12434/`

> [!NOTE]
> The `172.17.0.1` interface may not be available by default to containers
> within a Compose project.
> In this case, add an `extra_hosts` directive to your Compose service YAML:
> 
> ```yaml
> extra_hosts:
>   - "model-runner.docker.internal:host-gateway"
> ```
> Then you can access the Docker Model Runner APIs at `http://model-runner.docker.internal:12434/`.

{{< /tab >}}
{{< /tabs >}}

Docker Model management endpoints:

```text
POST /models/create
GET /models
GET /models/{namespace}/{name}
DELETE /models/{namespace}/{name}
```

OpenAI endpoints:

```text
GET /engines/llama.cpp/v1/models
GET /engines/llama.cpp/v1/models/{namespace}/{name}
POST /engines/llama.cpp/v1/chat/completions
POST /engines/llama.cpp/v1/completions
POST /engines/llama.cpp/v1/embeddings
```

To call these endpoints via a Unix socket (`/var/run/docker.sock`), prefix their path
with `/exp/vDD4.40`.
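
For example, to list models through the Docker socket (a sketch, assuming a `curl` build with Unix socket support):

```console
$ curl --unix-socket /var/run/docker.sock \
    localhost/exp/vDD4.40/models
```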

> [!NOTE]
> You can omit `llama.cpp` from the path. For example: `POST /engines/v1/chat/completions`.

### How do I interact through the OpenAI API?

#### From within a container

To call the `chat/completions` OpenAI endpoint from within another container using `curl`:
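
A minimal sketch, assuming the `ai/smollm2` model has already been pulled:

```console
$ curl http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "ai/smollm2",
        "messages": [
            {"role": "user", "content": "Write a haiku about Docker."}
        ]
    }'
```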
