---
title: Containerize a generative AI application
linkTitle: Containerize your app
weight: 10
keywords: python, generative ai, genai, llm, neo4j, ollama, containerize, initialize, langchain, openai
description: Learn how to containerize a generative AI (GenAI) application.
aliases:
  - /guides/use-case/genai-pdf-bot/containerize/
---

## Prerequisites

> [!NOTE]
>
> GenAI applications can often benefit from GPU acceleration. Docker Desktop currently supports GPU acceleration only on [Windows with the WSL2 backend](/manuals/desktop/features/gpu.md#using-nvidia-gpus-with-wsl2). Linux users can also access GPU acceleration using a native installation of the [Docker Engine](/manuals/engine/install/_index.md).

- You have installed the latest version of [Docker Desktop](/get-started/get-docker.md) or, if you are a Linux user and are planning to use GPU acceleration, [Docker Engine](/manuals/engine/install/_index.md). Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
- You have a [git client](https://git-scm.com/downloads). The examples in this section use a command-line based git client, but you can use any client.

## Overview

This section walks you through containerizing a generative AI (GenAI) application using Docker Desktop.

> [!NOTE]
>
> You can see more samples of containerized GenAI applications in the [GenAI Stack](https://github.com/docker/genai-stack) demo applications.

## Get the sample application

The sample application used in this guide is a modified version of the PDF Reader application from the [GenAI Stack](https://github.com/docker/genai-stack) demo applications. The application is a full stack Python application that lets you ask questions about a PDF file.

The application uses [LangChain](https://www.langchain.com/) for orchestration, [Streamlit](https://streamlit.io/) for the UI, [Ollama](https://ollama.ai/) to run the LLM, and [Neo4j](https://neo4j.com/) to store vectors.

Clone the sample application. Open a terminal, navigate to the directory you want to work in, and run the following command to clone the repository:

```console
$ git clone https://github.com/craig-osterhout/docker-genai-sample
```

You should now have the following files in your `docker-genai-sample` directory.

```text
├── docker-genai-sample/
│ ├── .gitignore
│ ├── app.py
│ ├── chains.py
│ ├── env.example
│ ├── requirements.txt
│ ├── util.py
│ ├── LICENSE
│ └── README.md
```

## Initialize Docker assets

Now that you have an application, you can use `docker init` to create the necessary Docker assets to containerize it. Inside the `docker-genai-sample` directory, run the `docker init` command. `docker init` provides some default configuration, but you'll need to answer a few questions about your application, such as the command used to run it. For example, this application runs using Streamlit. Refer to the following `docker init` example and use the same answers for your prompts.
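For illustration, an interactive `docker init` session for a Streamlit application like this one might look similar to the following. The exact prompts, wording, and suggested defaults vary by `docker init` version, and the Python version, port, and run command shown here are example answers, not values confirmed by this guide:

```console
$ docker init
Welcome to the Docker Init CLI!

This utility will walk you through creating the following files with sensible defaults for your project:
  - .dockerignore
  - Dockerfile
  - compose.yaml

? What application platform does your project use? Python
? What version of Python do you want to use? 3.11
? What port do you want your app to listen on? 8000
? What is the command to run your app? streamlit run app.py --server.address=0.0.0.0 --server.port=8000
```

After the prompts complete, `docker init` writes the generated files into the current directory, where you can review and adjust them before building.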
