To set up and run the GenAI stack on a Linux host, run one of the following sets of commands, depending on whether your host is GPU or CPU powered:

### GPU powered

```bash
git clone https://github.com/docker/genai-stack
cd genai-stack
mv env.example .env
nano .env
docker compose --profile linux-gpu up -d
```
In the `.env` file, make sure the following lines are uncommented, and set your own credentials for security:

    NEO4J_URI=neo4j://database:7687
    NEO4J_USERNAME=neo4j
    NEO4J_PASSWORD=password
    OLLAMA_BASE_URL=http://llm-gpu:11434

### CPU powered

```bash
git clone https://github.com/docker/genai-stack
cd genai-stack
mv env.example .env
nano .env
docker compose --profile linux up -d
```
In the `.env` file, make sure the following lines are uncommented, and set your own credentials for security:

    NEO4J_URI=neo4j://database:7687
    NEO4J_USERNAME=neo4j
    NEO4J_PASSWORD=password
    OLLAMA_BASE_URL=http://llm:11434
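
Whichever profile you use, the `.env` credentials can also be rotated with ordinary shell tools instead of editing the file by hand. A minimal sketch (it creates a throwaway `.env` for illustration; in practice you operate on the one copied from `env.example`):

```bash
# Illustration only: create a minimal .env, then rotate NEO4J_PASSWORD
# to a random value. In practice, edit the .env copied from env.example.
printf 'NEO4J_USERNAME=neo4j\nNEO4J_PASSWORD=password\n' > .env

# Generate a random password and substitute it in place.
NEW_PASSWORD=$(openssl rand -base64 18)
sed -i "s|^NEO4J_PASSWORD=.*|NEO4J_PASSWORD=${NEW_PASSWORD}|" .env

# Confirm the new value was written.
grep '^NEO4J_PASSWORD=' .env
```

Remember to restart the stack after changing `.env` so the new values take effect.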

### Setting up on other platforms 

For instructions on how to set up the stack on other platforms, refer to [this page](https://github.com/docker/genai-stack). 


### Initial startup

The first startup may take some time because the system needs to download a large language model.

### Monitoring progress

We can monitor the download and initialization progress by viewing the logs:

```bash
docker compose logs
```

Wait for lines in the logs indicating that the model download is complete and the stack is ready. They typically look like this:

    pull-model-1 exited with code 0
    database-1    | 2024-12-29 09:35:53.269+0000 INFO  Started.
    pdf_bot-1     |   You can now view your Streamlit app in your browser.
    loader-1      |   You can now view your Streamlit app in your browser.
    bot-1         |   You can now view your Streamlit app in your browser.



Once those lines appear in the logs, the web apps are ready to use.
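
The wait can also be scripted. A small sketch, assuming you first save the combined logs to a file; the `stack_ready` helper is hypothetical, and its marker strings are taken from the sample log lines above:

```bash
# Hypothetical helper: check a saved log file for the readiness markers.
stack_ready() {
  grep -q "exited with code 0" "$1" && grep -q "INFO  Started." "$1"
}

# Usage against the live stack (illustrative):
#   docker compose logs > stack.log && stack_ready stack.log && echo "ready"
```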

Since our goal is to teach the AI about things it does not yet know, we begin by asking it a simple question about NiFi at
[http://localhost:8501/](http://localhost:8501/).
