---
description: Containerize RAG application using Ollama and Docker
keywords: python, generative ai, genai, llm, ollama, rag, qdrant
title: Build a RAG application using Ollama and Docker
linkTitle: RAG Ollama application
summary: |
  This guide demonstrates how to use Docker to deploy Retrieval-Augmented
  Generation (RAG) models with Ollama.
tags: [ai]
aliases:
  - /guides/use-case/rag-ollama/
params:
  time: 20 minutes
---

This Retrieval-Augmented Generation (RAG) guide teaches you how to containerize an existing RAG application using Docker. The example application acts like a sommelier, suggesting the best pairings between wines and food. In this guide, you’ll learn how to:

- Containerize and run a RAG application
- Set up a local environment to run the complete RAG stack locally for development (a minimal sketch of such a stack follows this list)
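
To preview what that stack looks like, here is a minimal Docker Compose sketch combining a Python app with Ollama and Qdrant. The `app` service, its build context, and its port are illustrative assumptions, not the guide's actual configuration:

```yaml
# Illustrative sketch only; the guide's actual compose file may differ.
services:
  app:
    build: .              # hypothetical Python RAG app built from a local Dockerfile
    ports:
      - "8000:8000"       # assumed application port
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama  # serves local LLMs over an HTTP API
    ports:
      - "11434:11434"     # Ollama's default port
  qdrant:
    image: qdrant/qdrant  # vector database used for retrieval
    ports:
      - "6333:6333"       # Qdrant's default HTTP port
```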

Start by containerizing an existing RAG application.
