   In this example, the Dockerbot answers the question and provides links to
   the video at timestamps that may contain more information about the answer.

   The dockerbot service takes the question, turns it into an embedding using
   the text-embedding-3-small model, queries the Pinecone database to find
   similar embeddings, and then passes that context to the gpt-4-turbo-preview
   model to generate an answer. The sketch after these steps shows this flow.

3. Select the first link to see what information it provides. Based on the
   previous example, select
   [https://www.youtube.com/watch?v=yaQZFhrW0fU&t=553s](https://www.youtube.com/watch?v=yaQZFhrW0fU&t=553s).

   In the example link, you can see that the section of the video perfectly
   answers the question, "What is a sugar cookie?".

## Explore the application architecture

The following image shows the application's high-level service architecture, which includes:

- yt-whisper: A local service, run by Docker Compose, that interacts with the
  remote OpenAI and Pinecone services. It downloads a video's audio, transcribes
  it, computes embeddings of the transcription, and saves them in Pinecone
  (sketched after this list).
- dockerbot: A local service, run by Docker Compose, that interacts with the
  remote OpenAI and Pinecone services. It computes an embedding of your question,
  retrieves the most relevant transcriptions from Pinecone, and generates an
  answer with links to specific video timestamps.
- OpenAI: A remote third-party service.
- Pinecone: A remote third-party service.
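
For the indexing side, here is a rough sketch of the kind of pipeline the
yt-whisper service runs. The audio is assumed to be downloaded already, and the
index name, IDs, and metadata layout are illustrative assumptions rather than
the sample application's actual code.

```python
import os

from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()  # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("video-transcriptions")  # same hypothetical index as above


def index_video(video_id: str, audio_path: str) -> None:
    # 1. Transcribe the downloaded audio with OpenAI's Whisper API.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        ).text

    # 2. Compute an embedding of the transcription.
    embedding = client.embeddings.create(
        model="text-embedding-3-small",
        input=transcript,
    ).data[0].embedding

    # 3. Save the embedding and transcription into Pinecone, keyed by video ID.
    index.upsert(
        vectors=[
            {
                "id": video_id,
                "values": embedding,
                "metadata": {"text": transcript, "video_id": video_id},
            }
        ]
    )
```

The real service stores timestamped segments so the bot can link to specific
moments in a video; this sketch stores the whole transcription as a single
vector for brevity.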

