Now that you have an application connecting to Kafka through its exposed port, it’s time to explore what changes are needed to connect to Kafka from another container. To do so, you will now run the application in a container instead of natively.

But before you do that, it’s important to understand how Kafka listeners work and how those listeners help clients connect.

### Understanding Kafka listeners

When a client connects to a Kafka cluster, it actually connects to a “broker”. While brokers have many roles, one of them is to support load balancing of clients. When a client connects, the broker returns a set of connection URLs the client should then use to produce or consume messages. How are these connection URLs configured?

Each Kafka instance has a set of listeners and advertised listeners. The “listeners” are what Kafka binds to, and the “advertised listeners” configure how clients should connect to the cluster. The connection URLs a client receives are based on which listener the client connects to.
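
As a minimal sketch of this distinction, the snippet below binds a single listener to all interfaces inside the container while advertising `localhost` to clients. It assumes an image such as `apache/kafka` that maps `KAFKA_*` environment variables to broker configuration; the values shown are illustrative, not the guide’s final configuration:

```yaml
services:
  kafka:
    image: apache/kafka
    ports:
      - "9092:9092"
    environment:
      # What the broker binds to inside the container
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # What the broker tells clients to use when connecting back
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```

With a configuration like this, a client reaching the broker through the published port is told to keep connecting via `localhost:9092`.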

### Defining the listeners

To help this make sense, let’s look at how Kafka needs to be configured to support two connection opportunities:

1. Host connections (those coming through the host’s mapped port) - these will need to connect using localhost
2. Docker connections (those coming from inside the Docker networks) - these cannot connect using localhost, but must use the network alias (or DNS name) of the Kafka service

Since clients need to connect in two different ways, two different listeners are required - `HOST` and `DOCKER`. The `HOST` listener will tell clients to connect using `localhost:9092`, while the `DOCKER` listener will tell clients to connect using `kafka:9093`. Notice that this means Kafka is listening on both port 9092 and port 9093, but only the `HOST` listener needs to be exposed to the host.
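
As a rough sketch, this two-listener setup could be expressed with the following environment variables, again assuming an image such as `apache/kafka` that maps `KAFKA_*` environment variables to broker configuration (the variable names, ports, and the choice of `DOCKER` as the inter-broker listener are illustrative assumptions, not the guide’s final file):

```yaml
services:
  kafka:
    image: apache/kafka
    ports:
      - "9092:9092"   # only the HOST listener is published to the host
    environment:
      # Bind both listeners inside the container
      KAFKA_LISTENERS: HOST://0.0.0.0:9092,DOCKER://0.0.0.0:9093
      # Advertise localhost for host clients and the service alias for containers
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,DOCKER://kafka:9093
      # Custom listener names need an explicit security protocol mapping
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,DOCKER:PLAINTEXT
      # With custom names, the broker must be told which listener to use internally
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
```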



To set this up, the `compose.yaml` for Kafka needs some additional configuration. Once you start overriding some of the defaults, you also need to specify a few other options for KRaft mode to work.
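
To illustrate what those extra options might look like, the sketch below adds KRaft-related settings on top of the listener configuration above. The node ID, controller port, and quorum value are assumptions for a single-node setup, and the guide’s actual `compose.yaml` may differ:

```yaml
services:
  kafka:
    image: apache/kafka
    environment:
      # KRaft mode: this single node acts as both broker and controller
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      # The controller needs its own listener in addition to HOST and DOCKER
      KAFKA_LISTENERS: CONTROLLER://0.0.0.0:9094,HOST://0.0.0.0:9092,DOCKER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,DOCKER://kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,HOST:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      # Single-node default for internal topics
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```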
