4. Create a sample topic and produce (or publish) a few messages by running the following command:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
After the producer starts, you can enter messages, one per line. For example:
```plaintext
First message
```
And then:
```plaintext
Second message
```
Press `enter` to send the last message, then press `ctrl+c` when you’re done. The messages are published to Kafka.
5. Confirm the messages were published into the cluster by consuming the messages:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server :9092 --topic demo --from-beginning
```
You should then see your messages in the output:
```plaintext
First message
Second message
```
If you’d like, open another terminal, publish a few more messages, and watch them appear in the consumer.
When you’re done, press `ctrl+c` to stop consuming messages.
You now have a locally running Kafka cluster and have validated that you can connect to it.
## Connecting to Kafka from a non-containerized app
Now that you’ve shown you can connect to the Kafka instance from a command line, it’s time to connect to the cluster from an application. In this example, you will use a simple Node project that uses the [KafkaJS](https://github.com/tulios/kafkajs) library.
Because the cluster is running locally and is exposed on port 9092, the app (which currently runs natively, not in a container) can connect to the cluster at `localhost:9092`. Once connected, this sample app logs messages it consumes from the `demo` topic. Additionally, when it runs in development mode, it will also create the topic if it isn’t found.
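As a sketch of how an app might decide where to connect (the helper function and the `KAFKA_BROKERS` variable below are hypothetical, not part of the sample project), it can default to `localhost:9092` for native development and accept an environment override for other setups:

```javascript
// Hypothetical helper: resolve the Kafka bootstrap address from the
// environment, falling back to localhost:9092 for native development.
// KAFKA_BROKERS is an assumed variable name, not one the sample app defines.
function resolveBrokers(env) {
  const brokers = env.KAFKA_BROKERS || "localhost:9092";
  // Support a comma-separated list of broker addresses.
  return brokers.split(",").map((b) => b.trim());
}

console.log(resolveBrokers(process.env));
console.log(resolveBrokers({ KAFKA_BROKERS: "kafka:9093,kafka:9094" }));
```

This pattern becomes useful later, when the same code needs a different broker address depending on whether it runs natively or inside a container.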
1. If you don’t have the Kafka cluster running from the previous step, run the following command to start a Kafka instance:
```console
$ docker run -d --name=kafka -p 9092:9092 apache/kafka
```
2. Clone the [GitHub repository](https://github.com/dockersamples/kafka-development-node) locally.
```console
$ git clone https://github.com/dockersamples/kafka-development-node.git
```
3. Navigate into the project.
```console
$ cd kafka-development-node/app
```
4. Install the dependencies using yarn.
```console
$ yarn install
```
5. Start the application using `yarn dev`. This will set the `NODE_ENV` environment variable to `development` and use `nodemon` to watch for file changes.
```console
$ yarn dev
```
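Under the hood, a `dev` script typically wires these pieces together in `package.json`. The script contents and entry-point filename below are assumptions for illustration, not copied from the repository:

```json
{
  "scripts": {
    "dev": "NODE_ENV=development nodemon index.js",
    "start": "node index.js"
  }
}
```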
6. The running application will log any messages it receives to the console. In a new terminal, publish a few messages using the following command:
```console
$ docker exec -ti kafka /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server :9092 --topic demo
```
And then send a message to the cluster:
```plaintext
Test message
```
Remember to press `ctrl+c` when you’re done to stop producing messages.
## Connecting to Kafka from both containers and native apps
Now that you have an application connecting to Kafka through its exposed port, it’s time to explore what changes are needed to connect to Kafka from another container. To do so, you will now run the application in a container instead of natively.
But before you do that, it’s important to understand how Kafka listeners work and how those listeners help clients connect.
### Understanding Kafka listeners
When a client connects to a Kafka cluster, it actually connects to a “broker”. While brokers have many roles, one of them is to support load balancing of clients. When a client connects, the broker returns a set of connection URLs the client should then use to produce or consume messages. How are these connection URLs configured?
Each Kafka instance has a set of listeners and advertised listeners. The “listeners” are what Kafka binds to, while the “advertised listeners” configure how clients should connect to the cluster. The connection URLs a client receives are based on which listener the client connects to.
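As a rough sketch of what such a configuration might look like (the listener names and ports here are illustrative, not the image’s actual defaults), a broker could bind one listener for host clients and advertise a different address for clients inside the Docker network:

```plaintext
# Illustrative listener configuration (server.properties style).
# HOST is for clients on the host machine; DOCKER is for other containers.
listeners=HOST://0.0.0.0:9092,DOCKER://0.0.0.0:9093
advertised.listeners=HOST://localhost:9092,DOCKER://kafka:9093
listener.security.protocol.map=HOST:PLAINTEXT,DOCKER:PLAINTEXT
```

With a setup like this, a client connecting on port 9092 is told to use `localhost:9092`, while a client connecting on port 9093 is told to use `kafka:9093`, which resolves inside the Docker network.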