root       479  0.0  0.0   4348   812 ?        S    00:05   0:00 sleep 1
root       480  0.0  0.0  15572  2212 ?        R    00:05   0:00 ps aux
```

What happens if for any reason the container in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the newly started container? Or will we lose the log lines from the original container’s execution and only see the log lines for the new container? Let’s find out. First, let’s stop the currently running counter.

```
$ kubectl stop pod counter
pods/counter
```

Now let’s restart the counter.

```
$ kubectl create -f counter-pod.yaml
pods/counter
```
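
For reference, counter-pod.yaml was introduced earlier in this article; a minimal sketch of that kind of pod definition (the ubuntu image and the bash loop here are assumptions that match the numbered, timestamped output shown in this article) looks like:

```
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    # Print an incrementing counter with a timestamp to stdout once per second,
    # which produces the numbered log lines shown above.
    args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```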

Let’s wait for the container to restart and get the log lines again.

```
$ kubectl logs counter
0: Tue Jun  2 21:51:40 UTC 2015
1: Tue Jun  2 21:51:41 UTC 2015
2: Tue Jun  2 21:51:42 UTC 2015
3: Tue Jun  2 21:51:43 UTC 2015
4: Tue Jun  2 21:51:44 UTC 2015
5: Tue Jun  2 21:51:45 UTC 2015
6: Tue Jun  2 21:51:46 UTC 2015
7: Tue Jun  2 21:51:47 UTC 2015
8: Tue Jun  2 21:51:48 UTC 2015
```
Oh no! We’ve lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod, and even if the pod is restarted we would still like to preserve every log line its containers ever emitted. But don’t fear, this is exactly the functionality provided by cluster-level logging in Kubernetes. When a cluster is created, the standard output and standard error of each container can be collected by a [Fluentd](http://www.fluentd.org/) agent running on each node and ingested into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or Elasticsearch, where they can be viewed with Kibana. This blog article focuses on Google Cloud Logging.
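
As a rough sketch of how this can be switched on when bringing a cluster up with the bundled scripts of this era (variable names such as KUBE_LOGGING_DESTINATION are taken from the GCE cluster scripts and may differ for your provider or version), the logging destination is chosen before running kube-up.sh:

```
# Sketch only: select where the per-node Fluentd agents ship container logs.
# "gcp" sends logs to Google Cloud Logging; "elasticsearch" targets the
# in-cluster Elasticsearch/Kibana setup instead.
$ export KUBE_ENABLE_NODE_LOGGING=true
$ export KUBE_LOGGING_DESTINATION=gcp
$ cluster/kube-up.sh
```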



When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called fluentd-cloud-logging on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the output of the first get pods command.


This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):

```
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.6
    env:
    - name: FLUENTD_ARGS
      value: -qq
    volumeMounts:
    - name: containers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```

This pod specification maps the directory on the host containing the Docker log files, /var/lib/docker/containers, to a directory inside the container which has the same path. The pod runs one image, gcr.io/google\_containers/fluentd-gcp:1.6, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
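
To make the collection step concrete, a heavily simplified sketch of the sort of Fluentd configuration this image uses is shown below; the real configuration shipped in fluentd-gcp:1.6 is more involved, and the paths, tags, and pos_file location here are illustrative assumptions.

```
# Tail the Docker JSON log files exposed inside the pod via the hostPath volume.
<source>
  type tail
  format json
  time_key time
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/gcp-containers.log.pos
  tag containers.*
  read_from_head true
</source>

# Forward everything collected above to Google Cloud Logging
# (provided by the fluent-plugin-google-cloud output plugin).
<match containers.**>
  type google_cloud
</match>
```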
