root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```
What happens if the container in this pod is killed for any reason and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines from the new container? Or will we lose the log lines from the original container’s execution and see only the log lines for the new container? Let’s find out. First, let’s stop the currently running counter.
```
$ kubectl stop pod counter
pods/counter
```
Now let’s restart the counter.
```
$ kubectl create -f counter-pod.yaml
pods/counter
```
Let’s wait for the container to restart and get the log lines again.
```
$ kubectl logs counter
0: Tue Jun 2 21:51:40 UTC 2015
1: Tue Jun 2 21:51:41 UTC 2015
2: Tue Jun 2 21:51:42 UTC 2015
3: Tue Jun 2 21:51:43 UTC 2015
4: Tue Jun 2 21:51:44 UTC 2015
5: Tue Jun 2 21:51:45 UTC 2015
6: Tue Jun 2 21:51:46 UTC 2015
7: Tue Jun 2 21:51:47 UTC 2015
8: Tue Jun 2 21:51:48 UTC 2015
```
Oh no! We’ve lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve the log lines from every invocation of every container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by its containers. But don’t fear: this is exactly the functionality provided by cluster-level logging in Kubernetes. When a cluster is created, the standard output and standard error of each container can be ingested by a [Fluentd](http://www.fluentd.org/) agent running on each node and shipped either to [Google Cloud Logging](https://cloud.google.com/logging/docs/) or to Elasticsearch, where it can be viewed with Kibana. This blog article focuses on Google Cloud Logging.
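To make the collection path concrete, the Fluentd agent essentially tails Docker’s per-container log files. A minimal source stanza for this might look roughly like the following sketch; the actual configuration shipped in the fluentd-gcp image is more elaborate, and the paths and tag here are illustrative assumptions:

```
<source>
  type tail
  format json
  # Docker's json-file driver writes one log file per container here
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-containers.log.pos
  tag docker.*
</source>
```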
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called fluentd-cloud-logging on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
```
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.6
    env:
    - name: FLUENTD_ARGS
      value: -qq
    volumeMounts:
    - name: containers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```
This pod specification maps the directory on the host containing the Docker log files, /var/lib/docker/containers, to a directory with the same path inside the container. The pod runs a single image, gcr.io/google_containers/fluentd-gcp:1.6, which is configured to collect the Docker log files from that directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster, and Kubernetes will notice if the pod fails and automatically restart it.
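To see what Fluentd actually reads from that directory, note that Docker’s default json-file logging driver writes each line a container emits as a JSON record. The file path and record below are a hedged illustration (the real path includes the full container ID), but the record shape, with `log`, `stream`, and `time` fields, is what the driver produces:

```shell
# Docker's json-file driver stores each container's output under
# /var/lib/docker/containers/<container-id>/<container-id>-json.log,
# one JSON object per log line. A record from our counter pod would
# look something like this (reproduced with printf for illustration):
printf '%s\n' '{"log":"0: Tue Jun 2 21:51:40 UTC 2015\n","stream":"stdout","time":"2015-06-02T21:51:40.000000000Z"}'
```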