We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter\_default\_count. This name identifies the pod (counter), the namespace (default) and the container (count) for which the logs were collected. Using this name we can select just the logs for our counter container from the drop-down menu:
_(image-counter-new-logs.png)_
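For comparison, the same stream can also be read directly from the cluster with kubectl (a minimal sketch; it assumes the pod is still named counter, as above):

```
# Read the stdout/stderr of the counter pod straight from the cluster.
# The pod has a single container, so no container name is needed.
$ kubectl logs counter
```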
When we view the logs in the Developer Console we observe the logs for both invocations of the container.
_(image-screenshot-2015-06-02)_
Note that the first container counted to 108 and was then terminated. When the replacement container started, the counting process resumed from 0. Similarly, if we deleted the pod and restarted it, we would still capture the logs for all instances of the containers in the pod, for whenever the pod was running.
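As a sketch, the delete-and-restart scenario might look like this (counter-pod.yaml is an assumed filename for the pod specification used earlier):

```
# Delete the counter pod; kubectl can no longer show its output ...
$ kubectl delete pod counter

# ... but after recreating it, the count restarts from 0 while
# Cloud Logging still retains the lines from every prior instance.
$ kubectl create -f counter-pod.yaml
```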
Logs ingested into Google Cloud Logging may be exported to various other destinations, including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed (or follow this link to the [settings tab](https://pantheon.corp.google.com/project/_/logs/settings)).
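The Exports tab is the route described here; depending on the version of the gcloud tool available to you, a similar export sink may also be creatable from the command line. A rough sketch, in which counter-sink and mylogs-bucket are hypothetical names:

```
# Create a sink that streams subsequent log entries to a GCS bucket.
# This may require a beta gcloud component, depending on your version.
$ gcloud logging sinks create counter-sink \
    storage.googleapis.com/mylogs-bucket
```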
We could query the ingested logs from BigQuery using a SQL query such as the one below, which reports the counter log lines, newest first:
```
SELECT metadata.timestamp, structPayload.log
FROM [mylogs.kubernetes_counter_default_count_20150611]
ORDER BY metadata.timestamp DESC
```
Here is some sample output:
_(image-bigquery-log-new.png)_
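The same table can also be queried from the command line with the bq tool, for example as a quick check on how many log lines were ingested that day (a sketch reusing the table name from the query above):

```
# Count the counter log lines ingested for 2015-06-11.
$ bq query "SELECT COUNT(*) FROM [mylogs.kubernetes_counter_default_count_20150611]"
```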
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster that belongs to a GCE project called myproject. Only logs for the date 2015-06-11 are fetched.
```
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
```
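The bucket is laid out by date, with each day's logs split into hourly shards, so we can list what is available before copying (a sketch using the same bucket path):

```
# List the hourly log shards exported for 2015-06-11.
$ gsutil ls gs://myproject/kubernetes.counter_default_count/2015/06/11/
```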
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines:
```
$ cat 21:00:00_21:59:59_S0.json | jq '.structPayload.log'
"0: Thu Jun 11 21:39:38 UTC 2015\n"
"1: Thu Jun 11 21:39:39 UTC 2015\n"
"2: Thu Jun 11 21:39:40 UTC 2015\n"
"3: Thu Jun 11 21:39:41 UTC 2015\n"
"4: Thu Jun 11 21:39:42 UTC 2015\n"
"5: Thu Jun 11 21:39:43 UTC 2015\n"
"6: Thu Jun 11 21:39:44 UTC 2015\n"
"7: Thu Jun 11 21:39:45 UTC 2015\n"
```
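Because each ingested entry is one JSON object, jq composes naturally with other shell tools; for example, counting the total number of log lines fetched for the day:

```
# Without -r each entry prints as a single quoted line,
# so wc -l counts one line per log entry.
$ cat *.json | jq '.structPayload.log' | wc -l
```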
This article has touched briefly on the underlying mechanisms that support gathering cluster-level logs on a Kubernetes deployment. The approach described here only works for gathering the standard output and standard error of the processes running in the pod’s containers. To gather logs that are written to files instead, one can use a sidecar container, as described in [Collecting log files within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/logging/fluentd-sidecar-gcp).