![](https://3.bp.blogspot.com/-0B1BEQo_fWc/VaVxVUBkf3I/AAAAAAAAAVE/5yCCnA29C88/s1600/0%2B%25281%2529.png)
Now, how did the Nginx pods get ahold of the super-secret SSL key/cert and htpasswd file (for basic access auth)?
### Keep it secret, keep it safe
Kubernetes has an [API and resource for Secrets](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/secrets.md). Secrets “are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.”
You can create secrets in your cluster in 3 simple steps:
1. Base64-encode your secret data (i.e., the SSL key pair or htpasswd file); a `kubectl` shortcut that handles the encoding for you is sketched after these steps:
```
$ cat ssl.key | base64
LS0tLS1CRUdJTiBDRVJUS...
```
2. Create a YAML document describing your secret, and add the base64-encoded values:
```
apiVersion: "v1"
kind: "Secret"
metadata:
  name: "ssl-proxy-secret"
  namespace: "default"
data:
  proxycert: "LS0tLS1CRUd..."
  proxykey: "LS0tLS1CR..."
  htpasswd: "ZXZhb..."
```
3. Create the Secret resource:
```
$ kubectl create -f secrets.yaml
```
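If your kubectl version supports it, you can skip the manual base64 step entirely: `kubectl create secret generic` reads the files, encodes them, and creates the resource in one shot. Here's a minimal sketch, assuming the cert, key, and htpasswd files sit in the current directory (the file names are placeholders for your own):
```
# Create the same Secret directly from files; kubectl handles the
# base64 encoding. File names are placeholders.
$ kubectl create secret generic ssl-proxy-secret \
    --from-file=proxycert=ssl.crt \
    --from-file=proxykey=ssl.key \
    --from-file=htpasswd=htpasswd

# Confirm the data keys were stored as expected.
$ kubectl get secret ssl-proxy-secret -o yaml
```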
To access the secrets from a container, specify them as a volume mount in your pod spec. Here’s the relevant excerpt from the [Nginx proxy template](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/ssl_proxy.yaml#L41-L48) we saw earlier:
```
spec:
  containers:
    - name: "nginx-ssl-proxy"
      image: "gcr.io/cloud-solutions-images/nginx-ssl-proxy:latest"
      env: [...]
      ports: [...]
      volumeMounts:
        - name: "secrets"
          mountPath: "/etc/secrets"
          readOnly: true
  volumes:
    - name: "secrets"
      secret:
        secretName: "ssl-proxy-secret"
```
A volume of type `secret` that points to the `ssl-proxy-secret` resource is defined, and then mounted into `/etc/secrets` in the container. The Secret spec in the earlier example defined `data.proxycert`, `data.proxykey`, and `data.htpasswd`, so those files appear (base64-decoded) as `/etc/secrets/proxycert`, `/etc/secrets/proxykey`, and `/etc/secrets/htpasswd` for the Nginx process to access.
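To make the plumbing concrete, here's a minimal sketch of the kind of Nginx server block that could consume those mounted files. The directives are standard Nginx; the upstream host and port are placeholders, and the actual configuration shipped in the image lives in the nginx-ssl-proxy repo linked below.
```
# Sketch only: how an Nginx server block might reference the mounted Secret.
# The upstream host/port are placeholders, not the image's actual config.
server {
  listen 443 ssl;

  # Key pair decoded from the Secret's proxycert / proxykey entries
  ssl_certificate     /etc/secrets/proxycert;
  ssl_certificate_key /etc/secrets/proxykey;

  # Basic access auth backed by the decoded htpasswd entry
  auth_basic           "Restricted";
  auth_basic_user_file /etc/secrets/htpasswd;

  location / {
    proxy_pass http://jenkins:8080;  # placeholder upstream service
  }
}
```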
### All together now
I have “containers and Kubernetes are fun and cool!” moments all the time, probably every day. I’m beginning to have “containers and Kubernetes are extremely useful and powerful, and they add value to what I do by helping me do important things with ease” moments more and more frequently. This SSL termination proxy with Nginx is definitely one of the latter. I didn’t have to waste time learning a new way to use SSL. I was able to solve my problem with well-known tools, in a reusable way, and quickly (from idea to working setup took about two hours).
Check out the complete [Automated Image Builds with Jenkins, Packer, and Kubernetes](https://github.com/GoogleCloudPlatform/kube-jenkins-imager) repo to see how the SSL termination proxy is used in a real cluster, or dig into the details of the proxy image in the [nginx-ssl-proxy repo](https://github.com/GoogleCloudPlatform/nginx-ssl-proxy) (complete with a Dockerfile and Packer template so you can build the image yourself).