This works, but I have a few problems with it. First, authentication isn’t configured in a default Jenkins installation. The leader is sitting on the public Internet, accessible to anyone, until you connect and configure authentication. And since there’s no encryption, configuring authentication is kind of a symbolic gesture. We need SSL, and we need it now!
### Do what you know
For a few milliseconds I considered trying to get SSL working directly on Jenkins. I’d never done it before, and I caught myself wondering if it would be as straightforward as working with SSL on [Nginx](http://nginx.org/), something I do have experience with. I’m all for learning new things, but this seemed like a great place to not reinvent the wheel: SSL on Nginx is straightforward and well documented (as are its reverse-proxy capabilities), and Kubernetes is all about building functionality by orchestrating and composing containers. Let’s use Nginx, and add a few bonus features that Nginx makes simple: HTTP-to-HTTPS redirection, and basic access authentication.
### SSL termination proxy as an Nginx service
I started by putting together a [Dockerfile](https://github.com/GoogleCloudPlatform/nginx-ssl-proxy/blob/master/Dockerfile) that inherits from the standard nginx image, copies a few Nginx config files, and adds a custom entrypoint (`start.sh`). The entrypoint script checks an environment variable (`ENABLE_SSL`) and activates the correct Nginx config accordingly (an unencrypted HTTP reverse proxy is possible, but that defeats the purpose). The script also configures basic access authentication if it’s enabled via the `ENABLE_BASIC_AUTH` environment variable.
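The config-selection logic can be sketched roughly like this. This is a hypothetical simplification, not the actual `start.sh` (which lives in the linked nginx-ssl-proxy repository); the config file paths and the `AUTH_ENABLED` flag are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of an entrypoint that toggles nginx behavior via env vars.
# File paths below are assumptions for illustration, not the real layout.

# Pick which nginx config to activate based on ENABLE_SSL.
if [ "${ENABLE_SSL}" = "true" ]; then
  ACTIVE_CONF="/etc/nginx/nginx-ssl.conf"    # HTTPS, plus HTTP->HTTPS redirect
else
  ACTIVE_CONF="/etc/nginx/nginx-plain.conf"  # unencrypted reverse proxy
fi

# Optionally wire up basic access authentication.
if [ "${ENABLE_BASIC_AUTH}" = "true" ]; then
  # In a real script: generate /etc/nginx/.htpasswd from supplied credentials.
  AUTH_ENABLED="yes"
fi

echo "using config: ${ACTIVE_CONF}"
```

In the real container, the script would finish by launching Nginx against the selected config.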
Finally, `start.sh` evaluates the `SERVICE_HOST_ENV_NAME` and `SERVICE_PORT_ENV_NAME` env vars. These variables should be set to the names of the environment variables for the Kubernetes service you want to proxy to. In this example, the service for our Jenkins leader is cleverly named `jenkins`, which means pods in the cluster will see an environment variable named `JENKINS_SERVICE_HOST` and `JENKINS_SERVICE_PORT_UI` (the port that 8080 is mapped to on the Jenkins leader). `SERVICE_HOST_ENV_NAME` and `SERVICE_PORT_ENV_NAME` simply reference the correct service to use for a particular scenario, allowing the image to be used generically across deployments.
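That indirection, a variable holding the *name* of another variable, can be resolved in POSIX shell with `eval`. A minimal sketch (the Kubernetes-injected values and the `TARGET_*` names are made up for illustration):

```shell
#!/bin/sh
# Simulate the env vars Kubernetes would inject for a service named "jenkins".
JENKINS_SERVICE_HOST="10.0.0.12"
JENKINS_SERVICE_PORT_UI="8080"

# The proxy image is told which variables to read, not their values.
SERVICE_HOST_ENV_NAME="JENKINS_SERVICE_HOST"
SERVICE_PORT_ENV_NAME="JENKINS_SERVICE_PORT_UI"

# Dereference: look up the value of the variable whose name we were given.
eval "TARGET_HOST=\${${SERVICE_HOST_ENV_NAME}}"
eval "TARGET_PORT=\${${SERVICE_PORT_ENV_NAME}}"

echo "proxy_pass http://${TARGET_HOST}:${TARGET_PORT}"
```

Because only the variable *names* are baked into the deployment descriptor, the same proxy image can sit in front of any service just by changing those two env vars.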
### Defining the Controller and Service
Like every other pod in this example, we’ll deploy Nginx with a replication controller, allowing us to scale out or in, and recover automatically from container failures. This excerpt from a [complete descriptor in the sample app](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/ssl_proxy.yaml#L20-L48) shows some relevant bits of the pod spec:
```
spec:
  containers:
    - name: "nginx-ssl-proxy"
      image: "gcr.io/cloud-solutions-images/nginx-ssl-proxy:latest"
      env:
        - name: "SERVICE_HOST_ENV_NAME"
          value: "JENKINS_SERVICE_HOST"
        - name: "SERVICE_PORT_ENV_NAME"
          value: "JENKINS_SERVICE_PORT_UI"
        - name: "ENABLE_SSL"
          value: "true"
        - name: "ENABLE_BASIC_AUTH"
          value: "true"
      ports:
        - name: "nginx-ssl-proxy-http"
          containerPort: 80
        - name: "nginx-ssl-proxy-https"
          containerPort: 443
```
The pod will have a service exposing TCP 80 and 443 to a public load balancer. Here’s the service descriptor ([also available in the sample app](https://github.com/GoogleCloudPlatform/kube-jenkins-imager/blob/master/service_ssl_proxy.yaml)):
```
kind: "Service"
apiVersion: "v1"
metadata:
  name: "nginx-ssl-proxy"
  labels:
    name: "nginx"
    role: "ssl-proxy"
spec:
  ports:
    - name: "https"
      port: 443
      targetPort: "nginx-ssl-proxy-https"
      protocol: "TCP"
    - name: "http"
      port: 80
      targetPort: "nginx-ssl-proxy-http"
      protocol: "TCP"
```