4th chunk of `content/en/blog/_posts/2016-10-00-Globally-Distributed-Services-Kubernetes-Cluster-Federation.md`
2e99be6fb46eefddd747e4200cccfd9b1d3acdac0edeba970000000100000fab
1. creates matching Kubernetes Ingress objects in every cluster underlying your Cluster Federation (a sketch of one such object follows this list)
2. ensures that all of these in-cluster ingress objects share the same logical global L7 (i.e. HTTP(S)) load balancer and IP address
3. monitors the health and capacity of the service “shards” (i.e. your Pods) behind this ingress in each cluster
4. ensures that all client connections are routed to an appropriate healthy backend service endpoint at all times, even in the event of Pod, cluster, availability zone or regional outages
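Each of those propagated objects is an ordinary Kubernetes Ingress. As a rough illustration only (the backend Service name and port are assumptions based on the nginx service used throughout this walkthrough, not the exact manifest), the object that lands in each underlying cluster looks something like:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  backend:
    serviceName: nginx
    servicePort: 80
```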
We can verify that matching ingress objects exist in the underlying clusters. Notice that the ingress IP address is the same across all 9 clusters.
```
$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get ingress; done
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *                          80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        40m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        26m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        25m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        38m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        3m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        57m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        56m
```
Note that in the case of Google Cloud Platform, the logical L7 load balancer is not a single physical device (which would present both a single point of failure, and a single global network routing choke point), but rather a [truly global, highly available load balancing managed service](https://cloud.google.com/load-balancing/), globally reachable via a single, static IP address.
Clients inside your federated Kubernetes clusters (i.e. Pods) will be automatically routed to the cluster-local shard of the Federated Service backing the Ingress in their cluster if it exists and is healthy, or the closest healthy shard in a different cluster if it does not. Note that this involves a network trip to the HTTP(S) load balancer, which resides outside your local Kubernetes cluster but inside the same GCP region.
The next step is to schedule the service backends. Let’s first create the ConfigMap in each cluster in the Federation by submitting it to each cluster in turn.
```
$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c create -f configmaps/zonefetch.yaml; done
```
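The zonefetch ConfigMap holds the small shell script that the zone-fetcher sidecar (defined in the ReplicaSet below) runs. The exact contents aren’t reproduced here, so the following is only a sketch of what it plausibly looks like, assuming it queries the GCE metadata server for the node’s zone and writes it into the shared nginx html directory:

```
$ cat configmaps/zonefetch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: zone-fetch
data:
  zonefetch.sh: |
    #!/bin/sh
    # Ask the GCE metadata server which zone this node is in, and publish
    # that zone as the nginx index page so each shard reports its location.
    while true; do
      zone=$(wget -qO- --header "Metadata-Flavor: Google" \
        http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f4)
      echo "Welcome from zone: ${zone}" > /usr/share/nginx/html/index.html
      sleep 10
    done
```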
Let’s have a quick peek at our Replica Set:
```
$ cat replicasets/nginx-rs.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
    type: demo
spec:
  replicas: 9
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: frontend
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-dir
          mountPath: /usr/share/nginx/html
      - image: busybox
        name: zone-fetcher
        command:
        - "/bin/sh"
        - "-c"
        - "/zonefetch/zonefetch.sh"
        volumeMounts:
        - name: zone-fetch
          mountPath: /zonefetch
        - name: html-dir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: zone-fetch
        configMap:
          defaultMode: 0777
          name: zone-fetch