ks registry add kubeflow github.com/google/kubeflow/tree/master/kubeflow
ks pkg install kubeflow/core
ks pkg install kubeflow/tf-serving
ks pkg install kubeflow/tf-job
ks generate core kubeflow-core --name=kubeflow-core
```
We can now define [environments](https://ksonnet.io/docs/concepts#environment) corresponding to our two clusters.
```
kubectl config use-context minikube
ks env add minikube
kubectl config use-context gke
ks env add gke
```
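To confirm that both environments were registered, ksonnet can list them along with the cluster each one points at:

```shell
# List the ksonnet environments defined for this app;
# each maps to a kubectl context / server address.
ks env list
```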
And we’re done! Now just deploy the core components to each environment. First, on minikube:
```
ks apply minikube -c kubeflow-core
```
And to create it on our multi-node GKE cluster for quicker training:
```
ks apply gke -c kubeflow-core
```
By making it easy to deploy the same rich ML stack everywhere, Kubeflow keeps drift and rewriting between these environments to a minimum.
To access either deployment, you can execute the following command:
```
kubectl port-forward tf-hub-0 8100:8000
```
and then open up http://127.0.0.1:8100 to access JupyterHub. To change the environment used by kubectl, use either of these commands:
```
# To access minikube
kubectl config use-context minikube
# To access GKE
kubectl config use-context gke
```
When you execute `apply`, you launch the following on K8s:
- JupyterHub for launching and managing Jupyter notebooks on K8s
- A [TF CRD](https://github.com/tensorflow/k8s) for running distributed TensorFlow jobs
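To verify that everything came up, you can check the pods with kubectl (the exact pod names depend on your Kubeflow version; `tf-hub-0` is the JupyterHub pod used in the port-forward command above):

```shell
# The JupyterHub pod (tf-hub-0) and the tf-job operator should show STATUS Running.
kubectl get pods
```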
Let's suppose you want to submit a training job. Kubeflow provides ksonnet [prototypes](https://ksonnet.io/docs/concepts#prototype) that make it easy to define [components](https://ksonnet.io/docs/concepts#component). The tf-job prototype makes it easy to create a job for your code but for this example, we'll use the tf-cnn prototype which runs [TensorFlow's CNN benchmark](https://github.com/tensorflow/benchmarks/tree/master/scripts/tf_cnn_benchmarks).
To submit a training job, you first generate a new job from a prototype:
```
ks generate tf-cnn cnn --name=cnn
```
By default, the tf-cnn prototype uses 1 worker and no GPUs, which is perfect for our minikube cluster, so we can just submit it.
```
ks apply minikube -c cnn
```
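Before tweaking anything, you can inspect the parameters the component will actually use; `ks param list` shows the current values per component:

```shell
# Show the parameters for the cnn component (defaults from the tf-cnn prototype)
ks param list cnn
```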
On GKE, we’ll want to tweak the prototype to take advantage of the multiple nodes and GPUs. First, let’s list all the parameters available:
```
# To see a list of parameters
ks prototype describe tf-cnn
```
Now let’s adjust the parameters to take advantage of GPUs and access to multiple nodes.
```
ks param set --env=gke cnn num_gpus 1
ks param set --env=gke cnn num_workers 1
ks apply gke -c cnn
```
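Once the job is submitted, you can watch its progress with kubectl. The TF CRD registers a custom resource for training jobs (the resource name may vary by version), and the training logs stream from the worker pods (the pod name below is illustrative):

```shell
# Check the status of the training jobs created through the CRD
kubectl get tfjobs
# Tail the logs of a worker pod (substitute the actual pod name from kubectl get pods)
kubectl logs -f cnn-worker-0
```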
Note how we set those parameters so they are used only when you deploy to GKE. Your minikube parameters are unchanged!
After training, you [export your model](https://www.tensorflow.org/serving/serving_basic) to a serving location.
Kubeflow also includes a serving package. To deploy the trained model for serving, execute the following:
```
ks generate tf-serving inception --name=inception \
  --namespace=default --model_path=gs://$bucket_name/$model_loc
ks apply gke -c inception
```