    - If you make one change in a config file, Ansible re-runs everything (which isn’t always desirable)
    - Uses a Jinja2 template
  - Create machines with minimal software, then use Ansible to get that machine into a runnable state

    - Sets up all of the add-ons
  - Eliminates the provisioner shell scripts
  - Full cluster setup currently takes about 6 minutes

    - CentOS with some packages
    - Redeploy to the cluster takes 25 seconds
  - Questions for Eric

    - Where does the provider-specific configuration go?

      - The only network setup that the Ansible config does is flannel; you can turn it off
    - What about init vs. systemd?

      - Should be able to support this in the code without any trouble (not yet implemented)
  - Discussion

    - Why not push the setup work into containers or kubernetes config?

      - To bootstrap a cluster, drop a kubelet and a manifest (see the illustrative manifest at the end of this section)
    - Running a kubelet and configuring the network should be the only things required. We can cut a machine image that is preconfigured minus the data package (certs, etc.)

      - The Ansible scripts install the kubelet and Docker if they aren’t already installed
    - Each OS (Red Hat, Debian, Ubuntu) could have a different image. We could view this as part of the build process instead of the install process.
    - There needs to be a solution for bare metal as well.
    - In favor of the overall goal -- reducing the amount of special configuration in the Salt setup
    - Everything except the kubelet should run inside a container (eventually the kubelet should as well)

      - Running in a container doesn’t cut down on the complexity that we currently have
      - But it does more clearly define the interface about what the code expects
    - These tools (Chef, Puppet, Ansible) conflate binary distribution with configuration

      - Containers more clearly separate these problems
    - The Mesos deployment is not completely automated yet, but it is completely different: kubelets get put on top of an existing Mesos cluster

      - The bash scripts allow the Mesos devs to see what each cloud provider is doing and reuse the relevant bits
      - There was a steep reverse-engineering curve, but the bash is at least readable, as opposed to the Salt configuration
    - OpenStack uses a different deployment as well
    - We need a well-documented list of steps (e.g., creating certs) that are necessary to stand up a cluster

      - This would allow us to compare across cloud providers
      - We should reduce the number of steps as much as possible
      - Ansible has 241 steps to launch a cluster
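
As a rough illustration of the "drop a kubelet and a manifest" bootstrap model discussed above: the kubelet can run static pods from a local manifest directory before any API server exists, so dropping one file per master component is enough to bring a node up. The sketch below is illustrative only -- the file path, image, and flags are assumptions for the example, not the project's actual bootstrap configuration.

```yaml
# Hypothetical static pod manifest, e.g. /etc/kubernetes/manifests/kube-apiserver.yaml.
# The kubelet watches its manifest directory and starts any pods it finds there,
# so a master component can come up without a pre-existing API server.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.0.1   # illustrative image and tag
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001
    - --service-cluster-ip-range=10.0.0.0/16
```

With a file like this in place, the only host-level requirements are running the kubelet pointed at that directory and configuring the network, which is the interface the discussion above is trying to pin down.
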
- 1.0 Code freeze

  - How are we getting out of code freeze?
  - This is a topic for next week, but the preview is that we will move slowly rather than totally opening the firehose

    - We want to clear the backlog as fast as possible while maintaining stability both on HEAD and on the 1.0 branch
    - The backlog is almost 300 PRs, but there are also various parallel feature branches that were developed during the freeze
  - Cutting a cherry-pick release today (1.0.1) that fixes a few issues
  - Next week we will discuss the cadence for patch releases
