---
title: " Challenges of a Remotely Managed, On-Premises, Bare-Metal Kubernetes Cluster "
date: 2016-08-02
slug: challenges-remotely-managed-onpremise-kubernetes-cluster
url: /blog/2016/08/Challenges-Remotely-Managed-Onpremise-Kubernetes-Cluster
author: >
  Bich Le (Platform9)
---

**Introduction**  

The recently announced [Platform9 Managed Kubernetes](https://platform9.com/press/platform9-makes-easy-deploy-docker-containers-production-scale/) (PMK) is an on-premises enterprise Kubernetes solution with an unusual twist: while clusters run on a user’s internal hardware, their provisioning, monitoring, troubleshooting, and overall life cycle are managed remotely from the Platform9 SaaS application. While users love the intuitive experience and ease of use of this deployment model, the approach poses interesting technical challenges. In this article, we first describe the motivation and deployment architecture of PMK, and then present an overview of the technical challenges we faced and how our engineering team addressed them.  

**Multi-OS bootstrap model**  

Like its predecessor, [Managed OpenStack](https://platform9.com/products/kvm/), PMK aims to make it as easy as possible for an enterprise customer to deploy and operate a “private cloud”, which, in the current context, means one or more Kubernetes clusters. To accommodate customers who standardize on a specific Linux distro, our installation process uses a “bare OS” or “bring your own OS” model, which means that an administrator deploys PMK to existing Linux nodes by installing a simple RPM or Deb package on their favorite OS (Ubuntu-14, CentOS-7, or RHEL-7). The package, which the administrator downloads from their Platform9 SaaS portal, starts an agent which is preconfigured with all the information and credentials needed to securely connect to and register itself with the customer’s Platform9 SaaS controller running on the WAN.  
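
To make the bootstrap flow concrete, here is a minimal sketch of what the agent’s self-registration step could look like. The Platform9 agent itself is not open source, so the config file path, endpoint name, and payload fields below are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of the agent's first-boot registration step.
# The config path, endpoint, and payload fields are assumptions for illustration.
import json
import requests

def register_node(config_path="/etc/pf9/agent.json"):
    # The package ships a config with the customer's SaaS controller URL and
    # credentials, so the agent can register without any inbound SSH access.
    with open(config_path) as f:
        cfg = json.load(f)

    resp = requests.post(
        f"{cfg['controller_url']}/v1/hosts",          # assumed endpoint name
        json={"hostname": cfg["hostname"], "host_id": cfg["host_id"]},
        headers={"Authorization": f"Bearer {cfg['token']}"},
        verify=True,                                   # TLS to the SaaS controller over the WAN
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```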

**Node management**  

The first challenge was configuring Kubernetes nodes in the absence of a bare-metal cloud API and without SSH access to the nodes. We solved it using the _node pool_ concept and configuration management techniques. Every node running the agent automatically shows up in the SaaS portal, which allows the user to _authorize_ the node for use with Kubernetes. A newly authorized node automatically enters a _node pool_, indicating that it is available but not used in any cluster. Independently, the administrator can create one or more Kubernetes clusters, which start out empty. At any later time, he or she can request one or more nodes to be attached to any cluster. PMK fulfills the request by transferring the specified number of nodes from the pool to the cluster. When a node is authorized, its agent becomes a configuration management agent that polls for instructions from a CM server running in the SaaS application and is capable of downloading and configuring software.  
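
The node-pool bookkeeping itself is simple; a minimal sketch of the idea (class and method names are our own shorthand, not Platform9’s code) might look like this:

```python
# Illustrative model of the node-pool / cluster-attach concept described above.
class NodePool:
    def __init__(self):
        self.available = set()   # authorized nodes not yet used in any cluster
        self.clusters = {}       # cluster name -> set of node ids

    def authorize(self, node_id):
        # An authorized node enters the pool and becomes available to clusters.
        self.available.add(node_id)

    def create_cluster(self, name):
        # Clusters start out empty.
        self.clusters[name] = set()

    def attach(self, name, count):
        # Fulfill an attach request by moving `count` nodes from the pool.
        if count > len(self.available):
            raise ValueError("not enough available nodes in the pool")
        for _ in range(count):
            self.clusters[name].add(self.available.pop())
```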

Cluster creation and node attach/detach operations are exposed to administrators via a REST API, a CLI utility named _qb_, and the SaaS-based Web UI. The following screenshot shows the Web UI displaying one 3-node cluster named clus100, one empty cluster clus101, and the three nodes.  



 ![clusters_and_containervisors_view.png](https://lh3.googleusercontent.com/Tn67P9fhhPqCNF6xYl6mfVehG8AtLcLOM0NMW3YukBkWB5cSpYofkLQo1vrqsZiDBON05GC4ZQwWgEV9YBdoNA6Hzy_loS0cvT3BzkxmLesk6UsX_xugsrGppJD-Mc8fjHIF2QrU)
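
Programmatically, the same operations exposed through the Web UI and _qb_ could be driven against the REST API. The sketch below is purely illustrative; the base URL, endpoint paths, payload fields, and authentication header are assumptions rather than Platform9’s documented API.

```python
# Hypothetical example of creating a cluster and attaching a pooled node via REST.
# All URLs, paths, fields, and headers here are assumptions for illustration only.
import requests

BASE = "https://example.platform9.net/v1"        # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}     # assumed auth scheme

# Create an empty cluster.
resp = requests.post(f"{BASE}/clusters", json={"name": "clus102"}, headers=HEADERS)
resp.raise_for_status()
cluster_id = resp.json()["uuid"]

# Attach a previously authorized node from the pool to the new cluster.
requests.post(
    f"{BASE}/clusters/{cluster_id}/attach",
    json={"node_ids": ["node-uuid-1"]},
    headers=HEADERS,
).raise_for_status()
```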
