---
title: "Kubernetes in the Enterprise with Fujitsu’s Cloud Load Control"
date: 2016-03-11
slug: kubernetes-in-enterprise-with-fujitsus
url: /blog/2016/03/Kubernetes-In-Enterprise-With-Fujitsus
author: >
  Florian Walker (FUJITSU)
---
Earlier this year, Fujitsu released its Kubernetes-based offering, Fujitsu ServerView [Cloud Load Control](http://www.fujitsu.com/software/clc/) (CLC), to the public. Some might be surprised, since Fujitsu’s reputation is built not so much on software development as on hardware manufacturing and IT services. As a long-time member of the Linux Foundation and a founding member of the Open Container Initiative and the Cloud Native Computing Foundation, Fujitsu not only builds software but is also committed to open source, contributing to several projects, including Kubernetes. We not only believe in Kubernetes as an open source project; we also chose it as the core of our offering because it provides the best balance of feature set, resource requirements, and complexity for running distributed applications at scale.
Today, we want to take you on a short tour explaining the background of our offering, why we think Kubernetes is the right fit for our customers, and what value Cloud Load Control provides on top of it.
**A long long time ago…**
In mid-2014, we looked at the challenges enterprises face in the context of digitization, where traditional businesses increasingly find competitors from the IT sector pushing into the core of their markets. A large part of Fujitsu’s customers are such traditional businesses, so we considered how we could help them and came up with three basic principles:
- Decouple applications from infrastructure - Focus on where the value for the customer is: the application.
- Decompose applications - Build applications from smaller, loosely coupled parts. Enable reconfiguration of those parts depending on the needs of the business, and encourage innovation through low-cost experiments.
- Automate everything - Fight the increasing complexity of the first two points by introducing a high degree of automation.
We found that Linux containers themselves cover the first point and touch on the second. But at the time there was little support for building distributed applications and running them in an automatically managed way. We found Kubernetes to be the missing piece.
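To make that last point concrete, here is a minimal sketch of what Kubernetes adds on top of plain containers: you declare the desired state of an application, for example three replicas of a web server, and the cluster continuously reconciles the actual state towards that declaration. The sketch assumes the official Kubernetes Python client and a kubeconfig pointing at a reachable cluster; the names and image are placeholders for illustration only.

```python
# Illustrative sketch: declare a replicated application through the
# Kubernetes API using the official Python client (pip install kubernetes).
# Assumes a reachable cluster configured in ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()  # load cluster credentials from the local kubeconfig

# Desired state: three replicas of a containerized web server.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Kubernetes keeps reconciling actual state with this declaration:
# crashed containers are restarted and the replica count is maintained.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```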
**Not a free lunch**
The general approach of Kubernetes to managing containerized workloads is convincing, but as we looked at it through the eyes of our customers, we realized that it’s not a free lunch. Many of our customers are medium-sized companies whose core business is bound to strict data protection regulations. The top three requirements we identified are:
- On-premise deployments (with the option for hybrid scenarios)
- Efficient operations as part of a (much) bigger IT infrastructure
- Enterprise-grade support, potentially on global scale
We created Cloud Load Control with these requirements in mind. It is essentially a distribution of Kubernetes targeted at on-premise use, focusing primarily on the operational aspects of container infrastructure. We are committed to working with the community and contributing all relevant changes and extensions upstream to the Kubernetes project.
**On-premise deployments**
As Kubernetes core developer Tim Hockin often puts it in his [talks](https://speakerdeck.com/thockin), Kubernetes is "a story with two parts": setting up a Kubernetes cluster is not the easy part, and it is often challenging due to variations in infrastructure. This is particularly true for production-ready deployments of Kubernetes. In the public cloud space, a customer could choose a service like Google Container Engine (GKE) to do this job. Since customers have fewer options on-premise, they often have to tackle the deployment themselves.
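The second part of the story, consuming the cluster once it exists, is comparatively simple. As a rough illustration (a minimal sketch assuming the official Kubernetes Python client and a configured kubeconfig; it is not part of Cloud Load Control itself), checking that every node of a freshly provisioned cluster reports ready takes only a few API calls:

```python
# Illustrative health check for a provisioned cluster: list all nodes and
# report whether each one is Ready (official Python client assumed).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node carries a list of conditions; "Ready" reflects whether the
    # kubelet is healthy and able to accept pods.
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```

Getting to the point where such a call succeeds reliably on your own hardware, network, and storage is the hard part.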
Cloud Load Control addresses these issues. It enables customers to reliably and readily provision production-grade Kubernetes clusters on their own infrastructure, with the following benefits: