The Kubernetes project has been growing at an astonishing rate. In well under two years of existence it has already seen 15,000+ commits from over 400 contributors. The inaugural KubeCon 2015 had around 500 attendees, nearly twice as many as the first MesosCon and on a par with the first DockerCon back in 2014. What is behind all this interest and activity?
For starters, it is directly descended from Borg, the software system that was so instrumental in powering Google to be the most dominant player on the web. As Cade Metz (wired.com) puts it: “Google’s system provides a central brain for controlling tasks across the company’s data centers. Rather than building a separate cluster of servers for each software system — one for Google Search, one for Gmail, one for Google Maps, etc. — Google can erect a cluster that does several different types of work at the same time.”
According to John Wilkes (Principal Software Engineer at Google), Borg is so efficient at parcelling work across their colossal fleet of nodes that it “probably saved Google the cost of building an extra data center.”
In July 2015 Google donated Kubernetes to the Cloud Native Computing Foundation, and thereby ceded control over it. Ben Kepes, writing in October 2015, remarked that “a number of the developers working on Kubernetes were formerly developing Borg. Kubernetes, therefore, can be seen as a broadly applicable version of Borg, without any of the Google-specific bits.”
He goes on to say that “Kubernetes was only released this year, but its promise of delivering a powerful and massively scalable system for managing containerized applications captured the Zeitgeist of the moment, and the project has gained lots of momentum.”
Moreover, Kubernetes is feature-rich. As Craig McLuckie, Product Manager at Google, summarized:
App Services, Network, Storage
- Includes core functionality critical for deploying and managing workloads in production, including DNS, load balancing, scaling, application-level health checking, and service accounts
- Stateful application support with a wide variety of local and network based volumes, such as Google Compute Engine persistent disk, AWS Elastic Block Store, and NFS
- Deploy your containers in pods, groupings of closely related containers that allow for easy updates and rollbacks
- Inspect and debug your application with command execution, port forwarding, log collection, and resource monitoring via CLI and UI
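To make the pod concept above concrete, here is a minimal sketch of a pod manifest. The names, labels, and image tag are illustrative examples, not taken from any particular deployment:

```yaml
# Illustrative pod manifest: one pod wrapping a single web container.
# Name, labels, and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.9
    ports:
    - containerPort: 80
    livenessProbe:        # application-level health checking, as listed above
      httpGet:
        path: /
        port: 80
```

Saved as `pod.yaml`, such a manifest could be submitted to a cluster with `kubectl create -f pod.yaml`.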
Cluster Management
- Upgrade and dynamically scale a live cluster
- Partition a cluster via namespaces for deeper control over resources. For example, you can segment a cluster into different applications, or test and production environments
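As a sketch of the namespace partitioning described above, the following manifests define separate test and production namespaces (the names are hypothetical examples):

```yaml
# Hypothetical namespaces segmenting one cluster into
# test and production environments.
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Workloads can then be targeted at one environment or the other, for example with `kubectl create -f pod.yaml --namespace=test`, and resource quotas can be applied per namespace.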
Performance and Stability
- Fast API responses, with containers scheduled in under 5 seconds on average
- Scale tested to 1000s of containers per cluster, and 100s of nodes
- A stable API with a formal deprecation policy
Having said all that, the fact remains that launching and maintaining a secure and robust Kubernetes cluster requires a significant investment of time and effort. Unless, that is, you have the tools to automate the entire process. Flexiant Concerto’s Kubernetes Orchestration as a Service feature gives you just that. To try it out, sign up here and enter promo code FLEXK8S for 10 free VMs. Once signed up, simply go to Settings -> Account and click to enable beta features.