Best Practices for an Effective Kubernetes Deployment

Kubernetes is the latest evolution in managing containers. As with any new and fast-moving technology, there is a seemingly endless variety of ways people are deploying it. Kubernetes may be a single product focused on orchestration, but from the network and storage options to the machines it runs on, its surrounding ecosystem needs to be in place to truly realize the power it provides.

In this article, we walk through what it takes to deploy Kubernetes effectively today.

Life before Kubernetes and how it became the de facto standard

Kubernetes is barely five years old (released in June 2014) and has only been considered stable for about four years (v1.0 appeared in July 2015). Before the industry all but universally agreed on moving to Kubernetes, there were multiple options available to orchestrate containers. Docker Enterprise (Swarm) had the name recognition of Docker behind it, and AWS ECS was the first widely available container solution on the public cloud. Other commercially backed competitors included Mesosphere and, of course, Cloud Foundry.

As of today, all of these previously industry-leading solutions have announced that they are either replacing key components of their own offerings with Kubernetes or are offering Kubernetes management as an add-on. This puts pure-play Kubernetes offerings at a definite advantage, as they can focus solely on the future instead of having to worry about how to keep their previous customers happy while migrating to their new technology stacks.

Learning from the pain of early adopters

Two categories of early adopters are relevant to the Kubernetes conversation. The first is the early adopters of containers, and the second is the first wave of adopters who had no other option than to approach Kubernetes with a do-it-yourself mentality.

The group that was early to containers had to work through choosing which proprietary runtime to base their applications on, and then which container-capable management platform to use. Docker Swarm was the closest to a true container orchestrator, but it lacked any real concept of scale: it simply created a flat pool of container hosts on which to run workloads. It has become more intelligent since then, but that is where it started. Mesosphere was another big early player in the container management space. It centered on the entire datacenter and multiple workload types; containers were just one of the many workloads it tackled, which resulted in a lack of focus.

The largest player early in the container market was Cloud Foundry. Cloud Foundry had by far the biggest mindshare in the enterprise space, and had a very solid focus on enabling developers through its platform-as-a-service offering. It was open-sourced, and multiple large and traditional IT vendors had distributions based on it. Underneath its developer-friendly exterior was a custom container orchestrator and format. Since the platform was about the developer experience, its container management lacked any features that would support spanning multiple regions, let alone multiple clouds.

The second group of early adopters saw the potential in Google's container orchestration technology, which was spawned from lessons learned in Borg, Google's famously proprietary, company-wide cluster management platform. In those early days, enterprises needed capabilities Kubernetes did not yet provide, which led to a sea of fragile infrastructures custom-tailored to an exact purpose. The difficulty of upgrading and the need for constant attention drove newer and better technologies into the Kubernetes ecosystem. The CNCF has done a great job helping Kubernetes build specifications around its edges to make integrating supplementary technology projects easier, with far less customization.

Key features to look for in a Kubernetes-based offering

Kubernetes has become the container orchestrator of choice, and a great number of supplementary technologies can be incorporated to make it truly enterprise-friendly. With the introduction of interfaces like RBAC (authorization), CNI (networking), and CSI (storage), it can seem easy to assemble the pieces yourself, but that barely scratches the surface of what makes a solid, reliable, and scalable Kubernetes deployment.
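To give a sense of how granular these building blocks are, here is a minimal sketch of the RBAC piece: a Role granting read-only access to Pods, bound to a service account. The namespace and account names below are illustrative, not from any particular deployment:

```yaml
# Role granting read-only access to Pods in a (hypothetical) "web" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to an illustrative service account in the same namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: web
  name: read-pods
subjects:
- kind: ServiceAccount
  name: ci-runner
  namespace: web
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply this level of detail across networking plugins, storage drivers, admission control, and upgrades, and the scope of a do-it-yourself deployment becomes clear.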

As open and extensible as Kubernetes is, the people who know how to piece it together are few, and they usually work for vendors that have the mission and resources to assemble distributions around Kubernetes that meet the needs of enterprises. While there will always be companies with the financial resources and technical capability to roll their own large-scale Kubernetes distributions (think Facebook and Amazon), most organizations are better served by leveraging a vendor-supplied distribution so they can focus on delivering real business value to their own organization.

Some of the key areas of focus for vendor-supported Kubernetes offerings are streamlined installation, patching, security, and scalability. Streamlined installation and scalability both revolve heavily around supporting multiple platforms. This can mean running clusters on everything from dedicated on-premises hardware to private clouds to one or more public clouds (and often federating between several of them). Vendors that make it easy to extend to another cloud, or even migrate workloads completely, allow companies to support changing business requirements, such as new privacy regulations that limit data movement.

Security integration is a key stumbling block for most new technology deployments, and a vendor-supported Kubernetes offering that integrates secrets management and SSO capabilities goes a long way towards being able to run production workloads. Patching and upgrading are also crucial. Beyond delivering enhancements, the ability to patch in place allows the latest security vulnerabilities to be resolved in short order, without rebuilding or redeploying entire clusters. As more organizations look to leverage machine learning, AI, and other compute- and data-intensive technologies like blockchain, they need a container platform that is continually enhanced.
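The secrets-management point is worth illustrating. Kubernetes ships with a native Secret object, but its values are only base64-encoded, not encrypted, which is precisely the gap that enterprise offerings fill with external secrets stores and SSO integration. A minimal sketch, with made-up credential values:

```yaml
# A minimal native Kubernetes Secret (name and values are illustrative).
# Note: the data fields are merely base64-encoded, not encrypted at rest
# by default, which is why vendor platforms typically layer external
# secrets management (and encryption) on top of this primitive.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=   # base64 for "admin"
  password: czNjcjN0   # base64 for "s3cr3t"
```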

Conclusion

As you integrate Kubernetes into your infrastructure and use it as a base for the next generation of applications you deliver, keep in mind that the value of Kubernetes and its ecosystem lies in how open and flexible it has been since release. Not only has Kubernetes adapted well to the huge increase in companies contributing to its development and support, it has also designed and implemented a series of interfaces that let it focus on what it does well, allowing components like the network layer to be swapped for another CNI-compatible option without interfering with the rest of the environment.

The key to a truly future-proof strategy built on Kubernetes is picking the right partner to work with you on your journey. The community is too big for any organization to keep track of without dedicating staff to that task alone (which is not the best use of your limited resources). The right partner will provide guidance on where the community is headed and support you through the adjustments required to keep up with and leverage the latest advancements.


Vince Power is an Enterprise Architect at Medavie Blue Cross. His focus is on cloud adoption and technology planning in key areas like core computing (IaaS), identity and access management, application platforms (PaaS), and continuous delivery.

