It’s one thing to launch a Kubernetes cluster and deploy a Hello World app on it. It’s quite another to evolve that cluster into a production-ready state, and ensure that it is ready to meet complex needs surrounding storage, networking, monitoring and more.
Yet making that transition from testing to production has become the focus for an increasingly large number of organizations across a range of verticals. From retailers to telcos to media companies and beyond, businesses that began kicking Kubernetes’s tires over the past few years are done with the evaluation period and are ready to move to production-level deployments.
To provide guidance along that journey, Platform9 has begun a webinar series titled “Enterprise Action Plan: Moving to Production with Kubernetes.” Over the next several months, we’ll be offering eight webinars on how to launch and manage fully operational, production-quality clusters, drawing on the expertise of the Platform9 team – which works every day with organizations at different steps along the journey to production Kubernetes – as well as industry partners who offer deep insight into running Kubernetes within specific verticals.
The first webinar in the series, “Kicking off your Kubernetes Implementation Project Successfully: Key Considerations,” moderated by Platform9 Head of Enterprise Marketing Kamesh Pemmaraju, took place on Sept. 16. The webinar provided an overview of what the Kubernetes journey to production entails, as well as a look at the state of the Kubernetes market and how different companies are pursuing that journey today.
Understanding the Kubernetes Journey
Peter Fray, chief technologist at Platform9, kicked off the webinar by providing an overview of what the Kubernetes journey looks like. Fray explained that many organizations initially misjudge the complexity of running Kubernetes because they focus only on Kubernetes itself, rather than the large and complicated ecosystem of which it is a part.
Thus, teams typically start their Kubernetes journey simply enough by launching a cluster. Should you run your cluster in the public cloud or on your own infrastructure? Should you host it on bare metal or virtual machines? How should you configure Kubernetes networking? These questions require some research, but for most teams, it’s easy enough to find answers and get a cluster up and running.
Teams can then use that cluster to launch some basic apps that don’t have complex networking or storage needs. At this point, many organizations feel pretty good about their Kubernetes progress.
But as Fray explained, trouble starts to set in once they get further along in the journey. When it comes time to launch more complex apps, teams may realize that the networking and storage architectures they originally implemented for their clusters are not a good fit. So they go back and redesign their architectures.
After that, they need to figure out how to monitor their apps. This is one of the junctures at which they realize that Kubernetes is not a single platform, but part of a complex ecosystem. There are myriad monitoring tools available – some open source, others commercial – as well as different ways to integrate them into a cluster.
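As a small illustration of one such integration pattern, many Prometheus setups use annotation-based discovery to decide which pods to scrape. The sketch below assumes a Prometheus deployment configured for that convention; the annotation keys are a widely used convention rather than a built-in Kubernetes API, and the names and image are placeholders:

```yaml
# Hedged sketch: a pod annotated for annotation-based Prometheus
# discovery. Works only if your Prometheus scrape config honors
# these annotations; names and image below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    prometheus.io/scrape: "true"   # opt this pod into scraping
    prometheus.io/port: "8080"     # port exposing /metrics
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:latest
```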
Logging presents a similar conundrum. Teams that want to get Kubernetes into production need to figure out which log management tools to use, which logs to focus on and which architecture to use to get log data out of their clusters and into log analytics platforms.
Once you’ve finally figured out all of this, you need to think about your deployment process, too. Manual deployment may have worked in the earlier stages of the Kubernetes journey. But to get to full production, you need CI/CD. For many teams, solving this puzzle means deciding whether to build an entirely new CI/CD pipeline to fit their Kubernetes cluster, or adapt an existing pipeline for it.
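To give a rough sense of what the tail end of such a pipeline can look like, here is a hypothetical GitHub-Actions-style workflow that builds an image and rolls it out with kubectl. The registry, image name and deployment name are placeholders, and the job assumes cluster credentials and registry access are already configured:

```yaml
# Hypothetical CI/CD workflow sketch -- registry, image and
# deployment names are placeholders, not a prescribed setup.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and publish the application image, tagged by commit
      - run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker push registry.example.com/myapp:${{ github.sha }}
      # Roll the new image out; assumes kubectl already points at
      # the target cluster with appropriate credentials
      - run: |
          kubectl set image deployment/myapp \
            myapp=registry.example.com/myapp:${{ github.sha }}
```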
And then there are service meshes, which represent an ecosystem unto themselves. Which solution is best: Istio, Linkerd or something else? And how do you configure your service mesh to enable features like A/B deployment?
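To give a flavor of the configuration involved, Istio expresses weighted traffic splitting – the building block for A/B and canary releases – through a VirtualService. The host and subset names in this sketch are placeholders, and the subsets would need to be defined in a matching DestinationRule:

```yaml
# Sketch of an Istio VirtualService sending 90% of traffic to one
# version of a service and 10% to another; names are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp.example.com
  http:
    - route:
        - destination:
            host: myapp
            subset: v1   # stable version (defined in a DestinationRule)
          weight: 90
        - destination:
            host: myapp
            subset: v2   # variant under test
          weight: 10
```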
Security and compliance present another distinct challenge that must be addressed before you can run Kubernetes in production; so does planning for backups, upgrades and downtime.
In short, Fray said, the journey to production Kubernetes is much longer and more complex than it often appears at first glance. Although Fray didn’t talk in detail about how to handle each step in this webinar, future webinars will offer deeper dives into the different steps of the Kubernetes journey, such as managing networking, storage and logging.
The State of Kubernetes in the Market
Fray handed the virtual microphone to Sirish Raghuram, co-founder and CEO of Platform9. Raghuram provided an overview of the Kubernetes market in order to help companies understand how the Kubernetes journey can vary between different verticals.
“One of the challenges with a technology as broad and as widely adopted as Kubernetes,” he said, “is that it can be hard to remember that there is an array of different use cases. Companies that use Kubernetes as part of private cloud architectures, for instance, have very different goals than those that are running edge workloads. For private cloud users, achieving low-cost Kubernetes deployments may be a key goal, while organizations that run Kubernetes on the edge are focused on high infrastructure scalability and low network latency.”
The takeaway from Raghuram is that there is no one-size-fits-all approach to the Kubernetes journey. You need to think about what your organization’s end goals are in using Kubernetes, and what you want to prioritize. This insight will guide you in making decisions about each of the steps along the journey – from implementing storage to establishing a security strategy and beyond.
5G Telcos and Kubernetes
To contextualize the points made by Fray and Raghuram, the third part of the webinar consisted of an overview of how companies in the telco industry are pursuing their Kubernetes journeys. Suresh Somasundaram, head of 5G platform and cloud engineering at Mavenir and one of the world’s leading experts on the convergence between Kubernetes and the telco industry, provided this insight.
Somasundaram explained that there are two reasons why telcos have become so interested in Kubernetes in recent years. One is the need for ultra-low latency on their networks, which requires high-performing, highly scalable and highly reliable infrastructure. The second factor is a desire for greater flexibility and performance in network functions deployment, which telcos can achieve by containerizing their network functions instead of running them in resource-heavy virtual machines.
Beyond this, Somasundaram said, Kubernetes also provides telcos with a “level playing field” that they can use to deploy all of their workloads. Given that the 5G architectures that telcos are now deploying are loosely coupled and highly distributed by nature, being able to use a consistent platform for deploying and managing workloads helps to keep 5G networks manageable.
The result is that, by putting Kubernetes into production, telcos are now able to operate with the flexibility and scalability of the cloud, while still retaining the high levels of reliability that are critical in the telco vertical. “What we are seeing is a unification of sorts between the cloud and the telco worlds” via Kubernetes, Somasundaram said.
The audience had the opportunity to ask questions during the final portion of the webinar.
One key question that arose was how telcos can secure Kubernetes, even in the very large-scale, distributed environments that are typical in the industry. Somasundaram suggested a two-fold approach: First, security checks should be built into the CI/CD pipeline through processes like scanning container images and validating deployments before they reach Kubernetes. Second, teams must establish strong security policies to govern how their Kubernetes architecture is set up and maintained.
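The first half of that approach can be sketched as a CI job that refuses to ship a vulnerable image. This hedged example is written in a GitLab-CI style and uses the open source Trivy scanner; the image name is a placeholder:

```yaml
# Sketch of a pipeline gate: fail the build when the scanner finds
# high- or critical-severity vulnerabilities in the image.
# The image reference below is a placeholder.
scan-image:
  stage: test
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:latest
```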
Somasundaram also answered a question about whether telcos should consider using serverless functions (which can be hosted on Kubernetes using extensions like Knative) in addition to containers. He said that serverless functions are probably not a good fit for always-on workloads where every millisecond counts, but may make sense for less critical, event-driven facets of telco workloads.
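For reference, running a container as a scale-to-zero workload on Kubernetes via Knative takes little more than a Knative Service manifest like the sketch below; the name and image are placeholders:

```yaml
# Hedged sketch of a Knative Service: Knative serves the container
# on demand and scales it to zero when idle. Name and image are
# placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-handler
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/event-handler:latest
```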
An audience member also asked whether it makes more sense to use a managed Kubernetes service or take the do-it-yourself route if you want to get into production. Fray responded that “a lot of people start with the do-it-yourself path, but as soon as they start getting down the path and realizing that Kubernetes is not ‘just Kubernetes’ but is an entire ecosystem, it gets big and vast very quickly.” The talent required to manage the complexity of Kubernetes is hard to find and retain, he said. So is dealing with the outages that will occur in production environments.
Fray suggested that, in the face of these challenges, it’s best to have a managed service provider on your side who can help you navigate the complexities of the Kubernetes ecosystem, as well as solve pressing issues quickly. Doing it yourself may suffice if you’re just getting started with Kubernetes, but it’s a risk when you’ve committed production workloads to the platform.
For the full context of this discussion, view the recorded Sept. 16 webinar anytime. You can also sign up for future webinars in the series to gain deeper insight into the points outlined here.