If there is one unchanging fact in the world of software development and application management, it is that nothing holds still. Everything — programming languages, the basics of design, expected/required capabilities, infrastructure at all levels — is changing constantly, and you cannot count on today’s up-to-the-minute solutions to be adequate for tomorrow’s needs.
But you still need to be able to deploy your application today, and have it work not just tomorrow, but six months, or a year or two years from now, if not longer. For Kubernetes-based applications, the need for basic functional continuity applies not just to application code, but also to Kubernetes and your overall container architecture. You cannot afford to have a dead-end strategy when it comes to Kubernetes.
What kind of dead-end strategies are we talking about? Let’s take a look in this article. We’ll start by discussing the types of dead ends you can face when using Kubernetes, especially those that can lock you into a certain toolchain or prevent flexibility. We’ll then highlight strategies for avoiding these pitfalls and keeping your Kubernetes strategy lean, mean and dynamic.
Kubernetes Lock-In: Container Management Platforms
For many enterprise-level IT operations, the prospect of being able to manage both containers and more traditionally structured (or even legacy) applications seamlessly in the cloud is tempting, particularly if it can be done with a minimum of refactoring. Why put in the time and effort required for a wholesale redesign of existing application architecture when you can have most of the benefits of cloud-based container deployment without all that trouble?
It is a tempting prospect. But like most easy-looking shortcuts to modernization, it typically comes at a price. In the case of such wholesale lift-and-halfway-shift solutions, the hidden price (which is the one you have to watch out for) is that at some point, you may find yourself locked into highly proprietary container management technology, even when the platform vendor promises full integration with Kubernetes and other industry-standard open source resources.
What does this kind of a lock-in mean? For one thing, if you need to add capabilities that the platform doesn’t offer, you’re out of luck. You can look for third-party add-ons that will provide the features that you need, but if they aren’t being offered for that platform, you will have to wait until the platform’s developers add those features — if they ever do.
And that “if” is not a given. The company that provides the platform may not last, and even if the platform’s code is made up primarily of open source components, development may slow down or stop. And if key parts of the code are truly proprietary, there may effectively be no prospect for updates or upgrades.
Kubernetes Lock-In: Cloud Platforms
You can find yourself equally locked in if your container system is too dependent on proprietary management tools offered by a cloud provider (including implementations of Kubernetes which are specific to a single cloud platform). With cloud providers, the hidden cost of lock-in may not be as obvious, since the better-known providers are not likely to go out of business any time soon, and they are likely to maintain an up-to-date set of management tools.
But a cloud service provider may still deprecate (or discontinue entirely) specific management tools or even deployment platforms, and you may find that the upgrade path the provider offers is far from smooth.
And even if your container system fits comfortably into a platform which your cloud provider maintains and keeps current, that very comfort may be a trap, since it is likely to mean that you are making significant use of proprietary features. If that is the case, the cost (in time and effort) of reworking your container management infrastructure so that you can deploy it across multiple clouds may turn out to be prohibitively high.
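To make that trap concrete, consider how easily provider-specific details creep into even a simple manifest. The sketch below uses a real AWS-specific annotation purely as an illustration; any provider's proprietary annotations raise the same portability issue, and every such line has to be found and rewritten before the workload can move to another cloud.

```yaml
# Portable: a plain Service of type LoadBalancer, which any
# conformant Kubernetes cluster can satisfy.
apiVersion: v1
kind: Service
metadata:
  name: web
  # Provider-specific annotations like the AWS example below tie the
  # manifest to one cloud. Uncommenting it makes this Service depend
  # on AWS-specific load balancer behavior:
  # annotations:
  #   service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

A handful of such annotations is harmless; hundreds of them, spread across dozens of manifests, are what makes a multicloud migration prohibitively expensive.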
Kubernetes Longevity Best Practices: Architecture
Now that we’ve discussed how you can end up in a dead-end Kubernetes strategy, let’s look at how to avoid it by building a Kubernetes environment, toolset and overall process that provides flexibility and longevity.
In many ways, container strategy best practices start at the level of architecture and design:
- Begin with a full refactoring plan for any existing monolithic applications that you plan to move into containers.
- The plan can (and probably should) proceed in stages, but the projected result should be a fully refactored application; this gives you a roadmap for planning future container deployments.
- The design should be fully agnostic in terms of cloud providers and specific container management platforms (other than recognizing Kubernetes as the basic industry standard).
- Assume that you will need to add new technologies and capabilities at some point, so make provisions for them.
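As a minimal sketch of what "cloud-agnostic" looks like in practice, the following hypothetical Deployment uses only core Kubernetes resources. Nothing in it names a cloud provider or a proprietary platform, so it should apply unchanged on any conformant cluster (the service name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            # Explicit requests travel with the application,
            # rather than relying on one platform's defaults.
            requests:
              cpu: 100m
              memory: 128Mi
```

Keeping the manifest this plain is itself an architectural decision: every provider-specific feature you resist adding here is one less thing to unwind later.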
Kubernetes Longevity Best Practices: Management and Orchestration
Beyond architecture, the key to avoiding a dead-end container strategy lies in your choice of container management and deployment systems:
- Your best protection against lock-in is to use an industry-standard, open source container management framework — which in practice means Kubernetes.
- As further protection against lock-in, it is best to stick with standard, unmodified, non-proprietary Kubernetes components.
- Choose a management system/control plane for Kubernetes which is fully cloud-agnostic and platform-agnostic.
- The control plane should allow Kubernetes to continue to function even if the control plane itself becomes unavailable.
- The management system should allow you to expand into capabilities that you aren’t currently using.
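One concrete expression of a cloud-agnostic control plane is the humble kubeconfig file: one standard client, multiple clusters, any provider. The excerpt below is a hypothetical sketch (cluster names, server URLs and the user entry are placeholders) showing that because every cluster speaks the same standard API, switching providers is a context switch rather than a tooling migration:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: cluster-aws           # placeholder names
    cluster:
      server: https://aws.example.com:6443
  - name: cluster-onprem
    cluster:
      server: https://onprem.example.com:6443
contexts:
  - name: prod-aws
    context:
      cluster: cluster-aws
      user: admin
  - name: prod-onprem
    context:
      cluster: cluster-onprem
      user: admin
current-context: prod-aws
users:
  - name: admin                 # credentials omitted in this sketch
    user: {}
```

Retargeting is then `kubectl config use-context prod-onprem`, and the same manifests and automation apply to both clusters — which is exactly the flexibility a dead-end, single-platform strategy gives up.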