Software engineers are some of the most innovative professionals in the IT industry. But there is a world of difference between inventing a technology and building a product designed to be used every day by the average IT administrator. Nowhere is that contrast more sharply evident than in the case of Kubernetes, the open source framework for orchestrating containers developed by Google.
While orchestrating containers is emerging as a crucial requirement at a time when developer adoption of containers is escalating at unprecedented rates, Kubernetes was initially created to address one particular use case facing Google's own engineers, at a time when Google was one of only a handful of organizations using containers. Since then, not only have containers evolved, but alternative orchestration frameworks have also become much more sophisticated.
The primary issue most organizations have with Kubernetes is that once an application is built using, for example, Docker containers, it has to be refactored to work with the Kubernetes application programming interfaces (APIs). In contrast, an orchestration framework such as Swarm makes use of native Docker APIs. For the developer, that eliminates the need to refactor the application altogether, which from a DevOps perspective removes one layer of management complexity.
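To make that contrast concrete, here is a rough sketch of deploying the same containerized service both ways. The service name, image, and replica count are illustrative, and the Kubernetes manifest uses the current apps/v1 API. With Swarm, the familiar Docker CLI carries over directly; with Kubernetes, the same workload has to be redescribed in Kubernetes' own object model:

```shell
# Swarm: reuse the native Docker CLI -- no new object model to learn
docker swarm init
docker service create --name web --replicas 3 -p 8080:80 nginx:alpine

# Kubernetes: the same workload must be expressed as Kubernetes API objects
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF
```

The Kubernetes manifest is not inherently worse, but it is a separate vocabulary (Deployments, selectors, pod templates) that an application built around Docker's native APIs must be translated into.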
In addition, the actual performance of Kubernetes still leaves something to be desired. Docker Inc. recently published a set of test results, run by an independent contractor, showing that Swarm scales more efficiently than Kubernetes. In a comparison of both frameworks on a 1,000-node cluster running 30,000 containers, Swarm was on average five times faster than Kubernetes in terms of container startup time and seven times faster in delivering operational insights.
This matters to the developer community because slow container startup times wreak havoc on distributed applications that need near real-time responsiveness. Even where real-time responsiveness isn't critical, the extra time required to bring up infrastructure creates management headaches for IT operations teams already struggling to keep up with containers that come and go in a matter of seconds.
Naturally, the people behind Kubernetes take issue with both the way the tests were conducted and what they actually mean. But they do concede that Kubernetes was designed to address the needs of complex IT environments, such as those found in web-scale companies like Google.
Obviously, a whole range of engineering decisions contributes to such a significant difference between Kubernetes and Swarm performance. The thing to remember is that Kubernetes was originally developed in a cloud computing environment with almost unlimited access to IT infrastructure resources. In contrast, the average IT organization must maximize IT infrastructure utilization rates each and every day. The less overhead the orchestration framework generates, the more computing horsepower is available for the applications sharing access to the same cluster. None of that may seem like a major issue when a containerized application is first deployed. But as the application environment as a whole starts to scale, it most certainly will become one.
In the meantime, developers and IT operations teams alike would be well advised to keep in mind where, when, and how any particular container orchestration framework was created, because design decisions made years ago are still likely to have a major impact on the overall experience today.