
How Kubernetes is Sparking Innovation in the Docker Ecosystem


Clustered servers are not a new idea, but Kubernetes has made clustered computing feasible for the average IT organization. It has also managed to create an industry around its platform.

Indeed, Kubernetes is opening up an entire horizon of opportunities not just for the people using it, but for vendors in the container ecosystem as well. In fact, vendors are in a gold rush to release integrations and hosted versions of Kubernetes. Not a week goes by without a major vendor announcing their newly hatched Kubernetes roadmap.

Below, I discuss some of the ways in which Kubernetes is reshaping the Docker ecosystem.

Separating apps from infrastructure

One of the goals of DevOps is to free developers from infrastructure anxiety so they can spend more time on their applications. Kubernetes makes this possible by pooling nodes (physical machines or VMs) into clusters and scheduling workloads onto them, so applications no longer need physical resources allocated to them individually. Kubernetes also makes it possible to run multi-cloud setups with, for example, one cluster running on AWS and another on Azure. This is complicated with traditional VMs or physical servers, because features like auto scaling, load balancing and remote storage are unique to each IaaS provider. Kubernetes, by contrast, presents the same API on every platform, which makes multi-cloud setups not only possible, but highly scalable as well.
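To make this concrete, here is a minimal, hypothetical Deployment manifest (the name, image, and replica count are illustrative, not from any real setup). Because Kubernetes exposes the same API everywhere, the same file could be applied with `kubectl apply -f` to a cluster on AWS, Azure, or bare metal:

```yaml
# Hypothetical manifest: the same spec works on any conformant cluster,
# regardless of which cloud (or bare metal) the nodes run on.
apiVersion: apps/v1beta1        # Deployment API version current around v1.6
kind: Deployment
metadata:
  name: web-frontend            # illustrative name
spec:
  replicas: 3                   # Kubernetes schedules these onto available nodes
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.11       # any container image; nginx used as a stand-in
        ports:
        - containerPort: 80
```

Developers describe the desired state; the scheduler, not the developer, decides which node (and hence which provider's VM) runs each replica.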

Sam Ghods, co-founder of Box, said during his CloudNativeCon keynote last November that Kubernetes “abstracts away a messy problem so you can build on top of it.” He added, “The amount of innovation and leverage that’s going to come from being able to standardize on Kubernetes as a platform is incredibly exciting, more exciting than anything I’ve seen in the last 10 years of working on the cloud.”

The rush to provide managed Kubernetes services

Kubernetes may be the most powerful orchestration platform with the largest backing ever, but it comes with a steep learning curve to use it effectively. Another drawback is the lack of documentation, or rather, outdated documentation. Kubernetes is easily the fastest-growing open source project in the enterprise, and sometimes it’s hard for documentation and external tutorials to keep up with updates and features. These two drawbacks have created a gold mine in the Docker ecosystem with startups and established companies alike rushing to provide services built around Kubernetes.

From the recently founded Heptio to established players like CoreOS and Weaveworks, there is no shortage of commercial offerings wrapped around Kubernetes. Microsoft’s recent acquisition of Deis left a lot of people guessing, until they realized that its main projects, Workflow, Helm, and Steward, are all about making Kubernetes management easier.

Another interesting development is that many OpenStack vendors are looking to jump ship and build their products around Kubernetes instead. One such vendor is OpenStack veteran Mirantis, which announced the end of life for its Mirantis OpenStack solution, effective September 2019. It will be replaced by Mirantis Cloud Platform, which features Kubernetes front and center.

Keeping Docker on its heels with Docker Swarm

This year’s DockerCon saw Docker take a major U-turn compared to their efforts last year to shove Swarm down everyone’s throats. The lackluster response to that push probably made it clear that Docker might have missed the container orchestration train after all. Even Docker’s effort to distinguish its open source code from its commercial offerings by renaming the former the Moby Project looks like a scramble to keep up with Kubernetes’ growth. The story of Swarm has gone from ‘batteries included’ to ‘batteries included, but swappable.’ This is a good thing for the container ecosystem, which doesn’t want to be locked in by Docker.

Meanwhile, case studies abound

An interesting shift to Kubernetes, in this case from OpenStack, comes from one of China’s largest e-commerce companies, JD.com. In a recent Kubernetes blog post, the Infrastructure Platform Development team at JD.com shared the reasons behind the move. One of the main reasons was bottlenecks in allocation pipelines: allocating resources to applications often took up to a week. This was before 2014, when JD.com was still running on physical servers, and the entire experience was marred by inflexibility and wasted resources.

Their initial container venture, called JDOS 1.0, was built on OpenStack and the Nova Docker driver. But when their clusters grew from 5,000 to 150,000 containers by November 2016, they figured it was time to make some changes.

JDOS 2.0 followed with the objective of separating the application and infrastructure layers by deploying a DevOps stack on Kubernetes that included GitLab, Jenkins, Logstash, Harbor, Elasticsearch and Prometheus. JD.com has also developed its own solution called Cane that communicates between Kubernetes and OpenStack.

PaddlePaddle is another interesting success story for Kubernetes. PaddlePaddle is the open source deep learning framework from Baidu. Its recent compatibility with Kubernetes’ cluster management system means it can now be deployed anywhere Kubernetes can run. The fact that Kubernetes can scale on demand makes it an ideal platform for PaddlePaddle. Yi Wang, Tech Lead at PaddlePaddle, wrote in an e-mail, “Many potential clients, especially those in traditional industries, are interested in running deep learning on their own on-premises cluster.” Deep learning is known to demand substantial CPU and memory resources to perform well, yet Kubernetes now makes it practical to deploy deep learning applications even on relatively low-end hardware.
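The on-demand scaling mentioned above is typically expressed declaratively. As a sketch, here is a hypothetical HorizontalPodAutoscaler (the Deployment name and thresholds are illustrative assumptions, not PaddlePaddle’s actual configuration) that grows or shrinks a set of worker pods based on observed CPU load:

```yaml
# Hypothetical autoscaler: scales a Deployment named "paddle-workers"
# between 2 and 20 replicas based on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: paddle-workers
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: paddle-workers          # illustrative name
  minReplicas: 2
  maxReplicas: 20                 # upper bound keeps modest clusters from overcommitting
  targetCPUUtilizationPercentage: 80
```

The cluster adds replicas when workers run hot and removes them when load drops, which is what makes bursty workloads like model training feasible on modest hardware.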

From its use as a tool alongside Docker to serving as the platform around which entire infrastructures are being built, Kubernetes has come a long way, and it continues to grow. With the new v1.6 release, which essentially frees it from being used exclusively with Docker, users can now choose from a number of container engines or even create their own custom ones (*cough* Moby Project *cough*). It’s difficult to make predictions when new factors come into play on a daily basis, but with the OpenStack crowd coming on board, for whom Kubernetes is the light at the end of the tunnel, things are definitely going to heat up in this sector.


Twain began his career at Google, where, among other things, he was involved in technical support for the AdWords team. Today, as a technology journalist, he helps IT magazines and startups change the way teams build and ship applications.

