Docker vs. Kubernetes

“Docker vs. Kubernetes.” That’s a phrase you hear frequently these days. Unfortunately, its meaning is harder to unpack than it may first appear, largely because “Docker” can refer to multiple things (a container runtime, an orchestrator associated with that runtime, a company, and more).

Keep reading for everything you’ve ever wanted to know about how Docker relates to Kubernetes. Are they competing products? Do they work together? What’s the difference? How are they similar? We’ll answer these questions and more.

Speaking Loosely

We all use words loosely sometimes, and for the most part, it doesn’t cause serious problems, because everyone involved in the conversation understands at least approximately what they’re supposed to mean.

When it comes to technology, however, it is important to keep track of the precise meaning of key terms, even if we continue to use them loosely in everyday conversation.

And so it is with Docker and Kubernetes. It’s easy (and not at all uncommon) to speak of them as if they were in competition with each other, when the truth is, well, much more complex.

Docker Basics

Let’s start with Docker.

Docker didn’t invent containers. Before Docker, there was, for example, LXC, a container-deployment system built on features of the Linux kernel. LXC made it possible to run application code in a stripped-down, virtualized environment which included the system resources required to run the application, but which was isolated from the rest of the system by means of Linux namespaces and cgroups.

Like other early container-style methods of virtualization running at the level of the Linux OS, however, LXC offered a relatively limited set of API commands, and included only basic features for managing containers.

Docker to the Rescue

When Docker came along, it filled the need for a much more full-featured container-management system. It provides a framework for creating, managing, and deploying individual containers, or containers in relatively small groups. Docker includes a container-management engine, a full API, a Linux CLI plus desktop development environments for Windows and macOS, and Docker Hub, the largest library of container images currently available.
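
To see what that API looks like in practice, here is a minimal sketch using Docker’s official Python SDK (the docker package); the image, port mapping, and container name are illustrative placeholders, and it assumes a local Docker daemon is running:

    import docker  # pip install docker

    # Connect to the local Docker daemon via the standard environment settings
    client = docker.from_env()

    # Pull the image from Docker Hub (if needed) and start a container
    container = client.containers.run(
        "nginx:latest",          # image name on Docker Hub (illustrative)
        detach=True,             # run in the background
        ports={"80/tcp": 8080},  # map container port 80 to host port 8080
        name="demo-nginx",       # hypothetical container name
    )

    print(container.status)  # e.g. "created" or "running"

    # Clean up
    container.stop()
    container.remove()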

Docker Makes Itself Necessary

Docker has become the de facto standard method of creating, deploying, and managing container images, and as such, it has succeeded in making itself virtually indispensable. If you’re working with containers, you are very likely to be using Docker, and you probably see Docker as the natural environment for developing container images.

But what about scale? If you think of containers as individual software components (using the term very loosely) wrapped up in a layer (or two) of portable infrastructure, an obvious question is, “How many instances do I need to deploy at any given time? One? A handful? Five thousand?”

For most containerized applications (particularly those which serve a large user base on the Internet, as opposed to a small local deployment), the number of instances required at any given time may be very high, and the demands on container-based resources in general and on the container infrastructure itself can be extreme.

How do you manage containers at scale? How do you meet rapidly shifting demands for resources, and how do you efficiently balance loads so that those demands can be met? And what about service discovery and management of groups of containers across servers?

Conducting the Orchestra

You need to be able to manage containers, not as individual applications or components, but as a highly dynamic and rapidly changing swarm, with the kind of flexible and highly responsive intelligence that a swarm requires.

This is where container-management software moves up to the next level — container orchestration.

And what is container orchestration? It’s the process of managing containers at scale, under the often very demanding conditions faced by real-world applications. In the traditional, relatively small-scale world of desktop and local-network-based software, many of the problems addressed by orchestration are either trivial in scale (deploying individual services), or built into the application code at a basic level (service discovery), or both.

With container-based applications, the equivalent issues must be addressed at the level of infrastructure (rather than application code), and often at a scale which can place a great deal of stress on both application and infrastructure resources. It is these needs which container orchestration software addresses.

Docker Swarm

There are a number of container orchestration systems currently available — in fact, Docker offers its own orchestration tool, which it calls swarm mode, or more commonly, Docker Swarm. Swarm mode joins multiple Docker hosts into a single cluster, in which manager nodes schedule and control a swarm of containers across the participating hosts.

It can provide both replicated services (multiple instances of a container-based service working together for scaling and load-balancing) and global services (distributed instances, providing service availability at each node), along with relatively fine-grained management of service configuration at scale. And the truth is that Docker Swarm and Kubernetes can be compared, because they serve similar functions.
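
To make the two service types concrete, here is a hedged sketch using the same Docker Python SDK; it assumes a fresh single-node swarm initialized on the local host, and the image and service names are placeholders:

    import docker
    from docker.types import ServiceMode

    client = docker.from_env()

    # A host must join (or start) a swarm before services can be created;
    # swarm.init() makes this host a manager in a new single-node swarm
    client.swarm.init()

    # Replicated service: three instances of the same container, scheduled
    # and load-balanced by the swarm managers across available nodes
    client.services.create(
        "nginx:latest",
        name="web",  # hypothetical service name
        mode=ServiceMode("replicated", replicas=3),
    )

    # Global service: exactly one instance on every node in the swarm
    client.services.create(
        "nginx:latest",
        name="node-agent",  # hypothetical service name
        mode=ServiceMode("global"),
    )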

Swarm vs Kubernetes

So, you might wonder, if Docker offers Docker Swarm, why would anyone want to use another orchestration tool? As is often the case in software development, the answer is rather far from simple, but the bottom line is that it is Kubernetes, rather than Docker Swarm, which has come to dominate the orchestration tool market. In part, this is because Kubernetes works with other container engines besides Docker, and because more open source development has gone into Kubernetes-related tools and resources.

However you rate the comparative virtues of Docker Swarm and Kubernetes, the fact remains that at this point, Docker (in both its open source and Enterprise versions) not only offers but heavily emphasizes full integration with Kubernetes.

Kubernetes Basics

What’s so good about Kubernetes?

It handles virtually all of the tasks that are required for container orchestration, and it is designed for rapid, automatic scaling, based on demand and other factors. It is able to manage containers at all scales very effectively because it thoroughly abstracts the container infrastructure, placing containers in an environment which is entirely under the control of Kubernetes.

Abstracting the Container World

In Kubernetes, the most fundamental level of abstraction is the pod, which consists of a group of containers representing an application or identifiable service, with a single IP address, shared namespaces, and shared volumes. Pods are abstracted into services, which typically represent groups of pods performing the same function. When an application requests the functionality provided by a given service, the service takes care of routing traffic to individual pods.

Kubernetes nodes handle the management tasks associated with pods and services, and the nodes are in turn managed at the cluster level.
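
Here is a minimal sketch of that pod/service relationship, using the official Kubernetes Python client; it assumes an existing cluster reachable through ~/.kube/config, and the service name "web" and its labels are hypothetical:

    from kubernetes import client, config  # pip install kubernetes

    # Load cluster credentials from ~/.kube/config
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # A service selects its pods by label; read the (hypothetical)
    # "web" service and reconstruct its label selector
    service = v1.read_namespaced_service(name="web", namespace="default")
    selector = ",".join(f"{k}={v}" for k, v in service.spec.selector.items())

    # The pods behind the service each have their own IP, but callers
    # only ever see the service's single, stable address
    pods = v1.list_namespaced_pod(namespace="default", label_selector=selector)
    for pod in pods.items:
        print(pod.metadata.name, pod.status.pod_ip)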

What Kubernetes Can Do

This allows Kubernetes to handle key container orchestration functions, such as:

  • Management of basic container deployment and provisioning tasks.
  • Service discovery, using pod/service architecture and node-level container management.
  • Rapid scaling, based on replication or destruction of pods as required (see the example after this list).
  • Extremely high availability of services, as a result of both service discovery and scaling.
  • Cross-platform operation, with containers deployed in highly heterogeneous environments.
  • Strong support for CI/CD, with an infrastructure that allows uninterrupted operation during rollouts.
  • Built-in monitoring of container health, along with the appropriate responses.
  • Load-balancing features, both built-in and as added services.
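
As one example of that scaling in action, here is a sketch using the Kubernetes Python client to resize a hypothetical deployment named "web"; the name, namespace, and replica count are all illustrative:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Declare the desired number of replicas; Kubernetes then creates
    # or destroys pods until the actual state matches the request
    apps.patch_namespaced_deployment_scale(
        name="web",            # hypothetical deployment name
        namespace="default",
        body={"spec": {"replicas": 50}},
    )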

Working Together

So the truth is, it’s not (and never really was) Docker vs. Kubernetes. It’s Docker AND Kubernetes — and together, they have come to form a rock-solid foundation for large-scale container deployment.


Michael Churchman is a Fixate IO contributor. He started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ’90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards.

