Microservices and Kubernetes are two of the hottest topics in tech today. How can you use them together? This article explores how Kubernetes can make a microservices architecture a reality.
SOA: A Quick History
In the early days of web application development, there were many hurdles. The biggest hurdles I recall trying to overcome were also rarely planned for, and included:
- The pains of deploying working software from a development system to production systems. This included web servers, application servers, databases, and anything else that prompts the cry of, "But it works on my machine!"
- Understanding all of the software dependencies (e.g. libraries, Java versions, OS patches, and so on).
- Integration with legacy systems.
The last point in particular inspired the adoption of service-oriented architectures (SOA), where older systems were accessed through a web interface. This included XML over HTTP, and eventually REST services. Too often, however, these services were built in the same manner as the web applications that used them: as huge monolithic applications that required overcoming the very same hurdles listed above. All of this history helps explain where containers came from.
Containers and You
Containers are lightweight abstractions that help to package, deploy, and manage running software. They isolate components, define quotas on server resources, and make software dependencies explicit and easy to communicate. And because containers operate at the application level rather than the physical server level, they often serve as the foundation for serverless implementations as well.
The efficiencies gained through all of these benefits make containers quite useful, and as a result, they caught on quickly with developers. For instance, developers build software within a container, packaged with its dependencies, and later deploy it to test and production servers with a single command. You can deploy and bring software components online quickly, and spin up new instances with lower overhead than virtualization. The efficiency and automation involved suits DevOps and cloud architecture very well, and puts control over the production environment into developers' hands.
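As a sketch of that "package once, deploy anywhere" workflow, consider a minimal Dockerfile (the file names, base image, and service here are hypothetical) that bundles a small Python service with its dependencies into one image:

```dockerfile
# Hypothetical Dockerfile for a small Python service.
# The image bundles the runtime, dependencies, and code together,
# so the same artifact runs on a laptop, a test server, or production.
FROM python:3.11-slim

WORKDIR /app

# Install declared dependencies first, so this layer is cached
# between builds when only the application code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The container starts the same way everywhere it is deployed
CMD ["python", "app.py"]
```

Building and running then each take a single command, along the lines of `docker build -t my-service .` followed by `docker run my-service`; the image tag is illustrative.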
Microservices with Containers and DevOps
The goals of a microservice-based approach include:
- Enforcing a more modular software structure (reducing dependencies)
- Continuous delivery of isolated software updates (reducing the impact of change)
- Well-defined contracts (with interface-driven decoupling)
- Independence from language, platform, and OS (real or virtual)
- Scalability and portability (quick instance spin up and down across servers)
Given these goals, containers as defined above fit naturally as a development model and delivery vehicle for microservices. Developers like them for their predictability: run a single command, and a container with working software inside it is deployed and running in production almost instantaneously. This allows developers (and operations) to work more quickly, more efficiently, and at greater scale.
Another aspect of containers appeals to a microservices architecture: they are extremely lightweight, running on top of (yet abstracted from) the OS at the kernel layer. They start much faster, use far fewer resources, and are more isolated and controllable than software installed at the OS level. Containers truly put the micro in service.
Happily Ever After with Kubernetes
With containers and microservices, almost all of the early hurdles are removed (not just jumped over). This is where Kubernetes comes in. The marriage of Kubernetes with containers for a microservices architecture makes a lot of sense, and clears the path for efficient software delivery.
As a container orchestration tool, Kubernetes helps to automate many aspects of microservices development, including:
- Container deployment
- Elasticity (scaling up and down to meet demand)
- The logical grouping of containers
- Management of containers and applications that use them
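The elasticity point above can be expressed declaratively. As a sketch, a HorizontalPodAutoscaler (referencing a hypothetical Deployment named `quote-service`) tells Kubernetes to add and remove pod replicas as CPU load changes:

```yaml
# Hypothetical autoscaling sketch: Kubernetes keeps CPU utilization
# near the target by scaling replicas between the min and max bounds.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: quote-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quote-service   # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Applied with `kubectl apply -f`, this removes manual capacity planning from the day-to-day operation of the service.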
Kubernetes begins by helping organize containers into pods, grouping components to share resources. It helps to further abstract pods with labels to form a hierarchical grouping. Further, Kubernetes allows you to define services as groups of pods that work together as a single, well, service. Perhaps this is a stock quote service that serves up stock prices to requesting applications, a REST interface to an inventory database, or an online ecommerce implementation. Here are some practices worth following when building microservices on Kubernetes:
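To make the pod, label, and service relationship concrete, here is a sketch (the stock quote service from above, with a hypothetical image name) of a Deployment whose pods carry an `app` label, fronted by a Service that selects them by that label:

```yaml
# Hypothetical sketch: a Deployment manages replicated pods,
# and the matching Service groups them behind one stable endpoint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stock-quote
  labels:
    app: stock-quote
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stock-quote
  template:
    metadata:
      labels:
        app: stock-quote    # the label the Service selects on
    spec:
      containers:
        - name: quote-api
          image: example/stock-quote:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stock-quote
spec:
  selector:
    app: stock-quote        # routes to any pod with this label
  ports:
    - port: 80
      targetPort: 8080
```

Requesting applications talk to the `stock-quote` Service; Kubernetes load-balances across whichever pods currently carry the label, so individual pods can come and go freely.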
- Use the API server to implement REST interfaces.
- Take advantage of continuous deployment with kubectl, a build workflow tool, and a CI server.
- Implement API security with Kubernetes Ingress or an available API gateway.
- Use Kubernetes pods and services to group multiple containers, and limit the amount of running software components within a single container.
- Include Kubernetes in your testing: test the container cluster itself, and also deploy pods that contain unit testing applications so you can exercise services from inside the cluster.
- Avoid the use of the root user and password, or any user password, within containers. (Instead, use role-based access control and other security-specific best practices.)
- Manage your container IP addresses and ports with Kubernetes Cluster Networking support.
- Use a DevOps approach and toolset specifically for Kubernetes-orchestrated containers.
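The Ingress point above can be sketched as a manifest (hostname, Service name, and TLS secret here are all hypothetical) that terminates TLS at the edge and routes traffic to a backing Service:

```yaml
# Hypothetical sketch: an Ingress exposing a service over HTTPS.
# TLS terminates here, and requests route to the named Service,
# keeping individual containers off the public network.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stock-quote-ingress
spec:
  tls:
    - hosts:
        - quotes.example.com
      secretName: quotes-tls       # hypothetical TLS certificate secret
  rules:
    - host: quotes.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: stock-quote  # hypothetical backing Service
                port:
                  number: 80
```

An API gateway can sit in the same position when you need richer policies (authentication, rate limiting) than a plain Ingress provides.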