Containers and Docker are here to stay. But for many, getting from a non-container environment to a container environment can be tricky. Moving to any new technology is usually challenging, and shifting the entire DevOps chain is particularly hairy. The trick is to do it in bite-size chunks. This article walks through the challenges of deploying a container-based DevOps chain, including development, build environments, QA/Test environments, staging, and finally production.
Containers and the Cloud
One of the motivating factors for moving to containers is their portability: build once, run everywhere. But what if your application depends on services in the cloud? That would typically tie you to cloud-only deployment. Fortunately, containers allow for fairly loose coupling between components, so instead of a cloud service, you can run a container with an equivalent service when running on-premise. Some back-end examples: Amazon's RDS in the AWS cloud maps to MySQL or MariaDB on-premise, and S3 on AWS maps to OpenStack Swift, LeoFS, or Minio on-premise. Most cloud services have equivalent on-premise applications. For services that don't, emulating simplified versions in code might be feasible, too.
An Application Definition
For the sake of this discussion, let's look at a typical cloud application that's either being developed in containers or migrated to them. This application is designed to run primarily in the cloud, so it might use some cloud services as well as application services that span multiple machines. Let's assume it's a multi-tiered application with a load balancer sitting in front of a few web application servers, which talk to back-end services like S3, RDS, and memcache. Here we have a couple of containers that run the application logic (in this case, the web application containers). All other entities are either cloud services or applications running on one of the application machines.
The figure here shows the logical design of the application. A, B and C are containers that implement the application and the numbered boxes represent the cloud services they depend on.
Portability of containers isn't necessarily a function of the container technology itself, but rather of how well you keep container state (if any) outside the container and ensure that the application running inside sees the same environment no matter where it runs. One way to achieve this is to keep the inter-container configuration glue outside the container itself and construct that configuration differently for each environment the application needs to run in. That way, the contents of the container stay unchanged: the same image runs in all environments. Native Docker tools like Docker Machine, Swarm and Compose have matured sufficiently to be usable for development.
Docker Compose, for example, lets you define an entire container stack and the container startup sequence, including container interconnectivity, in a single configuration file. It also allows specific environment variables and directories to be injected into containers at launch time. You could make this definition part of source control itself, which allows for rapid development using local resources.
The figure here shows how the application could be structured on a local development machine using Docker Compose. Apart from the application containers, containers are spun up for the service endpoints that would otherwise be cloud services.
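As a rough sketch of that layout, a development Compose file might pair the application container with local stand-ins for each cloud service. The service names, images, and credentials below are hypothetical, not taken from a real application:

```yaml
# docker-compose.yml — local development stack (hypothetical names and images)
version: "2"
services:
  web:
    build: ./web                # the application container, built from source
    ports:
      - "8080:80"
    environment:
      DB_HOST: db               # injected config points at the local stand-ins
      OBJECT_STORE_URL: http://objectstore:9000
      CACHE_HOST: cache
    depends_on:
      - db
      - objectstore
      - cache
  db:
    image: mysql:5.7            # on-premise stand-in for Amazon RDS
    environment:
      MYSQL_ROOT_PASSWORD: devpassword
  objectstore:
    image: minio/minio          # S3-compatible stand-in for AWS S3
    command: server /data
  cache:
    image: memcached:alpine     # stand-in for a managed memcache service
```

Because the application reads its endpoints from injected environment variables, only this file changes between environments; the `web` image itself stays identical.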
Build, Automatic Sanity & QA
Containers offer two very real benefits to the build and automatic sanity process:
- Using container technologies like Docker to encapsulate the build environment for different components lets you specify the build process as a well-defined recipe that leverages Docker’s cached filesystem layers to speed up build times considerably. It also ensures that the build environment is completely declarative. Each component and its dependencies could be built, updated or upgraded independently.
- Using Docker Swarm and Docker Compose lets you spread the automatic sanity load across a cluster of on-premise machines, with minimal changes to the developer's Docker Compose recipe. Alternatively, if you already have an orchestration tool in place that supports your container technology, that's probably the way to go.
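To make the first point concrete, here is a minimal sketch of a build-environment Dockerfile for one component. The base image, toolchain, and file names are assumptions for illustration, not details of the original application:

```dockerfile
# Declarative build environment for one component.
# Each instruction produces a cached filesystem layer: as long as the
# lines above it are unchanged, Docker reuses the cache on rebuilds,
# which is what speeds up repeated builds considerably.
FROM ubuntu:16.04

# Toolchain layer — rebuilt only when this line changes
RUN apt-get update && apt-get install -y build-essential python-pip git

# Dependency layer — rebuilt only when the dependency manifest changes
COPY requirements.txt /src/
RUN pip install -r /src/requirements.txt

# Source layer — the only layer rebuilt on a typical code change
COPY . /src/
RUN make -C /src
```

Ordering the slow-changing steps (toolchain, dependencies) before the fast-changing ones (source code) is what lets the layer cache do its work, and it keeps each component's build environment independently upgradable.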
The figure here shows how one could structure the application containers using a cluster of Docker machines on-premise, with the service endpoints being served by dedicated servers. Here we use Docker Swarm to spin up the application containers at the scale needed.
Staging & Production
At staging and production load, it's time to replace the on-premise commodity service components with real, scalable cloud services. Again, if the glue that holds the components together is managed by the orchestration tool, then the transition from on-premise to staging and production becomes trivial.
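One way to keep that glue outside the container is a per-environment Compose file: the same application image, with its endpoints repointed at the cloud services. The image tag and hostnames below are placeholders, not real values:

```yaml
# docker-compose.production.yml — identical application image, cloud endpoints
# (registry, tag, and hostnames are placeholders)
version: "2"
services:
  web:
    image: registry.example.com/myapp/web:1.4.2   # the same image QA tested
    environment:
      DB_HOST: myapp.rds.example.amazonaws.com        # real RDS endpoint
      OBJECT_STORE_URL: https://s3.amazonaws.com      # real S3
      CACHE_HOST: myapp.cache.example.amazonaws.com   # managed cache
```

Only this environment-specific file differs between development, staging, and production; the container image itself never changes as it moves down the pipeline.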
The figure here shows the application running at scale on the cloud. Scaling the application containers automatically to handle the load is the key.
Moving to a container-based environment takes work. But that is hardly an excuse for not doing it. It is possible to transition even the most entrenched environments into a modern delivery chain of containers, and leveraging a hybrid stack is a good way to do it.
Hybrid container stacks therefore offer a convenient way to transition the application from development through production while allowing it to run at the required scale at each point. The immutability of the application container as it transitions from a single container on the developer box to hundreds or thousands along the DevOps delivery pipeline ensures consistency for the application that’s difficult to achieve otherwise.