Being part of the technical team at Resin.io has exposed me to a lot of new technologies and development practices, and one of those has been software containers. Containers make it easy to ship applications with a standard list of parts and instructions, and bringing this approach to connected devices greatly simplifies fleet management. Docker has many features that make it suitable for IoT applications, but one recent release has us particularly excited: with Docker’s new multi-stage build capabilities, you can make your application images anywhere from 5 to 100 times smaller.
Why is this important?
When updating connected Linux devices, every byte counts. Storage in IoT devices is generally limited, network bandwidth is often expensive, connectivity is intermittent, and you want your update to get to your devices as quickly as possible. For all these reasons, it pays to make the containers you ship to your devices as tiny as they can be.
With this in mind, we recently moved our image builder to Docker version 17.05. The builder, which converts Dockerfiles to container images, can now perform all the build steps for your application, keep the files you need for runtime, and discard anything that isn’t necessary. For the application developer, this means much smaller images, faster updates, lower bandwidth consumption, and more available space on your devices.
What’s not to love?
If you’re familiar with Dockerfiles, you’ll know that they generally start with a FROM statement that specifies a base image to build upon. This could be followed by any number of tasks that result in your final image, such as downloading and installing dependencies or specifying runtime configurations. With multi-stage builds, you have the option to use multiple FROM statements. Each FROM statement begins a new stage, and each stage after the first can take advantage of the work done in a previous stage, copying from it only what is necessary. This allows you to call upon all the tools you need for building without carrying them through to the final image.
A simple example in the resin.io repo shows the immediate benefit this new feature gives our users. The project uses some Node.js modules that require a number of build tools, leading to a large image (434 MB). With a multi-stage build, however, we can copy the required modules into a runtime container after they are built, leaving behind everything that isn’t needed to run the application. The result is a final image of just over 80 MB, a fraction of the original size.
When you look at the Dockerfile, you’ll see two lines that are important for the multi-stage build:
FROM resin/raspberry-pi-alpine-node:6.11.1 AS buildstep
and further down:
COPY --from=buildstep /usr/src/app/node_modules node_modules
The first line gives a name to the first stage (buildstep). This is an optional, but recommended, convention. If no name is given, the first stage can be referred to as 0. The COPY line is in the second stage, and it uses the name of the first stage and the full path of the desired files to specify what part of the first stage should be included in the final image.
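To see how those two lines fit together, here is a minimal sketch of a two-stage Node.js Dockerfile. The `-slim` tag, `index.js` entry point, and working directory are illustrative assumptions for this sketch; the actual Dockerfile in the example repo may differ in its details.

```dockerfile
# Stage 1: install native modules with the full build toolchain available
FROM resin/raspberry-pi-alpine-node:6.11.1 AS buildstep
WORKDIR /usr/src/app
COPY package.json .
RUN npm install --production

# Stage 2: start fresh from a slimmer runtime image (tag is an assumption)
FROM resin/raspberry-pi-alpine-node:6.11.1-slim
WORKDIR /usr/src/app
# Pull in only the built modules from the first stage; the build tools,
# caches, and intermediate files from stage 1 are left behind
COPY --from=buildstep /usr/src/app/node_modules node_modules
COPY . .
CMD ["node", "index.js"]
```

Only the layers of the second stage end up in the final image, which is why the compilers and headers needed by `npm install` add nothing to what ships to the device.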
If your application uses a compiled language, you get even more benefit from multi-stage builds. In this case, all your build tools and source code can be left behind, as you only need to copy the binary that is created in the first stage. We’ve created a simple web server example in Golang to demonstrate this. The base image used in the first stage is around 640 MB, and the final image comes in at just over 5 MB. Quite an improvement!
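A sketch of what such a Golang Dockerfile can look like, assuming a single-file `main.go` and illustrative base-image tags (the example repo’s actual Dockerfile may use different images and paths):

```dockerfile
# Stage 1: compile the server using the full Go toolchain image
FROM golang:1.8 AS buildstep
WORKDIR /go/src/app
COPY main.go .
# Disable cgo so the binary is statically linked and does not need
# a C library in the final image
RUN CGO_ENABLED=0 go build -o server main.go

# Stage 2: ship only the compiled binary on a minimal base image
FROM alpine:3.6
COPY --from=buildstep /go/src/app/server /server
CMD ["/server"]
```

The final image contains the Alpine base plus a single binary; the Go toolchain and source code stay in the discarded first stage, which is how a ~640 MB build environment can yield an image of just a few megabytes.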
Because these images are built on our servers, you can get started with multi-stage builds without any OS upgrades or other work on your end. With the examples we shared, you can begin to see how you might optimize your production images to be as small as possible. Give it a try, and let us know how it goes!