Container images make application deployment easy and convenient. But alongside ease and convenience, you also need security. This is why container image security should be a priority when you migrate to Docker.
Docker’s popularity is due largely to the fact that, with containers, anyone can package code and dependencies into an image and easily publish it to a registry. From there, anyone can download the image and run containers from it. This has made code portable across teams and accelerated the application lifecycle.
Yet, containers can inadvertently expose vulnerabilities if you don’t take the required security measures. This is especially true when working with container images that are shared between users and organizations.
So, let’s discuss how to handle container images in a secure way.
Verify the source of images
Container images are downloaded from registries like Docker Hub or third-party registries like Quay. These registries host container images from organizations and individuals alike. There are official repositories from most IT vendors, and many unauthorized ones as well. Across the application lifecycle, developers, QA and IT will download many images for different needs. It’s important to monitor these images, and perform checks before they are pulled and run.
To do this, you can enable Docker Content Trust, which uses digital signatures to verify that the images you pull from a registry are exactly what their publishers signed. This helps you whitelist official repositories from authorized, trusted sources.
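Enabling Content Trust is a one-line setting in your shell; a minimal sketch (the image tag shown is just an example):

```shell
# Enable Docker Content Trust for this shell session.
# With this set, `docker pull`, `docker run`, and `docker build`
# refuse image tags that lack a valid signature.
export DOCKER_CONTENT_TRUST=1

# This pull would now fail unless the tag is signed
# (uncomment to try; requires the Docker CLI):
# docker pull alpine:3.19
```

To enforce this for everyone, set the variable in your CI runners and developer shell profiles rather than relying on each user to remember it.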
If, for example, you need to work with unverified images from partners and vendors, you could consider upgrading your image scanning to a more robust container security tool like Twistlock. It not only scans images, but also lets you set up custom alerts whenever anyone attempts to pull a suspicious image.
Implement robust access controls
Both where container images come from and how users work with them can leave images compromised. This is why access control is extremely important for container images.
By default, the processes inside a container run as root. This is not good practice from a security point of view. When building an image, create a dedicated non-root user and switch to it, and keep any users you add at a non-root access level. Certain users may genuinely need root access to complete certain tasks, but these exceptions should be made only within the containers that perform the task, and only for as long as necessary. This task-centric access control ensures that even if one user account is compromised, attackers can’t inflict much damage on the rest of the system.
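A minimal Dockerfile sketch of dropping root (the “app” user and group names are illustrative):

```dockerfile
FROM alpine:3.19

# Create an unprivileged user and group for the application.
RUN addgroup -S app && adduser -S -G app app

# Drop root: every later instruction, and the container's main
# process, now runs as the non-root "app" user.
USER app

CMD ["sh", "-c", "whoami"]
```

For one-off exceptions, `docker run --user` can override the image’s user at launch time, rather than baking root access into the image itself.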
Manually changing user privileges for every container, every time, doesn’t scale; this task needs to be automated. A platform like Twistlock enables role-based access control (RBAC) for images, and lets you assign privileges to users based on their job function. You can configure RBAC with fine-grained rules and ensure that every user has exactly the privileges needed to carry out their tasks: no more, and no less.
Keep containers lightweight
Developers are attracted to containers because of how much lighter they are compared to virtual machines (VMs). Even so, it’s easy to load so many packages onto a container that the image bloats to more than 100 MB, when an ideal image should be just tens of MBs.
When selecting an OS for your image’s base layer, look for a minimalist option. There are several good options, such as BusyBox, Alpine Linux, and RancherOS. Additionally, install only the packages that are required for a container to perform its task. This improves the performance of containers, and importantly, reduces the attack surface area.
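As an illustrative sketch, a minimal Alpine-based image for a small Python script (the file names are assumptions) might look like this:

```dockerfile
# Start from a minimalist base (roughly 5 MB) rather than a full distro.
FROM alpine:3.19

# Install only what the service needs; --no-cache avoids storing
# the package index in a layer, keeping the image small.
RUN apk add --no-cache python3

COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Every package you leave out is one less binary an attacker can exploit inside a compromised container.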
Keep images healthy
Once you’ve followed all best practices to set up your images the right way, it’s important to monitor their health during runtime. This requires routine “healthchecks” on the containers. If a healthcheck fails, Docker Engine marks the container as unhealthy, and an orchestrator such as Docker Swarm can automatically replace it. This way you can keep the system healthy even if individual containers are found to be vulnerable.
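In a Dockerfile, such a check is declared with the HEALTHCHECK instruction. A sketch, assuming the service exposes an HTTP health endpoint on port 8080:

```dockerfile
# Every 30s, fetch the service's health endpoint; after three
# consecutive failures, Docker marks the container "unhealthy",
# and an orchestrator such as Docker Swarm can replace it.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -q -O /dev/null http://localhost:8080/health || exit 1
```

You can inspect the result at any time with `docker inspect`, which reports the container’s current health status alongside recent check output.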
An important practice for ensuring the good health of your containers is to keep container images updated to the latest versions of their components and apply security patches to them frequently. You also need to be able to scan images during runtime to find vulnerabilities, and patch them promptly.
Detecting vulnerabilities is not an easy task, as your system could run tens of thousands of containers. At this scale, you require a threat detection tool like Twistlock that monitors container runtime with the help of machine learning algorithms. It is able to spot concerning patterns and alert you to their impact. Finding this needle in the haystack is not possible by manually poring over log data and metrics; it takes intelligent algorithms and a modern threat detection platform like Twistlock.
Handle confidential data with care
Even when users have read-only access, you still need to watch what data you store in your containers. For example, you should never store secrets like passwords, tokens, keys, and confidential user information inside Dockerfiles. Even if this data is deleted later, it can be retrieved from the image’s history.
Instead, you should use the secrets management features that come with both Kubernetes and Docker Swarm. Each is designed to store secrets in encrypted form and to ensure that, when retrieved, they can be decrypted only by authorized users and services.
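As a sketch of how this looks in practice, here is a secret wired into a service in a Docker Compose file (the service name, secret name, and file path are all illustrative):

```yaml
# docker-compose.yml fragment: the secret's value lives outside
# the image, so it never appears in any image layer or history.
services:
  web:
    image: nginx:alpine
    secrets:
      - db_password   # mounted at /run/secrets/db_password in the container

secrets:
  db_password:
    file: ./db_password.txt   # with Swarm, use `external: true` instead
```

The application reads the secret from the in-container file at startup, rather than from an environment variable or a baked-in config.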
Container images are likely the most fun part of the Docker experience. Yet, they can also be the most dangerous from a security standpoint. By understanding the various nuances of Docker image security, you can ensure your cloud-native apps are even more secure than your legacy apps ever were.