Securing your CoreOS Container


If you are reading this, you probably know about CoreOS and what it does. And you may be looking for a way to improve the security of your CoreOS container.

There’s good news: You can. The beauty of CoreOS is that it is super lightweight and customizable. Anyone can use CoreOS to run what they want, without fluff.

In this post, I’ll demonstrate how to use the customizability of CoreOS to improve security.

Overview

Let’s imagine that we have an application composed of multiple microservices working in sync. The application relies heavily on data sent to and from a third-party server that we cannot control.

Our target here is to create and execute a strategy that will allow us to stick to a tight security protocol, without any loose ends. It will follow just two broad checkpoints:

• Communicate only on ports that are absolutely necessary.
• Make all the microservices work in sync, without exposing them to the Internet.

We are going to implement this strategy in three steps to run a secure CoreOS container in a production environment.

1. Secure iptables
2. Enforce etcd
3. Use reverse proxy app servers (like NGINX)

Before we jump in, please familiarize yourself with configuring the `cloud-config` file.

Secure iptables

This is the first and crucial step towards securing your CoreOS containers. Here’s what we will do in this step:

• Reject all incoming traffic by default
• Listen and allow traffic on ports that are absolutely necessary
• Allow freely moving traffic to the outside world from the container

This way, we control what goes in and comes out of the containers using plain iptables rules. The following code snippets implement these three steps; they go into your `cloud-config` YAML file.

Let’s start by blocking all the incoming traffic by default. This piece of code will go under the `content:` section of the `write_files:` block in your `cloud-config` YAML:

 :INPUT DROP [0:0]

This sets the default policy to drop all incoming traffic, which is a deliberately extreme starting point. In the next step, we will create a “whitelist” of ports that are allowed to listen to the outside world.

For our scenario, these are the ports that we need:

• Port 80 for allowing HTTP traffic for the end user
• Port 443 for allowing HTTPS traffic for the end user
• Ports 3000 and 3001 to allow incoming traffic from the third-party API servers

Here’s the general syntax that allows this (assuming that everything is TCP traffic):

 -A INPUT -p tcp -m tcp --dport <port> -j ACCEPT

So in our case, this would be:

 -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3001 -j ACCEPT

There’s a lot more you can do with your `cloud-config` YAML. Head over to the docs to find out!

Now that we have restricted incoming traffic, we need to shift our focus to the outgoing traffic from the container. Let’s now set the container to access any port it wants by adding this to our `cloud-config` file’s `content:` section:

 :OUTPUT ACCEPT [0:0]

Finally, don’t forget to verify that the rules are in place by running:

 sudo iptables -nvL
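Putting the pieces together, here is a sketch of how these rules can be embedded in `cloud-config` via `write_files`, using the `/var/lib/iptables/rules-save` path that CoreOS’s `iptables-restore.service` reads. The loopback and conntrack rules are additions beyond the rules above that you will almost always want, so that local traffic and replies to outbound connections are not dropped:

```yaml
#cloud-config
write_files:
  - path: /var/lib/iptables/rules-save
    permissions: 0644
    owner: root:root
    content: |
      *filter
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [0:0]
      # allow loopback and replies to connections we initiated
      -A INPUT -i lo -j ACCEPT
      -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      # whitelist only the ports we need
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 3000 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 3001 -j ACCEPT
      COMMIT
coreos:
  units:
    - name: iptables-restore.service
      enable: true
      command: start
```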

With this, we’ve secured the traffic flow that goes in and out of the container!

Enforce etcd

If you are not fully aware of what etcd is, you can learn more about it here.

One of the first things you’ll notice about etcd is that it uses HTTP by default for transferring data. This is not secure, so we’re going to force etcd to use TLS/SSL for all traffic in and out of the container.

Here are the steps we would take to achieve this:

1. Get a new discovery URL from etcd.io:

 curl -w "\n" "https://discovery.etcd.io/new?size=3"

2. Add this to the `cloud-config` YAML:

 etcd2:
   discovery: https://discovery.etcd.io/974a7f7e95e8cc49d0db22ae127f6184
   advertise-client-urls: "https://$public_ipv4:2379" # change this to your desired IP and port
   initial-advertise-peer-urls: "https://$private_ipv4:2380" # change this to your desired IP and port
   listen-client-urls: "https://0.0.0.0:2379,https://0.0.0.0:4001" # change this to your desired IP and port
   listen-peer-urls: "https://$private_ipv4:2380,https://$private_ipv4:7001" # change this to your desired IP and port
 units:
   - name: etcd2.service
     command: start
   - name: iptables-restore.service
     enable: true
     command: start

This will route all traffic through TLS/SSL. But there’s more to this story. We now need to generate the SSL certificates and add them to our environment before we push this live.
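How you generate the certificates is up to you (CoreOS documentation commonly uses `cfssl`). As a minimal illustration, a throwaway CA and a member certificate matching the `/home/core/*.pem` paths used below can be created with `openssl`:

```shell
# Create a self-signed CA (for illustration only; use cfssl or a real CA in production)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -sha256 -days 365 \
        -out ca.pem -subj "/CN=etcd-ca"

# Create a key and certificate signing request for the etcd member
openssl genrsa -out coreos-key.pem 2048
openssl req -new -key coreos-key.pem -out coreos.csr -subj "/CN=coreos"

# Sign the member certificate with the CA
openssl x509 -req -in coreos.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out coreos.pem -days 365 -sha256
```

Copy the resulting `ca.pem`, `coreos.pem`, and `coreos-key.pem` to `/home/core/` on the host. For real client and peer verification you would also want subjectAltName entries for your node’s IPs.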

Use the code block below, after you generate the SSL certificates for your container:

write_files:
  - path: /run/systemd/system/etcd2.service.d/30-certificates.conf
    permissions: 0644
    content: |
      [Service]
      # client environment variables
      Environment=ETCD_CA_FILE=/home/core/ca.pem
      Environment=ETCD_CERT_FILE=/home/core/coreos.pem
      Environment=ETCD_KEY_FILE=/home/core/coreos-key.pem
      # peer environment variables
      Environment=ETCD_PEER_CA_FILE=/home/core/ca.pem
      Environment=ETCD_PEER_CERT_FILE=/home/core/coreos.pem
      Environment=ETCD_PEER_KEY_FILE=/home/core/coreos-key.pem
  - path: /run/systemd/system/fleet.service.d/30-certificates.conf
    permissions: 0644
    content: |
      [Service]
      # client auth certs
      Environment=FLEET_ETCD_CAFILE=/home/core/ca.pem
      Environment=FLEET_ETCD_CERTFILE=/home/core/coreos.pem
      Environment=FLEET_ETCD_KEYFILE=/home/core/coreos-key.pem

Use reverse proxy app servers

The final step in our strategy to secure our CoreOS container is making sure pods are securely tucked inside our container, without exposing their traffic to the outside world. This is critical, since some pods do need access to the Internet.

To make sure all the pods communicate with each other without relying on an external source, we’re going to place an application server right in front of them. This way, whenever a pod wants to communicate, all traffic is routed through an app server (NGINX in our case). The app server decides which port to use for which traffic. So here, each namespace gets its own listening port through which it can communicate with the appropriate pod.
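As a sketch of this idea, an NGINX server block might listen on a dedicated port for one namespace and proxy requests to the pod behind it. The port numbers and upstream address here are hypothetical:

```nginx
# Hypothetical example: traffic for one namespace arrives on port 3000
# and is proxied to a pod listening on an internal-only address.
server {
    listen 3000;

    location / {
        proxy_pass http://127.0.0.1:8080;  # internal pod address (assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because NGINX reloads its configuration with `nginx -s reload`, new routes can be added without restarting the proxy process, which is what makes this approach work for the real-time routing described in the summary below.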

To sum things up, here’s what we did:

1. Created an iptables configuration in your `cloud-config` to control the traffic that flows in and out of the container

2. Enforced etcd to utilize TLS/SSL with your certificates to transfer data only via secure channel

3. Used a reverse proxy like NGINX or gogeta to route inter-pod communication traffic in real-time without having to restart a process


Swaathi Kakarla is the co-founder and CTO at Skcript. She enjoys talking and writing about code efficiency, performance and startups. In her free time she finds solace in yoga, bicycling and contributing to open source. Swaathi is a regular contributor at Fixate IO.

