
Envoy Service Mesh Proxy and Microservices


In the era of microservices and cloud applications, old and new patterns emerge to accommodate changes in architecture layout. These are called Cloud Design Patterns, and they are suitable for building reliable, scalable, and secure applications in the cloud. In this article, you will learn about Envoy Service Mesh. I’ll walk you through how to create a service proxy definition for a small Flask application. You will also learn what the Service Mesh Design Pattern is and how it works. If you would like to run the code samples presented in this article locally, you’ll need to install Docker and Docker Machine.

The source code for this article can be found on GitHub.

A Mesh or a Mess?

Of course, when you hear this terminology, you may wonder: What is a service mesh and why is it needed?

Generally speaking, a service mesh is an abstraction layer that works alongside the applications and handles concerns like service-to-service communication, resilience patterns such as circuit breakers and retries, and observability patterns such as monitoring and tracing. The applications should not have to be aware of the service mesh’s existence.

Specifically for Envoy, we can say that it is “an open source edge and service proxy, designed for cloud-native applications.” It was originally developed by Lyft as a high-performance, distributed C++ proxy designed for standalone services and applications, as well as for large microservices service meshes.

It employs what we refer to as a sidecar pattern. A sidecar is a process that gets deployed alongside the application (one-to-one), and your application interacts with the outside world through the Envoy proxy.

This means that, as an application developer, you can take advantage of the features provided by Envoy through configuration files alone (see the retry-policy sketch after the list below). Envoy also provides several other sidecar-style features, like:

  1. Service discovery
  2. Load balancing
  3. Retry policies
  4. Circuit breakers
  5. Timeout controls
  6. Back pressure
  7. Metrics/stats collection
  8. Tracing
  9. Rate limit policies
  10. TLS between services
  11. gRPC
  12. Request shadowing
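
For example, a retry policy can be attached to a route purely through configuration. Here is a hedged sketch using the same v1 route syntax as the configs later in this article; the retry_on condition and retry count are illustrative assumptions, not part of the example app:

// illustrative sketch, not part of the example app
"routes": [
  {
    "timeout_ms": 0,
    "prefix": "/",
    "cluster": "app",
    "retry_policy": {
      "retry_on": "5xx",
      "num_retries": 3
    }
  }
]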

Additionally, this means your applications don’t have to bundle lots of libraries, dependencies, and transitive dependencies, with the hope that each developer implements these features properly.
A sidecar is one deployment model of the Service Mesh pattern. There is also a per-host proxy deployment pattern, where one proxy is deployed per host.

In this article, we’re going to look at how Envoy tackles this challenge. Specifically, we’ll define an Envoy Proxy config to handle frontend calls and an Envoy Proxy config to forward our calls to a small Flask application. Let’s start.

Envoy Example Application

For this example we are going to use Docker to set up a simple Envoy proxy cluster for a client and a service. A client is just an Envoy proxy that forwards calls to the “upstream” service. The service is a small Flask application that displays the current date and time. As Envoy is deployed as a sidecar alongside the service, all of the calls go through the Envoy Proxy sidecar.

Let’s create a new machine which will hold the containers. A minimal sketch, assuming the VirtualBox driver (swap in whichever driver fits your environment); the machine is named default to match the docker-machine ip default call used later:
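
$ docker-machine create --driver virtualbox default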

Next, you’ll create the configuration for the frontend Envoy Gateway. First, create an Envoy config that will act as a frontend proxy server:

$ mkdir gateway
$ touch gateway/front-proxy-envoy.json

This config file is defined as:

{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:80",
      "filters": [
        {
          "name": "http_connection_manager",
          "config": {
            "codec_type": "auto",
            "stat_prefix": "ingress_http",
            "route_config": {
              "virtual_hosts": [
                {
                  "name": "app_backend",
                  "domains": [
                    "*"
                  ],
                  "routes": [
                    {
                      "timeout_ms": 0,
                      "prefix": "/",
                      "cluster": "app"
                    }
                  ]
                }
              ]
            },
            "filters": [
              {
                "name": "router",
                "config": {}
              }
            ]
          }
        }
      ]
    }
  ],
  "admin": {
    "access_log_path": "/dev/null",
    "address": "tcp://0.0.0.0:8001"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "app",
        "connect_timeout_ms": 250,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "features": "http2",
        "hosts": [
          {
            "url": "tcp://app:80"
          }
        ]
      }
    ]
  }
}

This defines a very simple reverse proxy that listens on 0.0.0.0:80 and routes all incoming HTTP requests to the app cluster, whose single host entry points at the app container on port 80. The cluster_manager entry defines the connection policies (timeouts, load balancing, protocol features) for each upstream cluster.

Next, you’ll create an Envoy config that will act as the backend sidecar proxy. Make sure its listener port matches the port of the app host entry in the frontend proxy’s cluster configuration (port 80 here).

$ mkdir app
$ touch app/app-service-envoy.json

// app/app-service-envoy.json
{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:80",
      "filters": [
        {
          "name": "http_connection_manager",
          "config": {
            "codec_type": "auto",
            "stat_prefix": "ingress_http",
            "route_config": {
              "virtual_hosts": [
                {
                  "name": "app",
                  "domains": [
                    "*"
                  ],
                  "routes": [
                    {
                      "timeout_ms": 0,
                      "prefix": "/",
                      "cluster": "local_service"
                    }
                  ]
                }
              ]
            },
            "filters": [
              {
                "name": "router",
                "config": {}
              }
            ]
          }
        }
      ]
    }
  ],
  "admin": {
    "access_log_path": "/dev/null",
    "address": "tcp://0.0.0.0:8001"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "local_service",
        "connect_timeout_ms": 250,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://127.0.0.1:8080"
          }
        ]
      }
    ]
  }
}

This sidecar configuration mirrors the frontend one, except that its only cluster, local_service, points to 127.0.0.1:8080, where the Flask application will be listening inside the same container. Now, let’s define our microservice:

# app/app.py
from flask import Flask
import settings
import datetime


def init():
    """ Create a Flask app. """
    server = Flask(__name__)

    return server

app = init()


@app.route('/')
def index():
    return "The datetime is {0}".format(datetime.datetime.now().isoformat())

if __name__ == "__main__":
    # Bind host/port and debug flag come from the settings module,
    # which reads them from the environment.
    app.run(
        host=settings.API_BIND_HOST,
        port=settings.API_BIND_PORT,
        debug=settings.DEBUG)
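
The settings module is not shown in the original listing. A minimal sketch that reads its values from the environment could look like the following; the variable names and defaults are assumptions, chosen so that the app binds to 127.0.0.1:8080 as the local_service cluster expects:

# app/settings.py -- hypothetical sketch, not from the original source
import os

# Defaults match the local_service cluster in app-service-envoy.json.
API_BIND_HOST = os.environ.get('API_BIND_HOST', '127.0.0.1')
API_BIND_PORT = int(os.environ.get('API_BIND_PORT', '8080'))
DEBUG = os.environ.get('DEBUG', 'false').lower() == 'true'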

Now let’s glue everything together with Docker. We will containerize our frontend and backend proxies and apply the Envoy configurations.

Create two Dockerfiles, one for the frontend and one for the backend:

$ touch gateway/Dockerfile
$ cat > gateway/Dockerfile <<'EOF'
    FROM envoyproxy/envoy:latest

    RUN apt-get update && apt-get -q install -y \
        curl

    CMD /usr/local/bin/envoy -c /etc/front-proxy-envoy.json --service-cluster front-proxy
EOF

$ touch app/Dockerfile
$ cat > app/Dockerfile <<'EOF'
    FROM envoyproxy/envoy:latest


    RUN apt-get update && apt-get -q install -y \
        curl \
        software-properties-common \
        python-software-properties
    RUN add-apt-repository ppa:deadsnakes/ppa
    RUN apt-get update && apt-get -q install -y \
        python3 \
        python3-pip
    RUN python3 --version && pip3 --version
    
    RUN mkdir /code
    COPY . /code
    WORKDIR /code
    
    RUN pip3 install -r ./requirements.txt
    ADD  ./start_service.sh /usr/local/bin/start_service.sh
    RUN chmod u+x /usr/local/bin/start_service.sh
    
    ENTRYPOINT /usr/local/bin/start_service.sh
EOF
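
The image also installs Python dependencies from requirements.txt, which is not shown in the original listing. At a minimum it would need to contain Flask (pinning a specific version is left to you):

# app/requirements.txt -- hypothetical minimal contents
Flask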

The file start_service.sh starts both the proxy and our Python app. The app is backgrounded with &, while Envoy runs as the container’s foreground process:

$ touch app/start_service.sh
$ cat > app/start_service.sh <<'EOF'
    #!/bin/bash
    set -xe
    python3 ./app.py & envoy -c /etc/app-service-envoy.json --service-cluster app
EOF

Now we need to build both Docker images and connect the containers together. There are several ways we can do this. My preferred method is using docker-compose. Here is an example configuration:

$ touch docker-compose.yml
$ cat > docker-compose.yml <<'EOF'
version: '2'
services:

  front-envoy:
    build:
      context: .
      dockerfile: gateway/Dockerfile
    volumes:
      - ./gateway/front-proxy-envoy.json:/etc/front-proxy-envoy.json
    networks:
      - envoymesh
    expose:
      - "80"
      - "8001"
    ports:
      - "8000:80"
      - "8001:8001"

  app:
    build:
      context: ./app
      dockerfile: Dockerfile
    volumes:
      - ./app/app-service-envoy.json:/etc/app-service-envoy.json
    networks:
      envoymesh:
        aliases:
          - app
    expose:
      - "80"

networks:
  envoymesh: {}

EOF

When everything is ready, we can test our setup:

$ docker-compose up --build -d
$ docker-compose ps
      Name                     Command               State      Ports
-------------------------------------------------------------------------------------------------------------
app_1            /bin/sh -c /usr/local/bin/ ...    Up       80/tcp
front-envoy_1    /bin/sh -c /usr/local/bin/ ...    Up       0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp

We can now send a request to our service via the front-envoy and it will be routed to our backend.

$ curl -X GET -v $(docker-machine ip default):8000
* Rebuilt URL to: 0.0.0.0:8000/
*   Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0.0.0.0 (0.0.0.0) port 8000 (#0)
> GET / HTTP/1.1
> Host: 0.0.0.0:8000
> User-Agent: curl/7.54.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 24
< server: envoy
< date: Fri, 2 Feb 2018 18:23:50 GMT
< x-envoy-upstream-service-time: 4
< 
* Connection #0 to host 0.0.0.0 left intact
The datetime is 2018-02-02T18:23:50.302492%

A request to port 8000 hits the frontend proxy, which forwards it to the backend Envoy sidecar listening on port 80. The sidecar in turn forwards the request to port 8080, where our Flask application is running. Notice that the Flask app is unaware of the service mesh; it only gets its configuration from the environment.
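
Because docker-compose also publishes the frontend proxy’s admin listener on port 8001, you can inspect Envoy’s built-in counters to confirm that the request really went through the proxy. This is a sketch; exact stat names can vary between Envoy versions:

$ curl -s $(docker-machine ip default):8001/stats | grep ingress_http

Among the output you should see counters such as http.ingress_http.downstream_rq_total increasing with each request.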

Conclusion

In this article we created an Envoy service proxy for a Flask application. We explored the basic concepts of a service mesh, and we learned details about the sidecar pattern.

A service mesh network scales really well, even when it grows to hundreds or thousands of microservices.

With tools like Envoy, it is possible to glue microservices together and add resilience patterns without the applications being aware of the underlying topology. That greatly improves development agility and reduces the issues that arise from coupling cross-cutting concerns with the application’s core functionality.

If you would like to learn more about Envoy and service mesh architectures, you may refer to the official guide that provides useful examples and API references for all the available configuration options. I hope this article was useful and interesting! Thanks for staying with me today, and until next time ...

Resources

https://www.envoyproxy.io/


Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto. Theo is a regular contributor at Fixate IO.

