Serverless is a cloud computing execution model in which the cloud provider dynamically manages the allocation and provisioning of servers. Unlike Docker containers and container cluster technologies such as Kubernetes and Swarm, which still require knowledge of containers and container clusters, Serverless lets us focus on the core business logic of our project and delegates the infrastructure to our cloud vendor. Importantly, Serverless also promises high scalability and availability for an application: containers are initialized automatically, and as many function instances run as are needed to accommodate increasing requests. In this article, we will walk through an end-to-end workflow of automating a CI/CD process for a Python RESTful service with AWS API Gateway and Lambda.
Specifically, in this tutorial, we will do the following:
1. Explain what we’re building
2. Configure a simple serverless hello-world (using the serverless framework to bootstrap the process).
- Configure AWS credentials for serverless
- Install serverless and build the simple serverless Hello World Lambda
3. Add a simple RESTful web service with AWS API Gateway and Lambda.
4. Configure our service for logging errors for easy debugging.
5. Add unit testing to our serverless API.
6. Build our CI/CD using Travis CI.
What we are building
We are going to build a CI/CD pipeline by deploying a simple Python REST API with AWS Lambda. We will configure Lambda functions to handle our application’s logic. Specifically, our application implements two Lambdas: (1) one that randomly returns words from a list of words, and (2) one that returns “hello, world.” Below is a depiction of our serverless application architecture; note, however, that a database service such as Amazon DynamoDB will not be used in this tutorial.
Serverless application architecture (source: medium.com)
- All our project assets (project code and dependencies) will be stored on Amazon S3.
- When a client makes a call to any endpoint (essentially a Lambda), AWS API Gateway intercepts it.
- AWS API Gateway then passes the request to the Lambda function (where our business logic resides) that is configured to handle it.
- The Lambda function may retrieve data from Amazon DynamoDB and send back a JSON response (this part will not be implemented in this tutorial).
Let’s get started, shall we?
Configuring our simple serverless “Hello World”
To start building our simple serverless application, we need to have the serverless framework installed to bootstrap the process of creating a serverless service. The serverless framework will automatically create a serverless.yml file with commented configurations and a default serverless hello-world endpoint that we can build upon (rather than doing all this ourselves). However, the serverless framework needs access to our cloud provider’s account (AWS), so we will first create an IAM user for the serverless framework and assign it the required policies and credentials so that it can create and manage resources on our behalf. (Following security best practices, this user will be given just the permissions it needs, to limit its capabilities.)
- Configuring AWS credentials for Serverless
(i) Make sure you have an AWS account, with your credit card details properly set up, or else you won’t be able to access AWS resources (your account needs to be verified).
(ii) Log in to your AWS account, and navigate to your Identity and Access Management (IAM) page.
IAM page
(iii) Click on Users (highlighted in yellow above) and then Add user. Enter a user name in the first field that will remind you what this user is for (mine is serverless-cli). Enable Programmatic access by clicking the checkbox. Click Next to go to the Permissions page. Click on Create Policy, select the JSON tab, and paste the following JSON data into it (delete the services you don’t use from the JSON data).
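Your policy will vary, but as a rough sketch (the statement name and wildcard actions below are illustrative assumptions, so tighten them to just what you need), it could look something like this:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ServerlessFrameworkDeploy",
          "Effect": "Allow",
          "Action": [
            "cloudformation:*",
            "s3:*",
            "lambda:*",
            "apigateway:*",
            "logs:*",
            "iam:GetRole",
            "iam:CreateRole",
            "iam:PutRolePolicy",
            "iam:DeleteRolePolicy",
            "iam:DeleteRole",
            "iam:PassRole"
          ],
          "Resource": "*"
        }
      ]
    }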
Read more on configuring an IAM user here.
(iv) Download the API key and secret key to your computer.
- Installing the serverless framework
(i) If you have Node.js installed, run npm install -g serverless to install the serverless framework.
(ii) To configure serverless with your AWS credentials, run:
serverless config credentials --provider aws --key <AWS_API_KEY> --secret <AWS_SECRET_KEY>
We can now start using serverless with our cloud provider.
- Creating our simple serverless Hello-World
Run the command sls create --template aws-python to generate a Python serverless boilerplate (sls is short for serverless).
Results from running sls create --template aws-python
Folder structure from serverless command
In the serverless.yml file, let’s add a minimal setup for our service. Change it to look like this:
Minimal service configuration
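Here is a rough sketch of what that minimal serverless.yml could look like (the Python runtime version is an assumption; the region matches the us-east-1 endpoint we deploy to later):

    # serverless.yml -- minimal sketch of the service configuration
    service: sls-rest-api

    provider:
      name: aws
      runtime: python3.7   # assumed runtime version
      region: us-east-1

    functions:
      hello:
        handler: handler.hello
        events:
          - http: ANY /    # accept any HTTP method on the root path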
We have changed our service name to sls-rest-api and configured a single Lambda function with a broad HTTP event, so that it matches and accepts incoming HTTP requests on its path.
Our Lambda function is the default hello function from the serverless framework. It simply returns JSON data with event metadata and a message.
Hello Lambda function
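For reference, the default handler generated by the aws-python template looks roughly like this:

    # handler.py -- default hello handler from the aws-python template (paraphrased)
    import json


    def hello(event, context):
        # "event" carries the API Gateway request metadata
        body = {
            "message": "Go Serverless v1.0! Your function executed successfully!",
            "input": event,
        }

        # Lambda proxy integration expects a statusCode and a JSON string body
        return {
            "statusCode": 200,
            "body": json.dumps(body),
        }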
And now, with the simple command sls deploy --stage dev, our application is deployed and highly available to handle HTTP requests.
Output from deploying our service
You can navigate to the endpoint URL: https://keyqhny41l.execute-api.us-east-1.amazonaws.com/dev
And view the output in your browser:
Response from “hello” Lambda
Create a GitHub repository and push your project to GitHub. We won’t use AWS CodeCommit (a Git service provided by AWS) for our CI/CD in this tutorial, but feel free to explore it.
Building a Flask REST API with AWS API Gateway and Lambda
One difficulty with the serverless architecture is incorporating third-party dependencies into your project; in this regard, a traditional architecture may be easier to work with. To start building our REST API endpoints with Flask, we need two plugins: serverless-python-requirements to handle our third-party dependencies (e.g. Flask) on deployment, and serverless-wsgi to translate AWS API Gateway events into the WSGI requests Flask expects.
We can install these plugins with Serverless from the root directory of our project:
sls plugin install -n serverless-python-requirements
sls plugin install -n serverless-wsgi
Serverless will install these plugins and add them to the project’s package.json and to the plugins section of serverless.yml. During deployment, serverless-python-requirements will install all dependencies specified in the requirements.txt file of our project. So let’s create this file in the root directory of our project and add all the third-party dependencies our project relies on, such as the Flask web framework.
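As a sketch, the requirements.txt only needs Flask at this point (the exact version pin is an assumption):

    # requirements.txt
    Flask==1.1.1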
Now, let’s restructure our Hello endpoint, and again, add a configuration in our serverless.yml file.
Serverless.yml configuration (plugins were added automatically by serverless command)
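Here is a sketch of the configuration at this stage (the runtime version is an assumption; the wsgi settings use the serverless-wsgi plugin’s custom section):

    # serverless.yml -- sketch of the Flask/WSGI setup
    service: sls-rest-api

    provider:
      name: aws
      runtime: python3.7   # assumed runtime version
      region: us-east-1

    plugins:
      - serverless-python-requirements
      - serverless-wsgi

    custom:
      wsgi:
        app: handler.app        # the Flask app object created in handler.py

    functions:
      hello:
        handler: wsgi.handler   # provided by serverless-wsgi at deploy time
        events:
          - http: ANY /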
We have added a function whose handler is wsgi.handler (the handler function in the wsgi module), which we didn’t create ourselves. This module is added by the serverless-wsgi plugin during deployment; it translates API Gateway HTTP events into requests our Flask application can process. We have also configured our application’s entry point to be the app object we instantiated in our handler.py file (don’t confuse the names). The app object then routes requests to our various endpoints (such as ‘/’ for the hello() Lambda).
Hello Lambda function
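A minimal sketch of the restructured handler.py (the exact response text is an assumption):

    # handler.py -- Flask app whose routes act as our Lambda endpoints (sketch)
    from flask import Flask

    app = Flask(__name__)


    @app.route("/")
    def hello():
        # Returned as the HTTP response body with a 200 status code
        return "Hello, world"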
Configure our service for logging errors for easy debugging
Before proceeding to add another Lambda to randomly return words from a list of words, let’s make our application easier to debug by configuring our service to log errors. You can also configure your service for local development if you prefer (we won’t cover that in this tutorial).
(i) Navigate to the AWS API Gateway console: https://console.aws.amazon.com/apigateway
(ii) You should have deployed at least once before configuring the error log. Under the API section, select your application. Under your application, select Stages and click on the stage you wish to configure to log errors (dev in my case).
(iii) In the stage editor, under the Logs/Tracing tab, check all options under the CloudWatch Settings, and click the Save button.
Configuring API Gateway for error logging
Now, accessing any endpoint in our application will log the application’s metadata or errors to our CloudWatch console, under the Logs section. So let’s access our application endpoint and view the metadata or errors (if we have any).
Error logging in CloudWatch
We could also configure each endpoint or Lambda to log errors in a separate log group. This can be done in the serverless.yml file.
Adding more endpoints
We will now add the Lambda function that randomly returns a sentence built from a list of words each time it is invoked:
Final project code
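Here is a sketch of what the final handler.py could look like (the word list and the /status route are assumptions):

    # handler.py -- final sketch: two routes backed by the same Flask app
    import random

    from flask import Flask

    app = Flask(__name__)

    # Hypothetical word list used to build a random sentence
    WORDS = ["serverless", "python", "flask", "lambda", "gateway", "rocks"]


    @app.route("/")
    def hello():
        return "Hello, world"


    @app.route("/status")
    def get_status():
        # Pick a few words at random and join them into a sentence
        return " ".join(random.choices(WORDS, k=4)) + "."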
Finally, in our functions section of the serverless.yml file, we will configure these endpoints as Lambdas to handle specific requests:
Specifying Lambdas to handle requests
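As a sketch, the functions section could point each endpoint at wsgi.handler (the paths and function names are assumptions):

    # functions section of serverless.yml (sketch)
    functions:
      hello:
        handler: wsgi.handler
        events:
          - http:
              path: /
              method: get
      getStatus:
        handler: wsgi.handler
        events:
          - http:
              path: /status
              method: get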
Let’s deploy our functions and try to test the endpoints using our browser, since all our endpoints use GET requests.
Deploy works successfully
From the browser, our Lambdas return a response successfully.
Response successfully generated from getStatus Lambda
Output from hello Lambda
Our application is working pretty well. Commit all changes and push to GitHub.
Creating automated unit tests
CI/CD requires that essentially everything in your application’s delivery process be automated, so we’ll implement two tests for our Lambdas. We will use the unittest module and create two test cases for our application. Since no external AWS service is integrated into our project, we can test our application easily.
The first test case ensures that our hello Lambda returns a “Hello, world” string and a status code of 200. The second test case ensures that the getStatus Lambda returns a non-empty string of text and a status code of 200 as well.
Test cases for Lambdas
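A sketch of the test module, using unittest together with Flask’s built-in test client (the file name and routes are assumptions):

    # test_handler.py -- unit tests for the two Lambdas (sketch)
    import unittest

    from handler import app


    class TestLambdas(unittest.TestCase):
        def setUp(self):
            # Flask's test client lets us call routes without API Gateway
            self.client = app.test_client()

        def test_hello_returns_greeting(self):
            response = self.client.get("/")
            self.assertEqual(response.status_code, 200)
            self.assertIn(b"Hello, world", response.data)

        def test_get_status_returns_text(self):
            response = self.client.get("/status")
            self.assertEqual(response.status_code, 200)
            self.assertTrue(len(response.data) > 0)


    if __name__ == "__main__":
        unittest.main()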
Running our tests locally shows that they all pass:
Running tests locally
Finally, we’ll look at CI/CD for our serverless application with Travis CI.
CI/CD with Travis CI
We will use Travis CI to orchestrate the build process, and then run automated tests. When all tests pass, our serverless application will be deployed to AWS.
(i) You can quickly create an account on Travis CI through GitHub, and Travis will display all your public repositories.
(ii) The Travis CI configuration is specified in a .travis.yml file in the root directory of your project. So create a .travis.yml file and add the following configuration to it:
.travis.yml configuration
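Here is a sketch of what the .travis.yml could contain (the Python version and the exact deploy step are assumptions):

    # .travis.yml -- sketch; versions and the deploy step are assumptions
    language: python
    python:
      - "3.7"
    install:
      - pip install -r requirements.txt
      - npm install -g serverless
      - npm install             # pulls the serverless plugins from package.json
    script:
      - python -m unittest discover
    deploy:
      provider: script
      script: sls deploy --stage dev
      skip_cleanup: true
      on:
        branch: master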
In the install phase of each build, Travis will install all dependencies listed in our requirements.txt file, along with the other dependencies in the install section. It will then run our test suite and, if all tests pass, deploy our serverless app to AWS.
Commit all changes and push to GitHub.
(iii) Go to your repositories in your Travis CI account, and click the slider next to your serverless application repository to activate it for build.
(iv) Now, configure Travis CI with your serverless user’s AWS credentials so it can deploy to AWS, just as we deployed from our local computer with sls deploy. To do this, navigate to the settings of your serverless application repository, and under the Environment Variables section, add your API key and secret key (serverless reads them from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables):
Setting AWS credentials on Travis CI
(v) Now commit your changes and push to GitHub to trigger a build.
Log from Travis CI build
The above image shows a successful build and deployment.
Our pipeline is working. Any time you push a commit, Travis will pick up your project from GitHub, run all the automated tests, and deploy to AWS. Software development couldn’t be any easier!
Access the final version of the project source code from GitHub.
References
https://hackernoon.com/what-is-serverless-architecture-what-are-its-pros-and-cons-cc4b804022e9
https://serverless.com/blog/flask-python-rest-api-serverless-lambda-dynamodb/
http://joshuaballoch.github.io/testing-lambda-functions/
https://serverless.com/framework/docs/providers/aws/guide/credentials/
https://blog.morizyun.com/python/library-boto3-aws-dynamodb.html
https://kennbrodhagen.net/2016/07/23/how-to-enable-logging-for-api-gateway/