This post is a success story of one imaginary news portal, and you’re the happy owner, the editor, and the only developer. Luckily, you already host your project code on GitLab.com, and know that you can run tests with GitLab CI. Now you’re curious if it can be used for deployment, and how far you can go with it.
To keep our story technology stack-agnostic, let’s assume that the app is just a set of HTML files. No server-side code, no fancy JS assets compilation.
The destination platform is also simplistic: we'll use Amazon S3.
The goal of this article is not to give you a bunch of copy-pasteable snippets. The goal is to show the principles and features of GitLab CI so that you can easily apply them to your technology stack.
Let’s start from the beginning: There’s no CI in our story yet.
A Starting Point
Deployment, in your case, means that a bunch of HTML files should appear in your S3 bucket (which is already configured for static website hosting).
There are a million ways to do it. We’ll use the awscli library, provided by Amazon.
The full command looks like this:
```shell
aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```
Pushing code to the repository & deploying are separate processes.
Important detail: The command expects you to provide the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Also, you might need to specify the AWS_DEFAULT_REGION.
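Locally, you'd provide these as environment variables before running the command. A minimal sketch; the values below are AWS's documentation placeholders, not real keys:

```shell
# Placeholder credentials from the AWS documentation; substitute your own.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"
```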
Let’s try to automate it using GitLab CI.
First Automated Deployment
With GitLab, there's no difference in what commands to run. You can set up GitLab CI according to your needs as if it were the local terminal on your computer. As long as you can execute a command locally, you can tell CI to run the same command for you in GitLab. Put your script into .gitlab-ci.yml and push your code. That's it: CI triggers a job and your commands are executed.
Let's add some context to our story: Our website is small. There are 20-30 daily visitors, and the code repository has only one branch: master.
Let’s start by specifying a job with the command from above in .gitlab-ci.yml:
```yaml
deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```
No luck: the build failed with /bin/bash: line 47: aws: command not found.
It’s our job to ensure that there is an aws executable. To install awscli we need pip, which is a tool for Python packages installation. Let’s specify the Docker image with Python preinstalled, which should contain pip as well:
```yaml
deploy:
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```
You push your code to GitLab, and it is automatically deployed by CI.
The installation of awscli extends the job execution time, but it’s not a big deal for now. If you need to speed up the process, you can always look for a Docker image with awscli preinstalled, or create an image yourself.
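Such a custom image could be as small as two lines. A hypothetical sketch (the python:latest base is an assumption, chosen to match the image used in the jobs above):

```dockerfile
# Sketch: an image with awscli preinstalled,
# so CI jobs can skip the `pip install awscli` step.
FROM python:latest
RUN pip install awscli
```

Build it once, push it to a registry, and reference it in the `image:` key of your job.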
And let’s not forget about these environment variables, which you’ve just grabbed from the AWS Console:
```yaml
variables:
  AWS_ACCESS_KEY_ID: "AKIAIOSFODNN7EXAMPLE"
  AWS_SECRET_ACCESS_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

deploy:
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```
It should work. However, keeping secret keys in the open, even in a private repository, is not a good idea. Let's see how we can deal with that.
Keeping Secret Things Secret
GitLab has a special place for secret variables: Settings > Variables
Whatever you put there will be turned into environment variables. Only an administrator of a project has access to this section.
We could remove the variables section from our CI configuration. However, let’s use it for another purpose.
Specifying and Using Non-Secret Variables
When your configuration gets bigger, it’s convenient to keep some of the parameters as variables at the beginning of your configuration. Especially if you use them in multiple places. Although it is not the case in our situation yet, let’s set the S3 bucket name as a variable for demonstration purposes:
```yaml
variables:
  S3_BUCKET_NAME: "yourbucket"

deploy:
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp ./ s3://$S3_BUCKET_NAME/ --recursive --exclude "*" --include "*.html"
```
So far so good:
Because the audience of your website has grown, you’ve hired a developer to help you. Now you have a team. Let’s see how teamwork changes the workflow.
Dealing with Teamwork
There are now two of you working in the same repository. It is no longer convenient to use the master branch for development. You decide to use separate branches for both new features and new articles, and merge them into the master when they are ready.
The problem is that your current CI config doesn’t care about branches at all. Whenever you push anything to GitLab, it will be deployed to S3.
Preventing this is straightforward. Just add only: master to your deploy job.
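In .gitlab-ci.yml, that restriction looks like this (the rest of the job stays as before):

```yaml
deploy:
  # ...image and script as before...
  only:
    - master
```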
You don’t want to deploy every branch to the production website.
It would also be nice to preview your changes from feature branches somehow.
Setting Up a Separate Place for Testing
Patrick (the guy you recently hired) reminds you that there is a thing called GitLab Pages. It looks like a perfect candidate for a place to preview your work in progress.
To host websites on GitLab Pages, your CI configuration should satisfy these simple rules:
- The job should be named pages.
- There should be an artifacts section with a public folder in it.
- Everything you want to host should be in this public folder.
The contents of the public folder will be hosted at a GitLab Pages URL derived from your username and project name.
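Since the pages job's script is plain shell, you can rehearse it locally before committing. A minimal sketch; the sample index.html is made up for illustration:

```shell
# Simulate the repository contents with one sample page (hypothetical file).
echo '<h1>Hello</h1>' > index.html

# The two steps the pages job will run:
mkdir -p ./public
cp ./*.html ./public/
```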
After applying the example config for plain html websites, the full CI configuration looks like this:
```yaml
variables:
  S3_BUCKET_NAME: "yourbucket"

deploy:
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp ./ s3://$S3_BUCKET_NAME/ --recursive --exclude "*" --include "*.html"
  only:
    - master

pages:
  image: alpine:latest
  script:
    - mkdir -p ./public
    - cp ./*.html ./public/
  artifacts:
    paths:
      - public
  except:
    - master
```
We specified two jobs. One job deploys the website for your customers to S3 (deploy). The other one (pages) deploys the website to GitLab Pages. We can name them “production environment” and “staging environment,” respectively.
All branches, except the master, will be deployed to GitLab Pages.
Introducing Environments
GitLab offers support for environments, and all you need to do is specify the corresponding environment for each deployment job:
```yaml
variables:
  S3_BUCKET_NAME: "yourbucket"

deploy to production:
  environment: production
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp ./ s3://$S3_BUCKET_NAME/ --recursive --exclude "*" --include "*.html"
  only:
    - master

pages:
  image: alpine:latest
  environment: staging
  script:
    - mkdir -p ./public
    - cp ./*.html ./public/
  artifacts:
    paths:
      - public
  except:
    - master
```
GitLab keeps track of your deployments, so you always know what is currently being deployed on your servers:
GitLab provides a full history of your deployments for every environment:
Now, with everything automated and set up, we’re ready for new challenges that are just around the corner.
Dealing with Teamwork, Part 2
It has just happened again. You've pushed your feature branch to preview it in Staging; a minute later, Patrick pushed his branch, and staging was overwritten with his work. (Aargh!! The third time today!)
Here’s an idea! Let’s use Slack to notify us of deployments so that people will not push their stuff if another one has just been deployed!
Slack Notifications
Setting up Slack notifications is a straightforward process.
The whole idea is to take the Incoming WebHook URL from Slack…
…and put it into Settings > Services > Slack together with your Slack username:
Since the only thing you want to be notified of is deployments, you can uncheck all the checkboxes except “Build” in the settings above. That’s it! Now you’ll be notified of every deployment:
Teamwork at Scale
As time passes, your website becomes really popular, and your team grows from 2 to 8 people. People develop in parallel, so the situation where people wait for each other to preview something in Staging has become pretty common. And “Deploy every branch to Staging” has stopped working.
It's time to modify the process one more time. You and your team agree that if someone wants to see their changes on the staging server, they should first merge the changes into the "staging" branch.
The change to .gitlab-ci.yml is minimal:
```yaml
except:
  - master
```

is now changed to:

```yaml
only:
  - staging
```
People have to merge their feature branches before the staging preview.
Of course, additional time and effort is required for merging, but everyone agrees that it is better than waiting.
Handling Emergencies
You can't control everything. Sometimes things go wrong. Someone merged branches incorrectly and pushed the result straight to production exactly when your site was at the top of Hacker News. Thousands of people saw your completely broken layout instead of your shiny main page.
Luckily, someone found the Rollback button, so the website is fixed a minute after the problem was discovered.
Rollback relaunches the previous job with the previous commit.
You feel that you need to react to the problem and decide to turn off “auto deployment to production” and switch to manual deployment. To do that, you need to add when: manual to your job.
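The resulting job needs only one extra line (the rest stays as before):

```yaml
deploy to production:
  # ...environment, image, and script as before...
  only:
    - master
  when: manual
```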
As you expect, automatic deployment to Production is stopped. To deploy manually, go to Pipelines > Builds, and click the button:
Finally, your company has turned into a corporation. You have hundreds of people working on the website, so all previous compromises are no longer working.
Review Apps
The next logical step is to boot up a temporary instance of the application per feature branch for review.
In our case, we set up another bucket on S3 for that. The only difference is that we copy the contents of our website to a folder named after the development branch, so that each branch is previewable at its own URL: the bucket's website address followed by the branch name.
Here’s the replacement for the pages job we used before:
```yaml
review apps:
  variables:
    S3_BUCKET_NAME: "reviewbucket"
  image: python:latest
  environment: review
  script:
    - pip install awscli
    - mkdir -p ./$CI_BUILD_REF_NAME
    - cp ./*.html ./$CI_BUILD_REF_NAME/
    - aws s3 cp ./ s3://$S3_BUCKET_NAME/ --recursive --exclude "*" --include "*.html"
```
The interesting thing is where we got this $CI_BUILD_REF_NAME variable from. GitLab predefines many environment variables so that you can use them in your jobs.
Note that we defined the S3_BUCKET_NAME variable inside the job. You can do this to override top-level definitions.
Visual representation of this configuration:
The details of the Review Apps implementation depend heavily on your real technology stack and your deployment process, which are out of the scope of this blog post.
It will not be as straightforward as our static HTML website. For example, you'd have to make these instances temporary, and booting them up with all the required software and services automatically on the fly is not a trivial task. However, it is doable, especially if you use Docker, or at least Chef or Ansible.
We’ll cover deployment with Docker in another blog post. To be fair, I feel a bit guilty for simplifying the deployment process to a simple HTML file copying, and not adding some hardcore scenarios. If you need some right now, I recommend you read the article “Building an Elixir Release into a Docker image using GitLab CI.”
For now, let’s talk about one final thing.
Deploying to Different Platforms
In real life, we are not limited to S3 and GitLab Pages. We host, and therefore deploy, our apps and packages to various services.
Moreover, at some point, you could decide to move to a new platform, and thus may need to rewrite all your deployment scripts. You can use a gem called dpl to minimize the damage.
In the examples above, we used awscli as a tool to deliver code to an example service (Amazon S3). However, no matter what tool and what destination system you use, the principle is the same: you run a command with some parameters and somehow pass a secret key for authentication purposes.
The dpl deployment tool utilizes this principle and provides a unified interface for a long list of providers.
Here’s how a production deployment job would look if we use dpl:
```yaml
variables:
  S3_BUCKET_NAME: "yourbucket"

deploy to production:
  environment: production
  image: ruby:latest
  script:
    - gem install dpl
    - dpl --provider=s3 --bucket=$S3_BUCKET_NAME
  only:
    - master
```
If you deploy two different systems or change destination platforms frequently, consider using dpl to make your deployment scripts look uniform.
Summary
- Deployment is just a command (or a set of commands) that is regularly executed. Therefore, it can run inside GitLab CI.
- Most often, you’ll need to provide some secret key(s) to the command you execute. Store these secret keys in Settings > Variables.
- With GitLab CI, you can have more flexibility in how you specify which branches to deploy to.
- If you deploy to multiple environments, GitLab will conserve the history of deployments, which gives you the ability to roll back to any previous version.
- For critical parts of your infrastructure, you can enable manual deployment from the GitLab interface, instead of automated deployment.