Terraform: Cloud made easy



Terraform, by HashiCorp, is an automation tool that can build, change, and version cloud and on-premises infrastructure simply and effectively. Terraform manages your infrastructure as code. Terraform is your organization's ticket into the cloud.

Resources: Repository | Release Notes

This blog provides a view into the best practices to follow when setting up your Terraform repository, including both the how and the why. It is paramount to set up your Terraform repository in a way that allows orchestration into many different cloud providers, enables separate (and therefore safe) management of compute and networking infrastructure, and gives you an extremely reusable setup. Done well, it fully unlocks the magic of Terraform: a tool that can build a platform of hundreds of nodes configured to your desired state in minutes, while also giving you a control panel for maintaining that infrastructure into the future.

Directory Structure
From here on, this is a step-by-step overview of how a Terraform repository should be structured and why. The base of the Terraform repository contains sub-directories named after the cloud providers you wish to deploy into using Terraform.


Within each of the cloud provider sub-directories you would then have two further sub-directories: environments and modules.
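As a sketch of the layout described above and in the sections that follow (assuming AWS as one example provider; the module names here are illustrative placeholders, not from the original repository):

```
aws/
├── environments/
│   ├── dev/
│   │   ├── compute/
│   │   └── networking/
│   ├── test/
│   ├── preprod/
│   └── prod/
└── modules/
    ├── demo_node/
    └── demo_vpc/
```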


Modules are your reusable pieces of code for both compute and networking. They have variable values abstracted from them completely to allow maximum reusability.

A typical compute node module would consist of the following three files:

${module_name}.tf representing your main class, inclusive of the actual instance resource declarations and any resources related to them (security groups, for example)
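A hedged sketch of what such a main class might contain, for a hypothetical module named demo_node (the resource names, ports, and attributes are illustrative assumptions, not taken from the post's repository):

```hcl
# modules/demo_node/demo_node.tf — the module's main class:
# the instance resources plus the resources related to them.

# A security group tied to the instances it protects.
resource "aws_security_group" "demo_node" {
  name   = "${var.environment}-demo-node"
  vpc_id = "${var.vpc_id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }
}

# The compute nodes themselves; every value is a variable,
# abstracted out so the module stays fully reusable.
resource "aws_instance" "demo_node" {
  count                  = "${var.node_count}"
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.subnet_id}"
  vpc_security_group_ids = ["${aws_security_group.demo_node.id}"]

  tags {
    Name = "${var.environment}-demo-node-${count.index}"
  }
}
```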


output.tf declaring the outputs that will be provided by the module, representing variables that can be accessed by other modules called within an environment.
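For the hypothetical demo_node module above, output.tf might look like this (output names are illustrative):

```hcl
# modules/demo_node/output.tf — values exposed to callers of the module.
output "instance_ids" {
  value = ["${aws_instance.demo_node.*.id}"]
}

output "private_ips" {
  value = ["${aws_instance.demo_node.*.private_ip}"]
}
```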

variables.tf declaring all of the variables to be passed into the modular resources. This file is responsible for pulling through the environment-specific variable values that are passed to the module, ready to be used.
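Continuing the hypothetical demo_node module, variables.tf simply declares every value the module expects, optionally with defaults:

```hcl
# modules/demo_node/variables.tf — every value is supplied by the
# calling environment; the module hard-codes nothing.
variable "environment" {}
variable "ami" {}
variable "vpc_id" {}
variable "subnet_id" {}

variable "instance_type" {
  default = "t2.micro"
}

variable "node_count" {
  default = 1
}
```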

If a module consists of a large number of resources, it is logical to split the main class into separate .tf files representing those resources; output.tf and variables.tf would remain. A typical networking module repeats the structure of a compute node module, with the same three important files, unless it too consists of a large number of resource declarations.

Now for the organization-specific setup: environments. This setup forms an example set of environments: dev, test, preprod and prod.

Each environment should have separate subdirectories for compute and networking.

This is because the compute and networking sub-directories each have their own unique Terraform statefile (terraform.tfstate), meaning that they are managed by Terraform separately and are decoupled from each other. In other words, compute can be created, destroyed and recreated a thousand times on top of networking infrastructure that was only created once.

Storing this terraform.tfstate file is critical to the management of your infrastructure from the point of creation, so the approach you take to storing it is a critical decision that is dependent on the organizational use case.
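One common option (an illustrative sketch, not the post's prescribed approach; the bucket name and region are placeholders and assume an existing S3 bucket) is to keep each statefile in remote storage rather than on disk:

```hcl
# environments/dev/compute/backend.tf — one way to store this
# directory's terraform.tfstate remotely, using the S3 backend.
terraform {
  backend "s3" {
    bucket = "my-org-terraform-state"
    key    = "dev/compute/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

Because compute and networking each point at a different key, their statefiles stay separate, preserving the decoupling described above.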

A compute subdirectory of an environment consists of a series of .tf classes that reference the modules to be used in this environment (in this case, demo_nodes.tf), including the code that passes tailored variables through to those modules. Running 'terraform apply' from this directory would create the resources detailed; running 'terraform destroy' would destroy them. But before running 'terraform apply', a 'terraform get' must be executed in order to pull through the required modules.
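A hedged sketch of what such an environment-level class might contain, wiring environment variables into the hypothetical demo_node module from earlier (the relative path and variable names are illustrative):

```hcl
# environments/dev/compute/demo_nodes.tf — references the reusable
# module and passes the environment's tailored variables through.
module "demo_nodes" {
  source = "../../../modules/demo_node"

  environment   = "dev"
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"
  node_count    = "${var.node_count}"
  vpc_id        = "${var.vpc_id}"
  subnet_id     = "${var.subnet_id}"
}
```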


The terraform.tfvars file is where the hard-coded variables for the environment should be entered: the number of replications of a module you desire, the AMIs you wish a module to be built on, or the VPC to place your compute into, as examples of compute variables.
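For the hypothetical dev compute environment sketched above, terraform.tfvars might look like this (all IDs are placeholder values):

```hcl
# environments/dev/compute/terraform.tfvars — hard-coded values
# for this environment only; other environments have their own.
ami           = "ami-0123456789abcdef0"
instance_type = "t2.micro"
node_count    = 3
vpc_id        = "vpc-11111111"
subnet_id     = "subnet-22222222"
```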

It's also worth noting that Terraform picks up Terraform-specific environment variables, so hard-coding values into terraform.tfvars may not be needed. For a variable called 'x', running 'export TF_VAR_x=1234' means Terraform will now use the value 1234 for the variable 'x'.
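That convention looks like this in practice: any declared variable can be set from the shell by prefixing its name with TF_VAR_.

```shell
# Set the Terraform variable "x" without touching terraform.tfvars.
export TF_VAR_x=1234

# Any terraform command run in this shell now sees x = 1234.
echo "$TF_VAR_x"   # prints 1234
```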

Networking environments have exactly the same setup as described for compute; the only difference is the content of the .tf classes and the variables that are passed.
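As a hedged illustration of that difference, the main class of a hypothetical networking module might declare VPC and subnet resources instead of instances (names and CIDR variables are assumptions):

```hcl
# modules/demo_vpc/demo_vpc.tf — a networking module's main class;
# same three-file structure as a compute module, different resources.
resource "aws_vpc" "demo" {
  cidr_block = "${var.cidr_block}"

  tags {
    Name = "${var.environment}-demo-vpc"
  }
}

resource "aws_subnet" "demo" {
  vpc_id     = "${aws_vpc.demo.id}"
  cidr_block = "${var.subnet_cidr_block}"
}
```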

This directory has its own terraform.tfstate file, meaning that the directory itself is the control panel for manipulating the dev networking infrastructure. Dev compute will also have its own terraform.tfstate, and this should be replicated across all environments.


That’s it for Part 1, but remember to take a look at the repository, read the READMEs and follow the demos to build some resources into AWS. Next up: Basic Compute in Microsoft Azure, Google Cloud & Rackspace.


Jordan Taylor is a DevOps Practitioner. His goal is to learn every DevOps tool and technology, developing an arsenal of knowledge that covers every aspect of the DevOps space. With a specialization in automation, configuration management, cloud orchestration and CI/CD, Jordan is always looking to implement forward-thinking ideas that result in ultimate efficiency and value, while up-skilling and enabling those around him in the technologies used to innovate. Jordan's current favorite tools are Terraform, Docker and Vault.

