Setting up & using Terraform Cloud

We considered the two options currently available to us for storing our Terraform state file: AWS S3 buckets and Terraform Cloud. With an S3 bucket, we found that we would need a separate folder to configure the bucket, containing another main.tf file (different from the one used for our main resources). The S3 bucket resource itself would also need to be specified within this main.tf file.

A DynamoDB table is also needed to lock our state file, and that table requires additional configuration. Without locking, state file conflicts will occur when different team members run terraform plan or terraform apply at the same time.

To make sure our state file cannot be accidentally altered within AWS, we would also need to set some security restrictions, which means yet another code block in our configuration file. Finally, using AWS S3 buckets incurs ongoing service costs.
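For comparison, here is a rough sketch of what that S3 backend configuration might have looked like (the bucket, key and table names are placeholders, not our actual setup):

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # placeholder bucket name
    key            = "ec2/terraform.tfstate"    # placeholder path to the state file
    region         = "eu-west-2"
    dynamodb_table = "example-terraform-locks"  # placeholder DynamoDB table used for state locking
    encrypt        = true                       # keep the state file encrypted at rest
  }
}

The bucket and DynamoDB table themselves would still have to be created (and secured) separately, which is exactly the extra configuration we wanted to avoid.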

When taking a look at Terraform Cloud’s features in comparison to S3 buckets, we firstly found that we can use a free Terraform Cloud account. With Terraform Cloud, we will not need to create a separate configuration folder or main.tf file. All we need to do is configure Terraform Cloud as our backend within our Terraform block like so:

terraform {
  required_version = ">= 1.4.2"

  backend "remote" {
    organization = "the-sourceuk"
    hostname     = "app.terraform.io"

    workspaces {
      prefix = "EC2-"
    }
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}


Terraform Cloud locks the state file during runs so that team members are always working with the most up-to-date version. We also do not need to add security permissions to our state configuration, unlike with S3 buckets.

As a small team with minimal resources, an inexpensive option that saves time is always preferred, so Terraform Cloud was our choice for remote state file management. The documentation here has more detail on managing your remote state using Terraform Cloud. If your team is large and your Terraform usage requires advanced security restrictions, you might want to consider using S3 buckets instead. This link can help you get started on that option.

Setting up the AWS access

We previously shared how we set up our AWS account users here. Following the same process, we created a role (called ‘TSU-Terraform-Instances’) in our Preprod and Prod accounts. This is the role that will be assumed when Terraform actions are carried out. We also attached the relevant AWS-managed policies and trust relationship.

For ease of identification, we always use the naming convention ‘Organisation-Tool-Purpose‘. For example, in our TSU-Terraform-Instances role, ‘TSU’ (The Source UK) represents our organisation, ‘Terraform’ represents the tool that will be assuming the role, and ‘Instances’ represents the fact that this role will be used for creating instances. We highly recommend this model for naming your organisation’s roles, policies etc. – we’ve so far found it helpful!

For Terraform to be able to create our resources, we then set up a user called ‘Terraform-User’ in our security account. To this user we attached an inline policy (inline because it is specific to only this user) which grants permission to assume the TSU-Terraform-Instances role.
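We attached our inline policy through the AWS console, but for illustration, a rough Terraform equivalent would look something like this (the account ID below is a placeholder):

resource "aws_iam_user_policy" "assume_terraform_role" {
  name = "AssumeTSUTerraformInstances"  # illustrative policy name
  user = "Terraform-User"

  # Allow only this user to assume the role that actually creates our resources
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "sts:AssumeRole"
        Resource = "arn:aws:iam::111111111111:role/TSU-Terraform-Instances"  # placeholder account ID
      }
    ]
  })
}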

Following AWS’s model, our user also needs to be added to the role’s trust relationships. Doing this means that whenever our Terraform User makes an API request (for example to create an instance), the TSU-Terraform-Instances role can check that our user is permitted to assume it.
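Again, we configured this in the console, but a rough Terraform sketch of the trust relationship would look like the following (the account ID is a placeholder):

resource "aws_iam_role" "tsu_terraform_instances" {
  name = "TSU-Terraform-Instances"

  # Trust policy: only our Terraform-User in the security account may assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = "sts:AssumeRole"
        Principal = {
          AWS = "arn:aws:iam::222222222222:user/Terraform-User"  # placeholder security account ID
        }
      }
    ]
  })
}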

Terraform Cloud Workspaces

After creating our Terraform User, inline policy, and the required role on AWS, the next thing we addressed was ‘How do we logically separate our projects?’. In line with the DRY principle, we also wanted to avoid repeating code across projects so that they do not deviate from one another.

If we were using AWS to store our state, we would have needed to create separate directories with different configurations for each environment.

Instead of separate directories, Terraform Cloud uses workspaces to separate your environments. After setting up our Terraform Cloud account (you can follow this link to create an account), we created 2 workspaces: an EC2-PreProd workspace and an EC2-Production workspace:


These workspace names were chosen for consistency because they match the names of our AWS accounts.

Having these 2 separate Terraform Cloud workspaces means all of our backend Terraform code remains the same, except for specific variables which we have set in each workspace, such as the AWS account ID shown below:
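As a rough sketch of how a per-workspace variable like the account ID can be consumed (the variable name and provider arrangement here are illustrative rather than our exact configuration):

variable "aws_account_id" {
  description = "ID of the target AWS account, set per Terraform Cloud workspace"
  type        = string
}

provider "aws" {
  region = "eu-west-2"  # placeholder region

  # Assume the Terraform role in whichever account the workspace variable points at
  assume_role {
    role_arn = "arn:aws:iam::${var.aws_account_id}:role/TSU-Terraform-Instances"
  }
}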

To give you an example snippet of the infrastructure configured in our network.tf file: we wanted to create 2 NAT Gateways in 2 public subnets, one for each of 2 different private subnets, which looks like this:

resource "aws_eip" "nat" {
  count = length(var.public_subnet_cidr_blocks)

  vpc = true
}

resource "aws_nat_gateway" "my_nat" {
  depends_on = [aws_internet_gateway.my_ig]

  count = length(var.public_subnet_cidr_blocks)

  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}
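The var.public_subnet_cidr_blocks list that drives the count above is declared alongside our other variables; a sketch of that declaration (the CIDR ranges are placeholders):

variable "public_subnet_cidr_blocks" {
  description = "CIDR blocks for the public subnets; one NAT Gateway is created per block"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]  # placeholder ranges
}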

Deploying our code

We chose to link our GitHub account with Terraform Cloud for continuous integration and version control, and to prevent code deviation. See how to connect your GitHub account to Terraform Cloud here.

Once our code was pushed to GitHub, we clicked ‘New Run’ in the Terraform Cloud workspace console (EC2-PreProd in this example) and chose the ‘plan and apply’ run type. The ‘plan and apply’ run type means our infrastructure will be deployed straight to our AWS PreProd account.

As shown in the screenshot, you can choose another run type depending on your needs:

Later, we will also add an instance type workspace variable to both workspaces so we can have a smaller instance type (t2.micro) running in our AWS PreProd account and a larger instance (e.g. c3.large) in our Prod account.
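A sketch of how such a variable could be wired in (the variable and resource names are illustrative):

variable "instance_type" {
  description = "EC2 instance size, set per Terraform Cloud workspace"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = var.instance_type        # t2.micro in PreProd, a larger type in Prod
}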

We’ve touched on how we store variables unique to each workspace, but how about our global and sensitive variables?

Our AWS access and secret keys, for example, are sensitive credentials which we would like to use across all workspaces. Terraform Cloud variable sets are useful for this because they allow us to ensure our sensitive variables are never exposed and can be reused across all our workspaces.

We stored our AWS keys under Variable sets like below:
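One common pattern (and an assumption about naming here, rather than a record of our exact setup) is to store the keys in the variable set as sensitive environment variables named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The AWS provider reads these automatically, so no credentials ever appear in the code:

provider "aws" {
  region = "eu-west-2"  # placeholder region

  # No access_key/secret_key arguments here: the provider picks up
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment
  # variables supplied to the workspace by the variable set.
}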

In this blog, we’ve covered why we chose Terraform Cloud as our backend, how we use our Terraform Cloud workspaces (including for deploying our code to AWS), and how we store global, unique and sensitive variables.

Feel free to drop us a comment if you have any questions or suggestions on how to use Terraform Cloud as your backend.
