Understanding Terraform Workspaces
Requirements
For this post, we will use Terraform 0.12.25
and AWS. You will also need an S3 bucket that your AWS credentials can access, to store your state files.
What are workspaces
Workspaces are relevant in a very particular use case: a team that has to maintain several environments sharing the same infrastructure. Without workspaces, we would often create one folder per target environment and replicate the code in each of them. This way of handling IaC is not wrong, and it lets each environment diverge from the initial configuration if needed. But if you want to keep the exact same infrastructure across a fixed set of environments, workspaces are the way to go.
How do they work
Workspaces allow you to use the same code base across multiple environments. An environment can be an AWS region, an AWS account, a GCP project, etc. This feature relies heavily on variables to deploy the same code to a selected set of environments: one workspace corresponds to one environment.
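Inside your configuration, the current workspace name is exposed as terraform.workspace, which you can use to vary settings per environment. As a minimal sketch (the file name and map values here are hypothetical, not part of the setup below), you could look up a per-environment setting in a map:

```hcl
# locals.tf - hypothetical example of per-workspace settings
locals {
  # One entry per workspace; terraform.workspace selects the right one.
  instance_type_per_env = {
    staging = "t3.micro"
    prod    = "t3.large"
  }

  instance_type = local.instance_type_per_env[terraform.workspace]
}
```

This pattern keeps every environment's value in one place, at the cost of editing the map whenever you add a workspace; the variable-file approach used in this post is the more common alternative.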
How to use them
Presenting the context
Let's try this feature out. We will work with two environments hosted in the same AWS account:
$ tree
.
├── backend.tf
├── provider.tf
├── vars
│ ├── prod.tfvars
│ └── staging.tfvars
├── vars.tf
└── vpcs.tf
As you can see, we will have the prod and staging environments to deploy our code. This code also contains three variable files:
- vars.tf: This file hosts the variables shared by the code in general; it is used across all workspaces. For example, a variable declared here could change the instance type of your EC2 instances in all your environments.
- vars/prod.tfvars: The variable values stored in this file are applied in the production environment.
- vars/staging.tfvars: Same as the previous file, but for staging.
You can use the following files to try on your side:
### backend.tf
terraform {
  backend "s3" {
    bucket               = "terraform-backend-test"
    key                  = "terraform.tfstate"
    region               = "eu-west-1"
    workspace_key_prefix = ""
  }
}
### provider.tf
provider "aws" {
  version                 = "~> 2.0"
  region                  = "eu-west-1"
  shared_credentials_file = "/home/jlops/.aws/creds"
}
### prod.tfvars
vpc_cidr = "10.0.0.0/16"
### staging.tfvars
vpc_cidr = "10.1.0.0/16"
### vars.tf
variable "vpc_cidr" {
  description = "vpc 10.x.x.x/16 specific cidr"
}
### vpcs.tf
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = terraform.workspace
  }
}
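Beyond tags, terraform.workspace can also drive conditional logic, so one code base can still size environments differently. As a hedged sketch (this resource is hypothetical and not part of this post's setup), you could create a resource only in prod:

```hcl
# Hypothetical example: allocate an Elastic IP only in the prod workspace.
resource "aws_eip" "nat" {
  # Conditional expression on the current workspace name:
  # one instance in prod, none elsewhere.
  count = terraform.workspace == "prod" ? 1 : 0
  vpc   = true
}
```

Use this sparingly: the more conditionals you add, the further you drift from the "same infrastructure everywhere" premise that makes workspaces attractive.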
Let's take a look at the backend file. We host our backend on S3 via a bucket. The only thing that differs from a regular backend is the workspace_key_prefix
argument. It controls the prefix Terraform prepends to the state file path for every workspace other than default (if you omit it, Terraform uses env: as the prefix). Since we leave it empty, Terraform will use only the names of the workspaces you will create later to prefix the path of the state files stored in S3.
This means that the path to your state file inside of the S3 bucket will look something like this for the staging workspace:
https://terraform-backend-test.s3.eu-west-1.amazonaws.com/staging/terraform.tfstate
The default workspace's state, on the other hand, is stored directly at the configured key, without any prefix.
Creating and Using your workspaces
First, you will need to initialize Terraform:
$ terraform init
Then create both workspaces using those commands:
$ terraform workspace new staging
$ terraform workspace new prod
You can now check if they show up, and which one is currently selected, using:
$ terraform workspace list
It is now possible to use them and apply the usual routine to create your infrastructure. Select a workspace to let Terraform know which environment should be impacted by your changes:
$ terraform workspace select staging
$ terraform fmt
However, Terraform cannot know by itself which variable file it should use for each environment. That is why you should pass the -var-file
flag with the path of the variable file when you plan, apply, or destroy your code, as follows:
$ terraform plan -var-file=vars/staging.tfvars
Add better visibility in your shell
If you use zsh
with ohmyzsh
as your main shell for everyday tasks, you can add a plugin to your .zshrc
file that will display which workspace you are currently in inside your Terraform project.
To add this plugin, open your .zshrc
file and add terraform
to the following line (be sure to leave a space between each plugin in the list; you should already have the git
plugin enabled):
plugins=(git terraform)
Then open the theme file linked to your zsh
shell, for me .oh-my-zsh/themes/robbyrussell.zsh-theme
, and add the following at the end of the value assigned to the PROMPT+=
line:
%{$fg_bold[white]%}$(tf_prompt_info)%{$reset_color%}$
You should now see the current workspace directly in your prompt.