HashiCorp introduced the Terraform testing framework with version 1.6 of Terraform. This note captures my experience learning and adding test cases to an existing Terraform configuration using Azure DevOps Pipelines. Along with my notes, you'll also find references to helpful guides and YouTube videos. By the end of this note, I want you to feel confident about implementing the Terraform testing framework for your own Terraform configuration project.
Testing in Terraform is crucial to ensure that infrastructure as code (IaC) is reliable, consistent, and safe to deploy. It helps catch errors early, improves confidence in the code's functionality, and prevents issues from reaching production environments later. By leveraging the Terraform testing framework, cloud engineering teams can automate and enforce best practices while avoiding potentially expensive mistakes.
The Guides
I found the following two blogs and three YouTube videos sufficient to start using the Terraform testing framework.
Blogs: The best resource for understanding the philosophy behind the Terraform testing framework is the HashiCorp Product & Technology blog: Testing HashiCorp Terraform. It offers in-depth insights into the framework's testing capabilities. If you are short on time, bookmark it and revisit it later. Another excellent resource is Mattias's blog - A Comprehensive Guide to Testing in Terraform: Keep your tests, validations, checks, and policies in order.
YouTube Videos: Apart from the detailed blogs, there are three YouTube videos on the topic that I highly recommend: 1. Using the Terraform Test Framework by Ned Bellavance 2. Practical and inexpensive ways to test infrastructure deployments using Terraform by Gabe Maentz 3. Automating Tests for Terraform by KZ Li
Empowered by what I learned from these guides, I added the Terraform testing framework to the GitHub: terraform-aws-vpc module. The high-level sections are listed below.
Set-Up:
Installer: Since HashiCorp introduced the Terraform testing framework with version 1.6, I first upgraded the Terraform version on my local machine and then updated the azure-pipelines.yaml file to install a version higher than that as part of the job.
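As a reference, here is a minimal sketch of such an installation step, assuming a Linux-based agent; the exact version number, tool directory, and step layout are illustrative assumptions rather than the repository's actual pipeline:

```yaml
steps:
  # Hypothetical installation step: downloads a Terraform release >= 1.6
  # and puts it on the PATH for subsequent steps in the job.
  - script: |
      TF_VERSION=1.6.6   # illustrative version; any release >= 1.6.0 supports `terraform test`
      curl -sSLo terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
      unzip -o terraform.zip -d "$(Agent.ToolsDirectory)/terraform"
      echo "##vso[task.prependpath]$(Agent.ToolsDirectory)/terraform"
    displayName: Install Terraform
```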
Tests Folder: The Terraform testing framework, by default, expects the test cases to be in the tests folder when the terraform test command is run. Hence, I created that folder to store the .tftest.hcl files. If, for specific reasons, the tests folder cannot live at the root of your configuration, the terraform test command requires an additional flag, -test-directory=path-to-test-directory, as shown below.
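For example (the custom path below is purely illustrative):

```shell
# Run tests from the default ./tests directory
terraform test

# Run tests from a non-default location (hypothetical path)
terraform test -test-directory=custom/terraform-tests
```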
Add Tests:
Test configurations are written in HCL, and multiple .tftest.hcl files can exist in the tests folder. These test files are executed alphabetically by filename. Each test file contains a provider block, a variables block, and one or more run blocks; the reference guides above explain the purpose of each of these constructs. At its core, the Terraform testing framework uses the provider information, applies the variable values, and then runs a terraform plan or terraform apply command, depending on the value of command in each run block. It then uses assert {} blocks to check whether the value of a variable or attribute in the Terraform plan or the provisioned infrastructure matches what is expected.
For example, the test shown in the image above uses two assert {} blocks to check whether the Amazon VPC was provisioned correctly (a comparable sketch follows below). Because the run block specifies command = apply, the Amazon VPC is actually created in a specific AWS account. If an Amazon VPC exists, it has an ID and an ARN, and the two assert {} blocks check for exactly that. You can find several testing constructs in the tests folder of the GitHub repository.
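Since the image is not reproduced here, the following sketch shows what such a test file might look like; the file name, the resource address aws_vpc.this, and the variable names are assumptions, not the repository's exact test code:

```hcl
# tests/vpc.tftest.hcl (hypothetical) -- provisions the VPC and checks its ID and ARN.

provider "aws" {
  region = "us-east-1" # illustrative region
}

variables {
  name       = "test-vpc"    # assumed input variable
  cidr_block = "10.0.0.0/16" # assumed input variable
}

run "vpc_is_created" {
  # `command = apply` provisions real resources in the test AWS account.
  command = apply

  assert {
    condition     = aws_vpc.this.id != ""
    error_message = "Expected the provisioned VPC to have an ID."
  }

  assert {
    condition     = aws_vpc.this.arn != ""
    error_message = "Expected the provisioned VPC to have an ARN."
  }
}
```

Note that when terraform test finishes, it automatically destroys any resources it applied, which keeps the test account clean.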
Automate with Azure Pipelines:
Automated testing allows cloud engineering teams to validate Terraform configurations continuously, ensuring that infrastructure changes are safe and aligned with desired outcomes before deployment. By integrating testing into the CI/CD pipeline, teams can catch issues earlier, leading to faster, more reliable releases while maintaining the integrity of their infrastructure code. After adding the test cases, I automated the Terraform tests via Azure Pipelines. The Azure Pipelines YAML consists of three jobs: Validate, Test, and Provision.
The first job (Validate) runs terraform init, terraform validate, and terraform fmt to ensure that the Terraform configuration initializes correctly (setting up the remote backend), contains no syntax errors, and follows a consistent, standardized code style.
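A minimal sketch of such a job might look like this; the job and step names are assumptions, and backend and credential configuration are omitted:

```yaml
- job: Validate
  displayName: Validate Terraform Configuration
  steps:
    # Installation step (as sketched earlier) omitted for brevity.
    - script: terraform init -input=false
      displayName: Terraform Init
    - script: terraform validate
      displayName: Terraform Validate
    - script: terraform fmt -check -recursive
      displayName: Terraform Fmt Check
```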
The second job (Test) runs in parallel with the first job. I used a separate AWS account to test the Terraform configuration, so I could be sure there wouldn't be any resource conflicts between the two jobs. I added the new job: Test to the existing Azure Pipelines YAML.
As you can see in the image above, the job has only two steps, not counting the installation step; a sketch of the job follows below. The test cases stored in the tests folder at the root were executed alphabetically by filename, and within each test file, run blocks were executed sequentially in the order they appeared.
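The job in the image is not reproduced here, but a Test job along these lines might look like the following sketch; the job name matches the text above, while credential handling for the separate test AWS account is only hinted at in comments:

```yaml
- job: Test
  displayName: Test Terraform Configuration
  dependsOn: []   # no dependency on Validate, so the two jobs run in parallel
  steps:
    # Installation step (as sketched earlier) omitted for brevity.
    # AWS credentials for the isolated test account would be supplied
    # via pipeline variables or a service connection (not shown).
    - script: terraform init -input=false
      displayName: Terraform Init
    - script: terraform test
      displayName: Terraform Test
```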
Here is an image of the test results in the Azure DevOps Pipeline logs. Please choose the latest run → Jobs → Test Terraform Configuration.
As you can see, all 30 tests, spread across the four .tftest.hcl files, passed.
Finally, the last job (Provision) created all the AWS cloud resources using the terraform apply command, provided the branch was main. If you are interested in learning how to use Terraform with an Azure DevOps YAML pipeline, please refer to azure-pipelines-yaml-and-terraform. I added a condition to the job: Provision so that it runs only when the previous two jobs pass, as sketched below.
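A sketch of that gating might look like this; the job names are assumed to match the ones described above:

```yaml
- job: Provision
  displayName: Provision AWS Resources
  dependsOn:
    - Validate
    - Test
  # Run only when both upstream jobs succeed and the branch is main.
  condition: and(succeeded('Validate'), succeeded('Test'), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  steps:
    # Installation and init steps omitted for brevity.
    - script: terraform apply -auto-approve -input=false
      displayName: Terraform Apply
```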
Using the above steps, I enabled test cases for the GitHub: terraform-aws-vpc module using Azure DevOps YAML pipelines. While creating this solution, I converged on several best practices for this use case.
Best Practices with Terraform Test Cases
Below are some best practices I implemented while writing Terraform test cases that provision AWS cloud resources:
1. Provision Cloud Resources in Separate AWS Accounts: To ensure that the test cases do not interfere with production or other environments (e.g., Dev or Test), it is necessary to provision the test cloud resources in isolated AWS accounts. This segregation minimizes the risk of impacting the live environment during testing.
2. Automate Terraform Test Runs (via Pipeline): Automating Terraform test executions through a CI/CD pipeline is essential. This ensures that the test cases run consistently and pass before any Terraform configurations are applied to a live environment in an AWS account (terraform apply). Automation helps catch issues early in the process, reducing human error.
3. Include a run block with the apply command: Include at least one run block with command = apply in your test cases, as sketched after this list. This ensures that you explicitly verify the behavior of provisioning cloud resources, avoiding surprises when deploying resources to Dev, Test, or Production environments.
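As an illustration of the difference between plan-only and apply-based checks, here is a short sketch; the resource and variable names are assumptions carried over from the earlier example:

```hcl
# A plan-only run catches configuration errors quickly and cheaply...
run "check_planned_cidr" {
  command = plan

  assert {
    condition     = aws_vpc.this.cidr_block == var.cidr_block
    error_message = "Planned VPC CIDR block does not match the input variable."
  }
}

# ...while an apply run confirms the resources can actually be provisioned.
run "create_vpc" {
  command = apply

  assert {
    condition     = aws_vpc.this.id != ""
    error_message = "Expected the provisioned VPC to have an ID."
  }
}
```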
In conclusion, the Terraform testing framework empowers cloud engineering teams to ensure the reliability and safety of their Terraform configurations. I hope this note provided valuable insights and encourages you to explore the Terraform testing framework further. If you have any questions or suggestions, please use the comment section below. Given that this is a relatively new concept, your input on a few open questions would be particularly valuable:
1. Currently, I use a dedicated job: Test to execute the Terraform tests. Would a separate pipeline be more beneficial, or does it make sense to integrate the tests within an Azure Pipelines job that also handles configuration deployment?
2. Should the tests run only before the terraform apply step? Or would it be prudent to run the tests in all these scenarios?
Your feedback will be insightful as I refine my understanding of implementing the Terraform testing framework.