Guides writing Terraform tests in .tftest.hcl: run blocks, assertions, provider mocks, module validation, plan/apply modes, and CI/CD pipelines.
Terraform's built-in testing framework validates that configuration updates don't introduce breaking changes. Tests run against temporary resources, protecting existing infrastructure and state files.
This skill includes three reference files:

- references/MOCK_PROVIDERS.md — Mock provider syntax, common defaults, and when to use mocks (Terraform 1.7.0+ only — skip if the user's version is below 1.7)
- references/CI_CD.md — GitHub Actions and GitLab CI pipeline examples
- references/EXAMPLES.md — Complete example test suite (unit, integration, and mock tests for a VPC module)

Read the relevant reference file when the user asks about mocking, CI/CD integration, or wants a full example.
Key concepts:

- Test file (`.tftest.hcl` / `.tftest.json`): contains run blocks that validate your configuration
- Command: `apply` (default, creates real resources) or `plan` (validates logic only)

A typical module layout:

```
my-module/
├── main.tf
├── variables.tf
├── outputs.tf
└── tests/
    ├── defaults_unit_test.tftest.hcl           # plan mode — fast, no resources
    ├── validation_unit_test.tftest.hcl         # plan mode
    └── full_stack_integration_test.tftest.hcl  # apply mode — creates real resources
```
Use `*_unit_test.tftest.hcl` for plan-mode tests and `*_integration_test.tftest.hcl` for apply-mode tests so they can be filtered separately in CI.
```hcl
# Optional: test-wide settings
test {
  parallel = true # Enable parallel execution for all run blocks (default: false)
}

# Optional: file-level variables (highest precedence, override all other sources)
variables {
  aws_region    = "us-west-2"
  instance_type = "t2.micro"
}

# Optional: provider configuration
provider "aws" {
  region = var.aws_region
}

# Required: at least one run block
run "test_default_configuration" {
  command = plan

  assert {
    condition     = aws_instance.example.instance_type == "t2.micro"
    error_message = "Instance type should be t2.micro by default"
  }
}
```
Anatomy of a run block:

```hcl
run "test_name" {
  command  = plan # or apply (default)
  parallel = true # optional, since v1.9.0

  # Override file-level variables
  variables {
    instance_type = "t3.large"
  }

  # Reference a specific module
  module {
    source  = "./modules/vpc" # local or registry only (not git/http)
    version = "5.0.0"         # registry modules only
  }

  # Control state isolation
  state_key = "shared_state" # since v1.9.0

  # Plan behavior
  plan_options {
    mode    = refresh-only # or normal (default)
    refresh = true
    replace = [aws_instance.example]
    target  = [aws_instance.example]
  }

  # Assertions
  assert {
    condition     = aws_instance.example.id != ""
    error_message = "Instance should have a valid ID"
  }

  # Expected failures (test passes if these fail)
  expect_failures = [
    var.instance_count
  ]
}
```
Testing output values:

```hcl
run "test_outputs" {
  command = plan

  assert {
    condition     = output.vpc_id != null
    error_message = "VPC ID output must be defined"
  }

  assert {
    condition     = can(regex("^vpc-", output.vpc_id))
    error_message = "VPC ID should start with 'vpc-'"
  }
}
```
Testing conditional resource creation:

```hcl
run "test_nat_gateway_disabled" {
  command = plan

  variables {
    create_nat_gateway = false
  }

  assert {
    condition     = length(aws_nat_gateway.main) == 0
    error_message = "NAT gateway should not be created when disabled"
  }
}
```
Testing resource counts:

```hcl
run "test_resource_count" {
  command = plan

  variables {
    instance_count = 3
  }

  assert {
    condition     = length(aws_instance.workers) == 3
    error_message = "Should create exactly 3 worker instances"
  }
}
```
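The `length()` assertion above presumes the module expands workers with `count`. A minimal sketch of such a module, assuming hypothetical names (the resource name `workers` and the placeholder AMI are illustrative only):

```hcl
variable "instance_count" {
  type        = number
  default     = 1
  description = "Number of worker instances to create"
}

# length(aws_instance.workers) in the test resolves against this
# count-expanded resource.
resource "aws_instance" "workers" {
  count         = var.instance_count
  ami           = "ami-0abcdef1234567890" # placeholder AMI for illustration
  instance_type = "t2.micro"
}
```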
Testing resource tags:

```hcl
run "test_resource_tags" {
  command = plan

  variables {
    common_tags = {
      Environment = "production"
      ManagedBy   = "Terraform"
    }
  }

  assert {
    condition     = aws_instance.example.tags["Environment"] == "production"
    error_message = "Environment tag should be set correctly"
  }

  assert {
    condition     = aws_instance.example.tags["ManagedBy"] == "Terraform"
    error_message = "ManagedBy tag should be set correctly"
  }
}
```
Testing data source lookups:

```hcl
run "test_data_source_lookup" {
  command = plan

  assert {
    condition     = data.aws_ami.ubuntu.id != ""
    error_message = "Should find a valid Ubuntu AMI"
  }

  assert {
    condition     = can(regex("^ami-", data.aws_ami.ubuntu.id))
    error_message = "AMI ID should be in correct format"
  }
}
```
Testing that invalid input is rejected — the run passes when validation of `var.environment` fails:

```hcl
run "test_invalid_environment" {
  command = plan

  variables {
    environment = "invalid"
  }

  expect_failures = [
    var.environment
  ]
}
```
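An `expect_failures` entry like `var.environment` only makes sense if the module declares a validation rule for that variable. A sketch of such a rule — the allowed values here are assumptions, not part of the original:

```hcl
variable "environment" {
  type = string

  # Hypothetical rule the expect_failures test exercises: any value outside
  # this list makes the plan fail validation, which the test expects.
  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}
```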
Chaining run blocks — later runs can reference a prior run's outputs via `run.<name>.<output>`:

```hcl
run "setup_vpc" {
  command = apply

  assert {
    condition     = output.vpc_id != ""
    error_message = "VPC should be created"
  }
}

run "test_subnet_in_vpc" {
  command = plan

  variables {
    vpc_id = run.setup_vpc.vpc_id
  }

  assert {
    condition     = aws_subnet.example.vpc_id == run.setup_vpc.vpc_id
    error_message = "Subnet should be in the VPC from setup_vpc"
  }
}
```
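A reference like `run.setup_vpc.vpc_id` resolves against the output values of the earlier run, so the module under test must expose that value as an output. A minimal sketch, assuming the VPC resource is named `main` (an assumption for illustration):

```hcl
# Required for run.setup_vpc.vpc_id to resolve in later run blocks.
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id # hypothetical resource name
}
```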
Refresh-only plans:

```hcl
run "test_refresh_only" {
  command = plan

  plan_options {
    mode = refresh-only
  }

  assert {
    condition     = aws_instance.example.tags["Environment"] == "production"
    error_message = "Tags should be refreshed correctly"
  }
}
```
Targeted plans:

```hcl
run "test_specific_resource" {
  command = plan

  plan_options {
    target = [aws_instance.example]
  }

  assert {
    condition     = aws_instance.example.instance_type == "t2.micro"
    error_message = "Targeted resource should be planned"
  }
}
```
Parallel runs against independent modules:

```hcl
run "test_networking_module" {
  command  = plan
  parallel = true

  module {
    source = "./modules/networking"
  }

  assert {
    condition     = output.vpc_id != ""
    error_message = "VPC should be created"
  }
}

run "test_compute_module" {
  command  = plan
  parallel = true

  module {
    source = "./modules/compute"
  }

  assert {
    condition     = output.instance_id != ""
    error_message = "Instance should be created"
  }
}
```
Sharing state between runs with `state_key`:

```hcl
run "create_foundation" {
  command   = apply
  state_key = "foundation"

  assert {
    condition     = aws_vpc.main.id != ""
    error_message = "Foundation VPC should be created"
  }
}

run "create_application" {
  command   = apply
  state_key = "foundation"

  variables {
    vpc_id = run.create_foundation.vpc_id
  }

  assert {
    condition     = aws_instance.app.vpc_id == run.create_foundation.vpc_id
    error_message = "Application should use foundation VPC"
  }
}
```
Cleanup order — resources are destroyed in reverse run block order:

```hcl
run "create_bucket" {
  command = apply

  assert {
    condition     = aws_s3_bucket.example.id != ""
    error_message = "Bucket should be created"
  }
}

run "add_objects" {
  command = apply

  assert {
    condition     = length(aws_s3_object.files) > 0
    error_message = "Objects should be added"
  }
}

# Cleanup destroys in reverse: objects first, then bucket
```
Provider aliases — select a specific provider configuration per run:

```hcl
provider "aws" {
  alias  = "primary"
  region = "us-west-2"
}

provider "aws" {
  alias  = "secondary"
  region = "us-east-1"
}

run "test_with_specific_provider" {
  command = plan

  providers = {
    aws = provider.aws.secondary
  }

  assert {
    condition     = aws_instance.example.availability_zone == "us-east-1a"
    error_message = "Instance should be in us-east-1 region"
  }
}
```
Complex assertions can iterate over collections with `alltrue` and `for` expressions:

```hcl
assert {
  condition = alltrue([
    for subnet in aws_subnet.private :
    can(regex("^10\\.0\\.", subnet.cidr_block))
  ])
  error_message = "All private subnets should use 10.0.0.0/8 CIDR range"
}
```
Resources are destroyed in reverse run block order after test completion. This matters for dependencies (e.g., S3 objects before the bucket that contains them). Use `terraform test -no-cleanup` to skip cleanup for debugging.
Running tests:

```shell
terraform test                                    # run all tests
terraform test -filter=tests/defaults.tftest.hcl  # run a specific test file (repeatable)
terraform test -test-directory=integration-tests  # use a custom test directory
terraform test -verbose                           # show the plan or state for each run
terraform test -no-cleanup                        # skip resource cleanup for debugging
```
Best practices:

- Name files `*_unit_test.tftest.hcl` for plan mode and `*_integration_test.tftest.hcl` for apply mode
- Default to `command = plan` unless you need to test real resource behavior
- Use mock providers for credential-free unit tests (see references/MOCK_PROVIDERS.md)
- Use `expect_failures` to verify validation rules reject bad inputs
- Set `parallel = true` for independent tests with different state files
- Use `-no-cleanup` for debugging
- Run tests in CI/CD (see references/CI_CD.md)

Troubleshooting:

| Issue | Solution |
|---|---|
| Assertion failures | Use `-verbose` to see actual vs expected values |
| Missing credentials | Use mock providers for unit tests |
| Unsupported module source | Convert git/HTTP sources to local modules |
| Tests interfering | Use `state_key` or separate modules for isolation |
| Slow tests | Prefer `command = plan` and mocks; run integration tests separately |
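For the missing-credentials case above, a mock provider (Terraform 1.7.0+) lets plan-mode tests run without real AWS access. A minimal sketch — the default values are arbitrary placeholders; see references/MOCK_PROVIDERS.md for the full syntax:

```hcl
# Replaces the real AWS provider for this test file; computed attributes
# not listed in defaults are filled with generated placeholder values.
mock_provider "aws" {
  mock_resource "aws_instance" {
    defaults = {
      id = "i-0123456789abcdef0" # arbitrary placeholder ID
    }
  }
}
```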