Infrastructure as Code with Terraform — project structure, remote state, modules, workspace strategy, AWS/GCP patterns, CI/CD integration, and security hardening. The standard for managing production infrastructure.
Infrastructure as Code: describe desired state → Terraform plans the diff → apply. Never manually change infrastructure that Terraform manages.
Use when a terraform plan shows unexpected resource replacements or drift.

infrastructure/
├── environments/
│   ├── dev/
│   │   ├── main.tf             # environment-specific resource composition
│   │   ├── variables.tf
│   │   ├── terraform.tfvars    # non-secret values for dev
│   │   └── outputs.tf
│   ├── staging/
│   └── prod/
├── modules/
│   ├── vpc/                    # reusable VPC module
│   ├── ecs-service/            # reusable ECS service module
│   ├── rds-postgres/           # reusable RDS module
│   └── cdn/                    # CloudFront + S3 origin
└── shared/
    └── backend.tf              # shared state backend config
Rule: Every environment is an isolated Terraform root. modules/ are reusable, parameterized building blocks. Never put resources directly in modules/ — only in environments/.
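As a sketch of what an isolated environment root looks like, a dev root might compose modules like this (the module input names are assumptions based on the modules shown later in this document):

```hcl
# environments/dev/main.tf — illustrative composition
provider "aws" {
  region = var.region
}

module "vpc" {
  source      = "../../modules/vpc"
  environment = "dev"
}

module "api" {
  source             = "../../modules/ecs-service"
  service_name       = "myapp-api-dev"
  private_subnet_ids = module.vpc.private_subnet_ids
  # remaining inputs per modules/ecs-service/variables.tf
}
```

Each environment directory gets its own state, its own plan/apply cycle, and its own blast radius.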
State must be remote + locked. Local state in git = team conflicts and drift.
# infrastructure/environments/prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock" # prevents concurrent applies
  }
}
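If the root previously used local state, the first init after adding the backend block needs to migrate it. A typical command sequence (run against your own environment; Terraform prompts before copying state):

```shell
# First init after adding the backend block — Terraform offers to copy
# existing local state into the S3 bucket:
terraform init -migrate-state

# Subsequent runs just initialize against the remote backend:
terraform init
terraform plan
```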
# infrastructure/bootstrap/main.tf — apply this FIRST, before anything else
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-company-terraform-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration { status = "Enabled" }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default { sse_algorithm = "AES256" }
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
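The state bucket is the one resource you can least afford to lose or expose. One way to harden the bootstrap root (a sketch, added alongside the resources above):

```hcl
# Block all public access on the state bucket — state files contain
# secrets and resource identifiers.
resource "aws_s3_bucket_public_access_block" "tf_state" {
  bucket                  = aws_s3_bucket.tf_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Also consider adding to the bucket resource itself:
#   lifecycle { prevent_destroy = true }
# so a stray destroy cannot take the state bucket with it.
```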
# modules/rds-postgres/main.tf
variable "identifier" { type = string }
variable "instance_class" {
  type    = string
  default = "db.t4g.micro"
}
variable "allocated_storage" {
  type    = number
  default = 20
}
variable "db_name" { type = string }
variable "username" { type = string }
variable "password" {
  type      = string
  sensitive = true
}
variable "subnet_ids" { type = list(string) }
variable "vpc_id" { type = string }
variable "ingress_sg_ids" { type = list(string) }
variable "environment" { type = string }
resource "aws_db_subnet_group" "this" {
  name       = "${var.identifier}-subnet-group"
  subnet_ids = var.subnet_ids
}

resource "aws_security_group" "rds" {
  name   = "${var.identifier}-rds-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = var.ingress_sg_ids
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_db_instance" "this" {
  identifier              = var.identifier
  engine                  = "postgres"
  engine_version          = "16"
  instance_class          = var.instance_class
  allocated_storage       = var.allocated_storage
  storage_encrypted       = true # always encrypt
  db_name                 = var.db_name
  username                = var.username
  password                = var.password
  db_subnet_group_name    = aws_db_subnet_group.this.name
  vpc_security_group_ids  = [aws_security_group.rds.id]
  backup_retention_period = 7 # 7-day PITR
  deletion_protection     = var.environment == "prod"
  skip_final_snapshot     = var.environment != "prod"
  multi_az                = var.environment == "prod"
  publicly_accessible     = false # never public

  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
output "endpoint" { value = aws_db_instance.this.endpoint }
output "sg_id" { value = aws_security_group.rds.id }
# environments/prod/main.tf
module "postgres" {
  source         = "../../modules/rds-postgres"
  identifier     = "myapp-prod"
  instance_class = "db.t4g.medium"
  db_name        = "myapp"
  username       = "myapp"
  password       = var.db_password # injected at apply time or from a secret manager
  subnet_ids     = module.vpc.private_subnet_ids
  vpc_id         = module.vpc.vpc_id
  ingress_sg_ids = [module.api.security_group_id]
  environment    = "prod"
}
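When one root needs outputs from another (say, a shared networking root) without duplicating lookups, the `terraform_remote_state` data source reads the other root's state. A sketch — the `network/terraform.tfstate` key and the `private_subnet_ids` output are assumptions:

```hcl
# Read outputs from another Terraform root via its remote state.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-company-terraform-state"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Then reference, e.g.:
#   data.terraform_remote_state.network.outputs.private_subnet_ids
```

This grants read access to the entire remote state file, so some teams prefer narrowly-scoped data sources (e.g. subnet lookups by tag) instead.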
# variables.tf
variable "environment" { type = string }
variable "region" {
  type    = string
  default = "eu-west-1"
}
variable "db_password" {
  type      = string
  sensitive = true # redacted in plan output and logs (still stored in state)
}
# terraform.tfvars (commit this — non-secret values only)
environment = "prod"
region = "eu-west-1"
Secrets strategy:
# Option A: Pass at apply time (CI/CD)
terraform apply -var="db_password=$DB_PASSWORD"
# Option B: AWS Secrets Manager — fetch in Terraform
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "myapp/prod/db-password"
}
# Then: jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)["password"]
# Option C: Vault provider for HashiCorp Vault
NEVER put secrets in terraform.tfvars or hardcode them in .tf files.
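Option B wired into the module call from earlier might look like this (a sketch; it assumes the secret is a JSON object with a `password` key, as in the jsondecode line above):

```hcl
# Fetch the secret at plan/apply time and pass it straight to the module —
# it never touches tfvars or the .tf files.
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "myapp/prod/db-password"
}

module "postgres" {
  source   = "../../modules/rds-postgres"
  # ... other inputs as shown above ...
  password = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)["password"]
}
```

Note the value still ends up in the state file, which is another reason the state bucket must be encrypted and access-controlled.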
# Workspaces are for lightweight environment separation
# within the SAME backend config (same bucket, different state key)
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
terraform workspace select prod
# Reference current workspace in resources
resource "aws_s3_bucket" "uploads" {
  bucket = "myapp-${terraform.workspace}-uploads"
}
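A common companion pattern is a workspace-keyed lookup map, so per-environment sizing lives in one place instead of scattered conditionals (the instance classes here are illustrative):

```hcl
# Per-workspace values in a single map; indexing fails fast if the
# current workspace has no entry.
locals {
  db_instance_class = {
    dev     = "db.t4g.micro"
    staging = "db.t4g.small"
    prod    = "db.t4g.medium"
  }[terraform.workspace]
}

# Usage:
#   instance_class = local.db_instance_class
```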
When to use workspaces vs. separate directories: workspaces fit environments that are structurally identical and differ only in a few values — same backend, same providers, same module composition. Separate directories fit environments that diverge: different resources, provider versions, state backends, or access permissions. Prod usually deserves its own directory so its state and credentials can be isolated.
# modules/ecs-service/main.tf
data "aws_region" "current" {} # required by the log configuration below

resource "aws_ecs_task_definition" "this" {
  family                   = var.service_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.cpu
  memory                   = var.memory
  execution_role_arn       = aws_iam_role.execution.arn
  task_role_arn            = aws_iam_role.task.arn

  container_definitions = jsonencode([{
    name         = var.service_name
    image        = var.container_image
    essential    = true
    portMappings = [{ containerPort = var.port, protocol = "tcp" }]
    environment  = [for k, v in var.env_vars : { name = k, value = v }]
    secrets      = [for k, v in var.secrets : { name = k, valueFrom = v }]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = "/ecs/${var.service_name}"
        "awslogs-region"        = data.aws_region.current.name
        "awslogs-stream-prefix" = "ecs"
      }
    }
  }])
}
resource "aws_ecs_service" "this" {
  name            = var.service_name
  cluster         = var.cluster_arn
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = var.desired_count
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.private_subnet_ids
    security_groups  = [aws_security_group.service.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.this.arn
    container_name   = var.service_name
    container_port   = var.port
  }

  deployment_circuit_breaker {
    enable   = true
    rollback = true # auto-rollback on failed deployment
  }

  lifecycle {
    ignore_changes = [desired_count] # let autoscaling manage count
  }
}
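The `ignore_changes = [desired_count]` lifecycle rule pairs with Application Auto Scaling owning the replica count. A sketch of that wiring — `var.cluster_name`, the capacity bounds, and the 60% target are assumptions, not part of the module above (note the `resource_id` needs the cluster *name*, not its ARN):

```hcl
# Let Application Auto Scaling manage desired_count between bounds.
resource "aws_appautoscaling_target" "this" {
  service_namespace  = "ecs"
  resource_id        = "service/${var.cluster_name}/${aws_ecs_service.this.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

# Track average CPU at 60%: scale out above it, in below it.
resource "aws_appautoscaling_policy" "cpu" {
  name               = "${var.service_name}-cpu"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.this.service_namespace
  resource_id        = aws_appautoscaling_target.this.resource_id
  scalable_dimension = aws_appautoscaling_target.this.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 60
  }
}
```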
# Task role: what the container can DO
resource "aws_iam_role_policy" "task" {
  name = "${var.service_name}-task-policy"
  role = aws_iam_role.task.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "${var.uploads_bucket_arn}/*"
      },
      {
        Effect   = "Allow"
        Action   = ["secretsmanager:GetSecretValue"]
        Resource = var.db_secret_arn
      }
    ]
  })
}
# Never use AdministratorAccess or * resources on task roles
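The execution role is distinct from the task role: it is what the ECS agent needs to *start* the container (pull the image, write logs, inject secrets), not what the running container can do. AWS ships a managed policy for the baseline, so the minimal attachment is typically:

```hcl
# Execution role baseline: ECR pull + CloudWatch Logs writes.
resource "aws_iam_role_policy_attachment" "execution" {
  role       = aws_iam_role.execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
```

If the task definition injects Secrets Manager values, the execution role (not the task role) also needs `secretsmanager:GetSecretValue` on those specific secret ARNs.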
For CI/CD integration, terraform import, common commands, anti-patterns, and IaC tool selection, see the terraform-ci skill.