Terraform / OpenTofu pattern library — HCL idioms (resources, data sources, variables with validation, outputs, locals, dynamic blocks, for_each vs count, conditional resources), module design (versioned via Git tag or Terraform Registry, root vs child modules, composition over inheritance, README + variables + outputs + versions.tf structure), state management (S3 + DynamoDB lock for AWS, GCS for GCP, Azure RM for Azure, Terraform Cloud workspaces, state surgery with terraform state mv/rm/import), workspaces vs directory-per-environment trade-offs, Terragrunt for DRY multi-env wrappers, Atlantis for PR-driven workflows, OPA / Conftest / Sentinel policy-as-code, terraform-docs for auto README generation, drift detection with regular plans, refactoring with moved blocks (Terraform 1.1+) and import blocks (1.5+), and the Business Source License vs OpenTofu MPL-2.0 fork debate. Use when scaffolding new infrastructure-as-code, refactoring legacy CloudFormation / ARM templates to Terraform, designing module hierarchies for multi-team setups, or hardening an existing tf codebase. Complements the `terraform-engineer` agent in this plugin which provides architectural guidance.
```sh
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-workflows
```

This skill uses the workspace's default tool permissions.
This skill covers the **concrete patterns** for building a production-grade Terraform / OpenTofu codebase. It complements the `terraform-engineer` agent by providing ready-to-paste recipes.
MCP server available: this plugin declares `terraform` in `.mcp.json` (HashiCorp Docker MCP). Claude Code can introspect Terraform Cloud / Enterprise workspaces directly.
```
infrastructure/
├── modules/                  # Local reusable modules
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── versions.tf
│   │   └── README.md
│   └── eks-cluster/
├── envs/
│   ├── dev/
│   │   ├── backend.tf
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   └── prod/
└── .github/workflows/
    └── terraform.yml
```

Rule: `modules/` holds the reusable code; `envs/<env>/` is the instance that calls the modules with its own variables.
```hcl
variable "environment" {
  type        = string
  description = "Deployment environment (dev, staging, prod)"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}

variable "instance_count" {
  type        = number
  default     = 1
  description = "Number of EC2 instances to launch"

  validation {
    condition     = var.instance_count >= 1 && var.instance_count <= 10
    error_message = "instance_count must be between 1 and 10."
  }
}
```
```hcl
locals {
  name_prefix = "${var.project}-${var.environment}"

  common_tags = {
    Project     = var.project
    Environment = var.environment
    ManagedBy   = "terraform"
    Owner       = var.owner
  }

  is_prod       = var.environment == "prod"
  instance_type = local.is_prod ? "m5.large" : "t3.micro"
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = local.instance_type

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-web"
  })
}
```
**for_each vs count**

```hcl
# count: for plain quantities
resource "aws_instance" "workers" {
  count         = var.worker_count
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.medium"
  tags          = { Name = "worker-${count.index}" }
}

# for_each: for named sets (PREFERRED when applicable)
resource "aws_instance" "workers" {
  for_each      = toset(["api", "worker", "scheduler"])
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.medium"
  tags          = { Name = each.key }
}

# for_each with a map (better still)
locals {
  workers = {
    api       = { instance_type = "t3.medium", disk_size = 50 }
    worker    = { instance_type = "t3.large", disk_size = 100 }
    scheduler = { instance_type = "t3.small", disk_size = 30 }
  }
}

resource "aws_instance" "workers" {
  for_each      = local.workers
  ami           = data.aws_ami.amazon_linux.id
  instance_type = each.value.instance_type

  root_block_device {
    volume_size = each.value.disk_size
  }

  tags = { Name = each.key }
}
```

Advantage of `for_each`: adding or removing one element does not affect the others (with `count`, removing worker `[1]` shifts every following index → forced recreation).
```hcl
resource "aws_security_group" "web" {
  name = "${local.name_prefix}-web"

  dynamic "ingress" {
    for_each = var.allowed_ports
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
      description = ingress.value.description
    }
  }

  tags = local.common_tags
}
```
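The dynamic block above iterates over a list of objects; a matching variable declaration (a sketch of the shape the block assumes, with an illustrative default) looks like:

```hcl
variable "allowed_ports" {
  type = list(object({
    port        = number
    protocol    = string
    cidr_blocks = list(string)
    description = string
  }))
  default = [
    { port = 443, protocol = "tcp", cidr_blocks = ["0.0.0.0/0"], description = "HTTPS" },
  ]
}
```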
```hcl
# envs/prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "company-tf-state-prod"
    key            = "infrastructure/main.tfstate"
    region         = "eu-west-3"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```
Initial backend setup:

```hcl
# bootstrap/main.tf — apply ONCE before everything else, with a local state
resource "aws_s3_bucket" "tf_state" {
  bucket = "company-tf-state-prod"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration { status = "Enabled" }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```
```hcl
terraform {
  backend "gcs" {
    bucket = "company-tf-state-prod"
    prefix = "infrastructure"
  }
}
```

GCS has native state locking (Terraform writes a lock object in the bucket) — no DynamoDB equivalent needed.
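The skill summary also lists the Azure RM backend; for completeness, a minimal `azurerm` backend block (resource names here are illustrative placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tf-state"
    storage_account_name = "companytfstate"
    container_name       = "tfstate"
    key                  = "infrastructure/prod.tfstate"
  }
}
```

Azure Blob Storage provides native locking via blob leases — no extra lock table needed here either.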
```
modules/vpc/
├── main.tf       # Main resources
├── variables.tf  # Inputs
├── outputs.tf    # Outputs
├── versions.tf   # required_providers + required_version
└── README.md     # Auto-generated docs via terraform-docs
```
```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```
```hcl
# envs/prod/main.tf
module "vpc" {
  source = "git::https://github.com/myorg/terraform-modules.git//vpc?ref=v1.2.3"

  cidr_block = "10.0.0.0/16"
  azs        = ["eu-west-3a", "eu-west-3b", "eu-west-3c"]
  tags       = local.common_tags
}
```

Always pin by tag (`?ref=v1.2.3`), never `?ref=main`, which can break at any time.
```hcl
# modules/vpc/outputs.tf
output "vpc_id" {
  value       = aws_vpc.this.id
  description = "ID of the VPC"
}

output "private_subnet_ids" {
  value       = aws_subnet.private[*].id
  description = "IDs of the private subnets"
}

output "public_subnet_ids" {
  value       = aws_subnet.public[*].id
  description = "IDs of the public subnets"
}
```
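A calling configuration consumes these outputs through the module reference — for example, wiring the VPC module's subnets into the EKS cluster module (the `eks` module call and its `vpc_id`/`subnet_ids` inputs are illustrative assumptions):

```hcl
module "eks" {
  source = "git::https://github.com/myorg/terraform-modules.git//eks-cluster?ref=v1.2.3"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids
}
```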
**Workspaces vs directory-per-environment**

```sh
terraform workspace new dev
terraform workspace new prod
terraform workspace select prod
terraform apply
```

Problems with workspaces:
- all environments share one backend and one configuration — easy to apply to the wrong one after a forgotten `workspace select`
- environment-specific logic leaks into the code (`if workspace == "prod"` everywhere)

Directory-per-environment:

```
envs/
├── dev/
├── staging/
└── prod/
```
Each directory has its own `backend.tf`, its own variables, its own state.

Advantages: full isolation between environments (state, credentials, blast radius). Drawback: code duplication → mitigated by Terragrunt.
```hcl
# terragrunt.hcl (root)
remote_state {
  backend = "s3"
  config = {
    bucket         = "company-tf-state-${path_relative_to_include()}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-3"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```

```hcl
# live/prod/vpc/terragrunt.hcl
include "root" { path = find_in_parent_folders() }

terraform {
  source = "git::https://github.com/myorg/terraform-modules.git//vpc?ref=v1.2.3"
}

inputs = {
  cidr_block  = "10.0.0.0/16"
  environment = "prod"
}
```

```sh
cd live/prod/vpc
terragrunt apply
```
**moved blocks (Terraform 1.1+)**

```hcl
moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}

resource "aws_instance" "web_server" {
  # ...
}
```

`terraform plan` sees the rename as a no-op (no destroy/create).
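On Terraform versions before 1.1, or for operations a `moved` block cannot express, the same refactors are done imperatively with state surgery (a sketch; the resource addresses are illustrative):

```sh
# rename a resource in state (pre-1.1 equivalent of a moved block)
terraform state mv aws_instance.web aws_instance.web_server

# stop managing a resource without destroying it
terraform state rm aws_instance.legacy

# adopt an existing resource into state (pre-1.5 equivalent of an import block)
terraform import aws_s3_bucket.legacy my-existing-bucket
```

State surgery edits the state directly — take a backup first (`terraform state pull > backup.tfstate`).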
**import blocks (Terraform 1.5+)**

```hcl
import {
  to = aws_s3_bucket.legacy
  id = "my-existing-bucket"
}

resource "aws_s3_bucket" "legacy" {
  bucket = "my-existing-bucket"
}
```

Workflow:
1. Write an `import` block + a minimal `resource` definition
2. `terraform plan` → shows the diffs between the config and reality
3. `terraform apply` → the import is recorded in state

**Policy-as-code (OPA / Conftest)**

```rego
# policies/no_public_s3.rego
package terraform

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  resource.change.after.acl == "public-read"
  msg := sprintf("S3 bucket '%s' must not be public-read", [resource.address])
}

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  not resource.change.after.server_side_encryption_configuration
  msg := sprintf("S3 bucket '%s' must have encryption enabled", [resource.address])
}
```

```sh
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
conftest test tfplan.json -p policies/
```
```yaml
name: Terraform
on:
  pull_request:
    paths: ['envs/**', 'modules/**']

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  plan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        env: [dev, staging, prod]
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/terraform-${{ matrix.env }}
          aws-region: eu-west-3
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.7.0
      - run: terraform fmt -check -recursive
      - run: terraform init
        working-directory: envs/${{ matrix.env }}
      - run: terraform validate
        working-directory: envs/${{ matrix.env }}
      - run: terraform plan -out=tfplan
        working-directory: envs/${{ matrix.env }}
      # Conftest reads the JSON form of the plan, not the binary
      - run: terraform show -json tfplan > tfplan.json
        working-directory: envs/${{ matrix.env }}
      - name: Conftest
        run: conftest test envs/${{ matrix.env }}/tfplan.json -p policies/
      - name: Comment plan on PR
        uses: actions/github-script@v7
        # ... post plan output as PR comment
```
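Drift detection (listed in this skill's scope) reuses the same plan in a scheduled workflow: `terraform plan -detailed-exitcode` exits 0 when there is no drift and 2 when live infrastructure diverges from the config. A minimal nightly job, as a sketch (the notification step is left out):

```yaml
name: Drift detection
on:
  schedule:
    - cron: '0 6 * * *'   # every morning at 06:00 UTC

jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: envs/prod
      # exit code 2 = drift detected → fails the job and surfaces the alert
      - run: terraform plan -detailed-exitcode
        working-directory: envs/prod
```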
```sh
brew install terraform-docs
terraform-docs markdown table modules/vpc/ > modules/vpc/README.md
```

Add it as a pre-commit hook:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/terraform-docs/terraform-docs
    rev: v0.17.0
    hooks:
      - id: terraform-docs-go
        args: ["markdown", "table", "--output-file", "README.md", "./modules"]
```
**Anti-patterns**
- `terraform apply` without a saved plan — unreviewed changes
- module pins on a branch (`?ref=main`) — random breakage
- `force_destroy = true` on S3/RDS — possible data loss

**Related**
- `cloud-architecture` (this plugin)
- `kubernetes-patterns` (this plugin)
- `terraform-engineer` (this plugin)
- `ci-cd-engineer` (atum-stack-backend)