From atum-workflows
AWS cloud deployment pattern library. Leverages the 3 official AWS Labs MCP servers (`awsknowledge` over HTTP for live AWS docs, `awspricing` for cost analysis, `awsiac` for CloudFormation/CDK/Terraform IaC generation) to cover EC2, ECS/Fargate, EKS, Lambda, API Gateway, ALB/NLB, S3, CloudFront + Route 53, RDS (MySQL/Postgres/MariaDB/Aurora/Aurora Serverless v2), DynamoDB, ElastiCache (Redis/Memcached), SQS/SNS/EventBridge, Step Functions, AWS Amplify (full-stack with hosting + auth + data + storage), Amazon Cognito, AWS Secrets Manager + Systems Manager Parameter Store, AWS WAF + Shield, AWS Backup + AWS DRS, IAM + IAM Identity Center (SSO), AWS Organizations + Control Tower, CloudWatch + X-Ray + Container Insights, AWS Budgets + Cost Explorer + Compute Optimizer, AWS Migration Hub for GCP/Azure→AWS migration, and SageMaker for ML workloads. Use it when deploying any application to AWS, migrating to AWS from on-prem or another cloud, hardening an existing AWS account, or auditing AWS costs. It references the `awsknowledge`, `awspricing`, and `awsiac` MCP servers declared in this plugin's `.mcp.json`, so Claude Code can directly query AWS docs and pricing and generate IaC at runtime via the official AWS Labs MCPs. It differs from the generic terraform-patterns skill through its AWS-specific service patterns and the official AWS Labs tooling integration.
```sh
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-workflows
```

This skill uses the workspace's default tool permissions.
This skill covers the canonical patterns for deploying to **AWS**, relying on the **3 official AWS Labs MCP servers** declared in `plugins/atum-workflows/.mcp.json`.
Available AWS MCP servers:
- `awsknowledge` (HTTP, https://knowledge-mcp.global.api.aws) — live access to the full official AWS documentation
- `awspricing` (uvx) — real-time pricing data for all AWS services
- `awsiac` (uvx) — IaC generation (CloudFormation, CDK, Terraform) with best practices

User prerequisites: `uvx` installed (Python: `pip install uv`), AWS CLI v2 configured (`aws configure`), and valid IAM credentials.
```
Workload type
├── Full-stack web app (Next.js, React, Vue, Angular)
│   ├── Static + serverless API → AWS Amplify Hosting + Amplify backend
│   ├── Containerized → ECS Fargate + ALB
│   └── Server-rendered + scaling → Elastic Beanstalk (legacy) or App Runner (modern)
├── REST / GraphQL API / webhooks
│   ├── Serverless → API Gateway + Lambda
│   ├── Containers → ECS Fargate + ALB or App Runner
│   └── Performance-critical → EC2 + ALB (full control)
├── Static website
│   └── S3 + CloudFront (with OAC, not legacy OAI)
├── Background jobs / queues
│   ├── Lambda + SQS (event-driven)
│   ├── Lambda + EventBridge Scheduler (cron)
│   ├── ECS Fargate scheduled tasks
│   └── Step Functions for complex workflows
├── Database
│   ├── Postgres / MySQL / MariaDB → RDS
│   ├── Serverless Postgres → Aurora Serverless v2
│   ├── Massive NoSQL key-value → DynamoDB
│   ├── Cache → ElastiCache (Redis cluster mode enabled)
│   └── Time-series → Timestream
├── Storage
│   ├── Objects → S3 (Intelligent-Tiering by default)
│   ├── NFS file shares → EFS (Elastic File System)
│   └── Archive → S3 Glacier Deep Archive
├── ML / AI
│   ├── Training + inference → SageMaker
│   ├── Hosted foundation models → Amazon Bedrock
│   └── Pre-trained APIs → Rekognition / Comprehend / Transcribe / Polly
├── Networking
│   ├── DNS → Route 53
│   ├── CDN → CloudFront
│   ├── Multi-account VPC peering → Transit Gateway
│   └── On-prem ↔ AWS → Direct Connect or Site-to-Site VPN
└── Migration
    └── AWS Migration Hub + Application Migration Service (MGN)
```
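The first branches of this tree can be sketched as a simple lookup, useful when scripting recommendations; the workload labels below are illustrative, not part of the skill's API:

```typescript
// Hypothetical helper encoding a slice of the decision tree above.
type Workload = 'static-site' | 'serverless-api' | 'container-api' | 'cron-job'

function recommend(w: Workload): string {
  switch (w) {
    case 'static-site': return 'S3 + CloudFront (OAC)'
    case 'serverless-api': return 'API Gateway + Lambda'
    case 'container-api': return 'ECS Fargate + ALB'
    case 'cron-job': return 'Lambda + EventBridge Scheduler'
  }
}

console.log(recommend('static-site')) // S3 + CloudFront (OAC)
```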
```yaml
# .github/workflows/deploy-aws.yml
on:                     # trigger assumed; adjust to your branching model
  push:
    branches: [main]
permissions:
  id-token: write       # OIDC
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy-role
          aws-region: eu-west-3
      - run: aws s3 sync ./dist s3://my-bucket/
      - run: aws cloudfront create-invalidation --distribution-id E123456 --paths "/*"
```
AWS-side setup:

- an OIDC identity provider for `token.actions.githubusercontent.com`
- no `AWS_ACCESS_KEY_ID` stored in GitHub Secrets (OIDC removes the need for static keys)

```typescript
// handler.ts
import { APIGatewayProxyHandler } from 'aws-lambda'

export const handler: APIGatewayProxyHandler = async (event) => {
  const name = event.queryStringParameters?.name || 'world'
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello ${name}` }),
  }
}
```
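Before wiring the handler into SAM, a quick local smoke test helps; this sketch inlines the same logic with a hand-rolled event type rather than depending on the `aws-lambda` typings:

```typescript
// Minimal local invocation sketch -- same logic as handler.ts above,
// with a minimal event type instead of APIGatewayProxyEvent.
type ApiEvent = { queryStringParameters?: Record<string, string> }

const handler = async (event: ApiEvent) => {
  const name = event.queryStringParameters?.name || 'world'
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello ${name}` }),
  }
}

handler({ queryStringParameters: { name: 'AWS' } }).then((res) => {
  console.log(res.statusCode, res.body) // 200 {"message":"Hello AWS"}
})
```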
```yaml
# AWS SAM template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./
      Handler: handler.handler
      Runtime: nodejs20.x
      MemorySize: 256
      Timeout: 10
      Architectures: [arm64]   # ARM = -20% cost
      Environment:
        Variables:
          NODE_ENV: production
      Events:
        ApiEvent:
          Type: HttpApi   # API Gateway v2 (HTTP API, cheaper than REST)
          Properties:
            Path: /hello
            Method: get
```
```sh
sam build && sam deploy --guided
```
```sh
# 1. Push the image to ECR
aws ecr create-repository --repository-name my-api
docker tag my-api:latest 123456789012.dkr.ecr.eu-west-3.amazonaws.com/my-api:1.0.0
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-3.amazonaws.com
docker push 123456789012.dkr.ecr.eu-west-3.amazonaws.com/my-api:1.0.0

# 2. Task definition + service via CDK / Terraform / Console
```
```typescript
// With AWS CDK (TypeScript)
import { Stack, StackProps } from 'aws-cdk-lib'
import { Construct } from 'constructs'
import * as ecs from 'aws-cdk-lib/aws-ecs'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as ecs_patterns from 'aws-cdk-lib/aws-ecs-patterns'

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props)

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 3, natGateways: 1 })
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc })

    new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Api', {
      cluster,
      cpu: 512,
      memoryLimitMiB: 1024,
      desiredCount: 2,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('123456789012.dkr.ecr.eu-west-3.amazonaws.com/my-api:1.0.0'),
        containerPort: 8080,
      },
      publicLoadBalancer: true,
    })
  }
}
```

```sh
npx cdk deploy
```
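For `cdk deploy` to find the stack, the CDK app needs an entrypoint that instantiates it; a minimal sketch (the file paths and stack id are assumptions, not from this skill):

```typescript
// bin/app.ts -- minimal CDK app wiring for the ApiStack above (paths assumed)
import { App } from 'aws-cdk-lib'
import { ApiStack } from '../lib/api-stack'

const app = new App()
new ApiStack(app, 'ApiStack', {
  // Pin the region to match the ECR image URI used in the stack
  env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: 'eu-west-3' },
})
```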
```typescript
// CDK
import * as cdk from 'aws-cdk-lib'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as rds from 'aws-cdk-lib/aws-rds'

const cluster = new rds.DatabaseCluster(this, 'Db', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_15_5,
  }),
  serverlessV2MinCapacity: 0.5,
  serverlessV2MaxCapacity: 4,
  writer: rds.ClusterInstance.serverlessV2('writer'),
  readers: [rds.ClusterInstance.serverlessV2('reader1', { scaleWithWriter: true })],
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
  storageEncrypted: true,
  backupRetention: cdk.Duration.days(14),
  cloudwatchLogsExports: ['postgresql'],
  performanceInsightRetention: rds.PerformanceInsightRetention.LONG_TERM,
})
```
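The 0.5–4 ACU range above bounds the bill; a back-of-envelope sketch, where the per-ACU-hour rate is an illustrative assumption (query the `awspricing` MCP for the real regional rate):

```typescript
// Rough Aurora Serverless v2 monthly cost: avg ACU x rate x hours.
// RATE_PER_ACU_HOUR is illustrative, not the real eu-west-3 price.
const RATE_PER_ACU_HOUR = 0.12
const HOURS_PER_MONTH = 730

function monthlyAcuCost(avgAcu: number): number {
  return avgAcu * RATE_PER_ACU_HOUR * HOURS_PER_MONTH
}

console.log(monthlyAcuCost(0.5)) // idling at the 0.5 ACU floor: ~$44/month
console.log(monthlyAcuCost(4))   // pinned at the 4 ACU ceiling: ~$350/month
```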
Best practices:

- `storageEncrypted: true`
- `enableCloudwatchLogsExports: ['postgresql']`

```typescript
import * as s3 from 'aws-cdk-lib/aws-s3'
import * as cf from 'aws-cdk-lib/aws-cloudfront'
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins'

const bucket = new s3.Bucket(this, 'SiteBucket', {
  publicReadAccess: false,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
  enforceSSL: true,
  versioned: true,
})

const distribution = new cf.Distribution(this, 'SiteDistribution', {
  defaultBehavior: {
    origin: origins.S3BucketOrigin.withOriginAccessControl(bucket), // OAC, not OAI
    viewerProtocolPolicy: cf.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    cachePolicy: cf.CachePolicy.CACHING_OPTIMIZED,
  },
  defaultRootObject: 'index.html',
  errorResponses: [
    { httpStatus: 404, responseHttpStatus: 200, responsePagePath: '/index.html' }, // SPA fallback
  ],
})
```
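Uploading the built site and invalidating the cache can be wired into the same stack; a sketch using `aws-s3-deployment` (the `./dist` path is an assumption about your build output):

```typescript
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment'

// Continues the stack above: publishes ./dist to the bucket and
// invalidates the CloudFront cache on each deploy.
new s3deploy.BucketDeployment(this, 'DeploySite', {
  sources: [s3deploy.Source.asset('./dist')],
  destinationBucket: bucket,
  distribution,                 // triggers the invalidation
  distributionPaths: ['/*'],
})
```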
```typescript
import * as cdk from 'aws-cdk-lib'
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb'

const table = new dynamodb.Table(this, 'AppTable', {
  partitionKey: { name: 'PK', type: dynamodb.AttributeType.STRING },
  sortKey: { name: 'SK', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  pointInTimeRecovery: true,
  encryption: dynamodb.TableEncryption.AWS_MANAGED,
  removalPolicy: cdk.RemovalPolicy.RETAIN,
})

// GSIs
table.addGlobalSecondaryIndex({
  indexName: 'GSI1',
  partitionKey: { name: 'GSI1PK', type: dynamodb.AttributeType.STRING },
  sortKey: { name: 'GSI1SK', type: dynamodb.AttributeType.STRING },
})
```
Single-table design: one DynamoDB table for the whole app, with polymorphic PK/SK. Pattern documented by Rick Houlihan (AWS).
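A sketch of what polymorphic keys look like in practice; entity names and key prefixes are illustrative, not from this skill:

```typescript
// Illustrative single-table key helpers: one table, composite PK/SK,
// with GSI1 inverted so a child entity can be fetched directly by its id.
const userPk = (userId: string) => `USER#${userId}`
const orderSk = (orderId: string) => `ORDER#${orderId}`

const orderItem = (userId: string, orderId: string, total: number) => ({
  PK: userPk(userId),       // item collection: all of a user's rows share this PK
  SK: orderSk(orderId),     // SK discriminates entity types within the collection
  GSI1PK: orderSk(orderId), // inverted index: look up an order without the user id
  GSI1SK: userPk(userId),
  total,
})

console.log(orderItem('42', 'A7', 99).PK) // USER#42
```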
```typescript
// Inference endpoint
import * as sagemaker from 'aws-cdk-lib/aws-sagemaker'

const model = new sagemaker.CfnModel(this, 'Model', {
  executionRoleArn: role.roleArn, // IAM role with SageMaker + S3 + ECR access
  primaryContainer: {
    image: '763104351884.dkr.ecr.eu-west-3.amazonaws.com/pytorch-inference:2.1.0-gpu-py310',
    modelDataUrl: 's3://my-models/model.tar.gz',
  },
})

const endpointConfig = new sagemaker.CfnEndpointConfig(this, 'EndpointConfig', {
  productionVariants: [{
    modelName: model.attrModelName,
    variantName: 'AllTraffic',
    instanceType: 'ml.g5.xlarge',
    initialInstanceCount: 1,
  }],
})

new sagemaker.CfnEndpoint(this, 'Endpoint', {
  endpointConfigName: endpointConfig.attrEndpointConfigName,
})
```
Cost: SageMaker can get expensive. Prefer Bedrock for hosted LLMs; use SageMaker only for custom training.
Recommended workflow:

1. `awsknowledge` MCP guides discovery of the source resources (GCP/Azure)
2. `awsiac` MCP generates the AWS IaC equivalents
3. `awspricing` MCP estimates the post-migration TCO

Anti-pattern: `AdministratorAccess` on the roles → use least privilege.

Related skills:

- kubernetes-patterns (this plugin)
- terraform-patterns (this plugin)
- cloud-architecture (this plugin)
- security-expert (atum-compliance)
- penetration-tester (atum-compliance)