Audits AWS AI/GenAI architectures against the Well-Architected GenAI Lens — operational excellence, security, reliability, performance, cost optimization, and sustainability. This skill should be used when the user asks to "audit AWS AI architecture", "review Bedrock configuration", "assess SageMaker security", "optimize AWS AI costs", "evaluate AWS GenAI compliance", "review AWS Well-Architected for AI", or mentions AWS AI audit, Bedrock audit, SageMaker review, AWS GenAI security assessment, or AWS AI cost optimization review.
References:
- references/aws-audit-checks.md
- references/aws-cost-audit.md
- references/aws-security-audit.md
Audit AWS architectures for AI/GenAI workloads against the Well-Architected Framework GenAI Lens, evaluating all 6 pillars (GENOPS, GENSEC, GENREL, GENPERF, GENCOST, GENSUS) with automatable checks, waste detection, AWS-specific security auditing, and per-regulation compliance mapping.
The Well-Architected Lens is the standard, not a guideline. The GenAI Lens checks are not suggestions; they are the minimum baseline for AI workloads in production. A system that fails GENSEC (security) or GENREL (reliability) is not production-ready, regardless of model performance.
Cost waste is a finding, not an optimization. In an audit, waste is not "an improvement opportunity"; it is a finding with a severity. A SageMaker endpoint sitting idle 24/7 with sporadic traffic is a HIGH finding, not a "nice to have".
Automate the audit, not just the fix. Every check must be repeatable and automatable (AWS Config rules, CloudWatch alarms, custom Lambda checks). An audit that depends on manual review does not scale.
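The automation principle above can be sketched as the kind of check a scheduled Lambda would run. The `EndpointUsage` shape, the 10-invocations/day threshold, and the finding fields are illustrative assumptions; in practice the usage numbers would come from the CloudWatch `Invocations` metric for each SageMaker endpoint.

```python
from dataclasses import dataclass

@dataclass
class EndpointUsage:
    name: str
    invocations_per_day: float  # e.g. daily sum of the CloudWatch "Invocations" metric
    instance_count: int

def idle_endpoint_findings(usages, threshold_per_day=10.0):
    """Flag endpoints below the idle threshold as HIGH-severity GENCOST findings.

    threshold_per_day is an assumed cutoff; tune it per workload.
    """
    findings = []
    for u in usages:
        if u.invocations_per_day < threshold_per_day:
            findings.append({
                "check": "GENCOST-idle-endpoint",
                "resource": u.name,
                "severity": "HIGH",
                "evidence": f"{u.invocations_per_day:.1f} invocations/day "
                            f"on {u.instance_count} instance(s)",
            })
    return findings
```

In a deployed check, the resulting findings would typically be forwarded to Security Hub or a ticketing system rather than returned to a caller.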
Parameters:
MODO: [express | standard | deep]
FORMATO: [ejecutivo | técnico | híbrido]
VARIANTE: [security | cost | reliability | performance | full]
SCOPE: [bedrock | sagemaker | opensearch | full-stack]
Automatic detection:
- If CDK/CloudFormation with Bedrock resources exists → SCOPE includes bedrock
- If SageMaker endpoints exist → SCOPE includes sagemaker
- If the input mentions "costo" or "billing" → VARIANTE=cost
- If the input mentions "seguridad" or "compliance" → VARIANTE=security
- Default: MODO=standard, VARIANTE=full, SCOPE=full-stack
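The detection rules above can be sketched as a small routine over a parsed CloudFormation template and the user's request. The resource-type prefixes and the English keyword synonyms are assumptions added for illustration; the defaults mirror the parameter list.

```python
def detect_audit_params(template: dict, user_input: str):
    """Derive SCOPE and VARIANTE from a CFN template dict and the request text."""
    resource_types = {
        r.get("Type", "") for r in template.get("Resources", {}).values()
    }
    scope = set()
    if any(t.startswith("AWS::Bedrock::") for t in resource_types):
        scope.add("bedrock")
    if any(t.startswith("AWS::SageMaker::") for t in resource_types):
        scope.add("sagemaker")

    text = user_input.lower()
    # English synonyms ("cost", "security") added alongside the documented keywords.
    if "costo" in text or "cost" in text or "billing" in text:
        variante = "cost"
    elif "seguridad" in text or "security" in text or "compliance" in text:
        variante = "security"
    else:
        variante = "full"

    return {
        "MODO": "standard",
        "VARIANTE": variante,
        "SCOPE": sorted(scope) or ["full-stack"],
    }
```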
Evaluates the workload against the 6 GenAI Lens pillars with AWS-specific checks.
Load references:
Read ${CLAUDE_SKILL_DIR}/references/aws-audit-checks.md
Deliverable: Per-pillar scorecard with findings.
| Pillar | Checks | Pass | Fail | N/A | Score |
|---|---|---|---|---|---|
| GENOPS (Operational Excellence) | 7 | [n] | [n] | [n] | [%] |
| GENSEC (Security) | 10 | [n] | [n] | [n] | [%] |
| GENREL (Reliability) | 8 | [n] | [n] | [n] | [%] |
| GENPERF (Performance) | 6 | [n] | [n] | [n] | [%] |
| GENCOST (Cost) | 8 | [n] | [n] | [n] | [%] |
| GENSUS (Sustainability) | 3 | [n] | [n] | [n] | [%] |
For each failed check: specific AWS evidence (console screenshot, CLI output, config file), severity, and remediation tied to a concrete AWS service.
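The per-pillar score in the table above can be computed as a pass rate over applicable checks, with N/A checks excluded. Treating an all-N/A pillar as unscored (rather than 100%) is an assumed convention, not part of the Lens.

```python
def pillar_score(passed: int, failed: int):
    """Pass rate (percent) over applicable checks; N/A checks are excluded upstream.

    Returns None when every check was N/A, i.e. the pillar has no score.
    """
    applicable = passed + failed
    if applicable == 0:
        return None
    return round(100.0 * passed / applicable, 1)
```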
Audits the specific configuration of each AWS service used for AI.
Bedrock:
SageMaker:
OpenSearch Serverless:
Otros servicios:
Entregable: Configuration matrix (service × setting × current × recommended × gap).
Audits the complete security posture of the AI stack on AWS.
Load references:
Read ${CLAUDE_SKILL_DIR}/references/aws-security-audit.md
OWASP LLM Top 10 — AWS Controls:
IAM Audit:
Wildcard `*` in resources or actions for AI services?
Network Security:
Data Protection:
Compliance Mapping:
Entregable: Security controls matrix, IAM analysis, compliance gap report.
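The IAM wildcard check above can be sketched as a scan over a standard IAM JSON policy document. The set of AI service prefixes is an assumption (Bedrock, SageMaker, and OpenSearch Serverless namespaces); a real check would also cover NotAction/NotResource forms.

```python
AI_PREFIXES = ("bedrock:", "sagemaker:", "aoss:")  # assumed AI namespaces of interest

def wildcard_findings(policy: dict):
    """Flag Allow statements granting wildcard actions or resources on AI services."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        # Only statements touching AI services (or global "*") are in scope.
        ai_related = any(a == "*" or a.startswith(AI_PREFIXES) for a in actions)
        if not ai_related:
            continue
        if "*" in resources or any(a == "*" or a.endswith(":*") for a in actions):
            findings.append({
                "check": "GENSEC-iam-wildcard",
                "severity": "HIGH",
                "statement": stmt.get("Sid", "<no Sid>"),
            })
    return findings
```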
Detects cost waste and optimization opportunities across AWS AI services.
Load references:
Read ${CLAUDE_SKILL_DIR}/references/aws-cost-audit.md
Waste Detection:
Cost Attribution:
Optimization Opportunities:
Deliverable: Waste inventory with estimated savings, prioritized optimization roadmap.
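A waste-inventory entry's estimated savings can be sketched as the monthly spend attributable to idle capacity. The hourly rate is an assumed input (taken from AWS pricing for the instance type), and 730 hours/month follows the common AWS billing convention.

```python
HOURS_PER_MONTH = 730  # common AWS billing convention

def idle_endpoint_savings(hourly_rate: float, instance_count: int,
                          utilization: float) -> float:
    """Estimated monthly savings from eliminating idle endpoint capacity.

    utilization: fraction of capacity actually used (0.0 to 1.0).
    """
    monthly_cost = hourly_rate * instance_count * HOURS_PER_MONTH
    return round(monthly_cost * (1.0 - utilization), 2)
```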
Evaluates availability, performance, and scalability of the AI stack.
Reliability checks:
Performance checks:
Load testing evidence:
Entregable: Reliability scorecard, performance baselines vs. actuals, scaling assessment.
Transforms findings into concrete, prioritized AWS actions.
For each finding:
Roadmap phases:
Deliverable: AWS-specific remediation roadmap with Gantt, dependency graph, and estimated savings.
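Roadmap ordering can be sketched as a simple priority sort: highest severity first, with ties broken by lower remediation effort. The severity weights and the `effort_days` field are assumptions for illustration, not a standard scheme.

```python
SEVERITY_WEIGHT = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1}

def prioritize(findings):
    """Order findings by severity (descending), then by effort_days (ascending).

    findings: list of dicts with 'severity' and 'effort_days' keys.
    """
    return sorted(
        findings,
        key=lambda f: (-SEVERITY_WEIGHT.get(f["severity"], 0), f["effort_days"]),
    )
```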
| Audit Mode | Checks | Depth | Effort | When to Use |
|---|---|---|---|---|
| Express | GENSEC + GENCOST only | Config review | 1-2 days | Quick security/cost check |
| Standard | All 6 pillars | Config + metrics | 3-5 days | Quarterly review, pre-scaling |
| Deep | All 6 pillars + code + IaC review | Full audit | 5-10 days | Pre-compliance, post-incident |
Multi-service account (AI + non-AI): Focus on resources tagged as AI. If there are no tags, the first finding is "implement a tagging strategy for AI resources".
Multi-account architecture: Audit every account with AI workloads. Verify cross-account IAM roles, shared services (VPC, KMS), and Service Control Policies in Organizations.
Bedrock-only (no SageMaker): Narrow the audit scope. Focus on Guardrails, IAM, cost (model selection), and Knowledge Bases. SageMaker-specific checks are marked N/A.
Regulatory pre-audit: Increase the depth of S3 (security) and S5 (reliability). Produce evidence in an auditable format. Map each check against the specific regulatory framework (HIPAA, PCI-DSS).
Post-incident audit: Prioritize the incident's causal chain. If it was a cost overrun → S4 first. If it was a security breach → S3 first. If it was an outage → S5 first.
| Skill | Relationship |
|---|---|
| aws-architecture-design | AWS design against which the audit is run |
| aws-architecture-implementation | Receives the remediation roadmap for implementation |
| ai-architecture-audit | General (cloud-agnostic) audit; complementary |
| ai-software-architecture | 6-layer model as structural reference |
| ai-design-patterns | Patterns expected in the architecture |
| genai-architecture | Expected GenAI patterns (RAG, agents, guardrails) |
| security-architecture | General security framework |
| finops | General FinOps, complementary to the cost audit |
| compliance-assessment | General compliance assessment, complementary |
if FORMATO == "ejecutivo":
Scorecard visual (6 pilares) + top 10 findings + cost savings summary + roadmap
Audience: CTO, CISO, CFO
if FORMATO == "técnico":
Full 6-section audit + all checks + AWS-specific remediation
Audience: Cloud architects, DevOps, security engineers
if FORMATO == "híbrido":
Executive scorecard + technical deep-dive completo
Audience: Technical leads reporting to C-Level
## {System Name} — AWS AI Architecture Audit Report
### Executive Summary
[6-pillar scorecard, top critical findings, estimated cost savings, compliance status]
### S1: Well-Architected GenAI Assessment [Score: X%]
[Pillar-by-pillar results with check details]
### S2: AWS Service Configuration [Findings: N]
[Service × setting × current × recommended matrix]
### S3: Security Posture [Score: X/5]
[OWASP LLM mapping, IAM analysis, network security, data protection]
### S4: Cost Optimization [Savings: $XX,XXX/month est.]
[Waste inventory, optimization opportunities, FinOps recommendations]
### S5: Reliability & Performance [Score: X/5]
[Availability assessment, performance baselines, scaling analysis]
### S6: AWS Remediation Roadmap
[Phased roadmap with AWS-specific actions, dependencies, estimated effort]
### Appendix: Evidence Log
[All checks with AWS-specific evidence]