Strategic quality engineering framework covering test strategy, automation architecture, quality gates, metrics, and shift-left practices. Use when the user asks to "design test strategy", "plan quality gates", "set up test automation", "assess quality maturity", "define quality metrics", or mentions "test pyramid", "shift-left", "CI/CD quality", "automation architecture", "quality engineering".
Source: javimontano/mao-pm-apex (claudepluginhub).

Bundled files:
- examples/README.md
- examples/sample-output.html
- examples/sample-output.md
- prompts/metaprompts.md
- prompts/use-case-prompts.md
- references/body-of-knowledge.md
- references/knowledge-graph.mmd
- references/quality-patterns.md
- references/state-of-the-art.md
Strategic quality engineering framework. Designs the system — QA teams execute it. For architects, engineering leads, and quality strategists who define how quality works.
Quality is not inspected in; it is built into every commit. Quality is an architectural attribute, not a lifecycle phase: it is designed into the structure of the code, automated in the pipeline, and measured with leading indicators, not with production bugs.
The user provides a system or project name as $ARGUMENTS. Parse $1 as the system/project name used throughout all output artifacts.
Parameters:
{MODO}: piloto-auto (default) | desatendido | supervisado | paso-a-paso
{FORMATO}: markdown (default) | html | dual
{VARIANTE}: ejecutiva (~40% — S1 maturity + S4 gates + S5 metrics) | técnica (full 6 sections, default)

Before generating the framework, detect the codebase context:
!find . -name "*.test.*" -o -name "*.spec.*" -o -name "*test*" -type d -o -name "jest*" -o -name "pytest*" | head -20
Use detected testing frameworks, languages, and existing test structure to tailor recommendations.
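The detection step above can be sketched as a small helper. This is an illustrative sketch only: the marker-file-to-framework mapping below is an assumption for the example, not an exhaustive list and not part of the skill.

```python
from pathlib import Path

# Hypothetical marker-to-framework mapping (illustrative assumption).
MARKERS = {
    "jest.config.js": "jest",
    "jest.config.ts": "jest",
    "pytest.ini": "pytest",
    "playwright.config.ts": "playwright",
}

def detect_frameworks(root: str) -> set[str]:
    """Return the set of test frameworks whose marker files exist under root."""
    root_path = Path(root)
    found = set()
    for marker, framework in MARKERS.items():
        if any(root_path.rglob(marker)):
            found.add(framework)
    return found
```

A real implementation would also inspect `package.json` dependencies and `pyproject.toml` sections rather than relying on config filenames alone.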
If reference materials exist, load them:
Read ${CLAUDE_SKILL_DIR}/references/quality-patterns.md
| Level | Name | Characteristics |
|---|---|---|
| 1 | Ad-Hoc | No formal strategy. Reactive testing. No automation. |
| 2 | Repeatable | Basic processes. Some automation. Pre-release testing. |
| 3 | Defined | Formal strategy. Automation architecture. Quality gates in CI. |
| 4 | Managed | Automated gates enforced. Metrics-driven decisions. Shift-left. |
| 5 | Optimizing | Continuous improvement. Predictive analytics. Zero-toil automation. |
Output: Current score (1-5) overall + per dimension, target maturity (12-month), gap analysis, DORA benchmark comparison, quick wins + long-term improvements.
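The gap-analysis output described above can be reduced to a small helper. The dimension names and default target level here are illustrative assumptions, not part of the skill's contract:

```python
def maturity_gap(scores: dict[str, int], target: int = 4) -> dict:
    """Given per-dimension maturity scores (1-5), compute the overall
    score, the per-dimension gap to the target, and the quick wins
    (dimensions only one level below target)."""
    overall = sum(scores.values()) / len(scores)
    gaps = {dim: max(0, target - s) for dim, s in scores.items()}
    quick_wins = [d for d, g in gaps.items() if g == 1]
    return {"overall": round(overall, 1), "gaps": gaps, "quick_wins": quick_wins}
```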
Monolith → Test Pyramid: Unit 55% | Integration 25% | API 15% | E2E 5%
Microservices → Test Diamond: Unit 20% | Integration 40% | Contract 30% | E2E 10%
| Type | Owner | Frequency | Pass Criteria |
|---|---|---|---|
| Unit | Developer | Every commit | 100% scope; <1s/test |
| Integration | Developer | Every PR | Happy + unhappy paths; <3s/test |
| Contract | Both teams | Every PR | Consumer expectations = Provider responses |
| API | QA/Automation | PR + nightly | HTTP status, schema, business logic |
| E2E | QA/Automation | Nightly | Workflow completes; no UI glitches |
| Performance | Perf Eng | Weekly + pre-release | Throughput/latency meets SLA |
| Security | Security Eng | SAST per commit, DAST weekly | No critical vulns; compliance pass |
| Exploratory | QA | Per sprint | Novel bugs found; readiness confirmed |
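The contract row above ("consumer expectations = provider responses") is normally handled by a tool such as Pact, but the core idea can be sketched by hand. This is a minimal illustration of the principle, not Pact's API:

```python
def satisfies_contract(expected: dict, actual: dict) -> bool:
    """Minimal consumer-driven contract check: every field the consumer
    expects must be present in the provider response with a matching type.
    Extra provider fields are allowed (expanding a response is safe;
    removing or retyping a field breaks consumers)."""
    for field, expected_type in expected.items():
        if field not in actual or not isinstance(actual[field], expected_type):
            return False
    return True
```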
For framework recommendations by language and automation patterns (Page Object, Screenplay, Test Data Factory, Golden Master, Testcontainers), read: ${CLAUDE_SKILL_DIR}/references/automation-patterns.md
Evaluate: language alignment, team skills, community support, maintenance cost, scalability, reporting, cost (OSS vs commercial).
| Stage | Tests | Timeout | On Failure |
|---|---|---|---|
| Commit Gate (every push) | Unit + lint + SAST | 5 min | Block merge |
| PR Gate (PR create/update) | Integration + contract + coverage >70% | 15 min | Block merge to main |
| Nightly Gate (post-merge) | Full E2E + API regression + DAST + perf baseline | 60 min | Alert team; manual review |
| Release Gate (pre-release) | Full load test (10x peak) + smoke + manual sign-off | 120 min | Block release |
| Production Gate (post-deploy) | Smoke + canary metrics validation | 15 min | Automated rollback |
For detailed pipeline YAML examples and report/dashboard architecture, read: ${CLAUDE_SKILL_DIR}/references/pipeline-stages.md
| Gate | Pass Criteria | Escalation |
|---|---|---|
| Commit | All tests pass; no lint errors; no critical vulns | Developer fixes locally |
| PR | All tests pass; coverage >70%; no regressions | Tech lead reviews |
| Nightly | E2E pass; perf regression <5%; DAST reviewed | QA lead investigates |
| Release | Load SLA met; E2E pass; security sign-off; checklist complete | Release manager + eng leads |
| Production | Smoke pass; canary metrics within 2-sigma of baseline | On-call engineer |
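The production-gate criterion above ("canary metrics within 2-sigma of baseline") can be sketched as a simple statistical check; a real rollout controller would use windowed samples and multiple metrics, so treat this as an illustrative sketch only:

```python
from statistics import mean, stdev

def canary_within_baseline(baseline: list[float], canary_value: float,
                           sigmas: float = 2.0) -> bool:
    """Flag the canary for automated rollback when its metric falls
    outside N sigma of the baseline samples (2-sigma per the gate table)."""
    mu, sd = mean(baseline), stdev(baseline)
    return abs(canary_value - mu) <= sigmas * sd
```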
| Metric | Target |
|---|---|
| Code review catch rate | >50% of issues |
| Test coverage | 70-80% |
| PR review time | <24 hours |
| Build stability | >95% pass |
| Flaky test rate | <2% |
| PR gate execution time | <15 min |
| Deployment frequency | >1/week |
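The flaky-rate target above (<2%) presumes an agreed definition of "flaky". One common heuristic, sketched here as an assumption (teams vary on the exact rule), counts a test as flaky when its recent run history contains both passes and failures:

```python
def flaky_rate(run_history: dict[str, list[bool]]) -> float:
    """Share of tests whose recent runs mix passes and failures.
    run_history maps test name -> list of pass/fail results."""
    if not run_history:
        return 0.0
    flaky = sum(1 for results in run_history.values()
                if True in results and False in results)
    return flaky / len(run_history)
```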
| Metric | Target |
|---|---|
| Production incidents | <1/week |
| Escaped defects | <5% of total bugs |
| MTTR | <1 hour (critical) |
| Regression rate | <1% |
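The escaped-defect target above (<5% of total bugs) implies a ratio of production-found defects to all defects found. A minimal sketch:

```python
def escaped_defect_rate(found_in_prod: int, found_pre_release: int) -> float:
    """Escaped defects as a share of all defects found in the period;
    the lagging-metric target above is <5% (0.05)."""
    total = found_in_prod + found_pre_release
    return found_in_prod / total if total else 0.0
```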
4 panels:
- Test Health: pass/fail, execution time, flaky list, coverage trend
- Quality Metrics: DORA, incidents, escaped defects
- Automation Coverage: by type and team
- SLA Compliance: build stability, PR pass rate, deploy success
| Phase | Weeks | Focus | Key Deliverables | Success Criteria |
|---|---|---|---|---|
| Foundations | 1-4 | Baseline CI/CD + strategy | Commit gate, test strategy doc, framework selection, metrics dashboard | 90%+ build pass; strategy approved |
| Automation | 5-8 | PR gate + test coverage | Integration + contract tests, API suite, test data factory | PR gate >90% pass; API coverage >80% |
| Advanced | 9-12 | E2E + perf + security | E2E suite, perf regression tests, SAST/DAST integration | E2E >70% critical paths; perf baseline set |
| Optimization | Ongoing | Continuous improvement | Monthly flaky elimination, quarterly metric review, bi-annual framework assessment | Flaky <2%; stable execution time |
| Scenario | Approach |
|---|---|
| Greenfield (no tests) | Smoke tests on critical paths first (20-30 cases); grow coverage organically to 70-80% |
| Legacy migration (no coverage) | Golden Master pattern → characterization tests before refactoring → gradual replacement |
| Microservices contract breaks | Consumer-driven contract testing (Pact); replaces E2E between services |
| Event-driven / async | Event schema validation + eventual consistency tests + saga pattern tests |
| Multi-platform (mobile+web+API) | Unify at API layer; platform-specific UI tests only for native functionality |
| Regulated (banking, health, PCI) | Add compliance test layer, data masking, audit trail verification, mandatory pen testing |
| No third-party sandbox | Service virtualization (WireMock); quarterly manual testing against real systems |
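The Golden Master pattern named in the legacy-migration row can be sketched as follows. This is an illustrative minimal version, not a library API: the first run records the legacy system's current behavior; subsequent runs must reproduce it exactly before any refactoring is considered safe.

```python
import json
from pathlib import Path

def check_golden_master(output: dict, golden_path: str) -> bool:
    """Characterization test against a recorded golden master.
    First run records the observed behavior; later runs compare to it."""
    path = Path(golden_path)
    serialized = json.dumps(output, sort_keys=True)
    if not path.exists():
        path.write_text(serialized)
        return True  # First run: record the baseline, don't judge it.
    return serialized == path.read_text()
```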
| Decision | Enables | Constrains | When to Use |
|---|---|---|---|
| Full test pyramid (unit-heavy) | Fastest feedback, cheapest to maintain, high isolation | Limited integration confidence, misses contract issues | Monoliths, well-defined APIs, mature codebase |
| Test diamond (integration-heavy) | High confidence in service interactions, catches contract breaks | Slower execution, requires test infrastructure (containers, mocks) | Microservices, event-driven, distributed systems |
| Shift-left maximum (pre-commit gates) | Defects caught earliest (10-100x cheaper to fix), developer ownership | Slower developer workflow, requires team buy-in and tooling investment | High-frequency deployment teams (>1/day), regulated industries |
| Coverage targets (70-80%) | Measurable quality baseline, identifies untested areas | Goodhart's law risk (tests for coverage, not value), maintenance cost | New projects establishing baselines; avoid as sole quality metric |
| Automation-first (minimize manual) | Repeatable, scalable, fast feedback loops | High upfront investment, misses exploratory edge cases | Stable features with clear acceptance criteria; complement with 25-30% exploratory |
IF deployment frequency > 1/day → full CI/CD with automated gates; no manual gates in critical path; feature flags required
IF financial/healthcare/PCI → add data integrity, encryption, audit trail, PII masking tests; mandatory pen testing
IF team has no automation experience → 4-6 week ramp-up; start with API-level; avoid UI frameworks initially
IF performance SLA exists → add perf regression to CI; baseline in staging; alerts on PR threshold violations
IF multiple teams → consumer-driven contract testing; shared test data contracts; monthly contract reviews
IF legacy with no tests → characterization tests first; NEVER refactor without tests
| Case | Handling Strategy |
|---|---|
| Greenfield with no existing tests | Smoke tests on critical paths first (20-30 cases); grow coverage organically to 70-80%; do NOT refactor without tests in place |
| Legacy migration with no coverage | Golden Master pattern for characterization tests before refactoring; gradual replacement; prioritize highest-risk paths |
| Contract breaks in microservices | Consumer-driven contract testing (Pact); replaces E2E between services; monthly contract reviews between teams |
| Team with no automation experience | 4-6 week ramp-up; start at the API level; avoid UI frameworks initially; pair with an automation engineer |
| Strict regulation (banking, health, PCI) | Add a compliance testing layer, data masking, audit trail verification, mandatory pen testing; document evidence per gate |
| Decision | Rejected Alternative | Rationale |
|---|---|---|
| Test shape driven by architecture (pyramid vs diamond) | One shape for every project | The right shape depends on the architecture (monolith vs microservices), not on convention; a diamond on a monolith wastes resources |
| Shift-left with pre-commit gates | Testing only in staging/pre-release | Every defect found after merge costs 10-100x more to fix; pre-commit hooks and PR gates are an investment, not overhead |
| Quality gates with measurable criteria and timeouts | Decorative gates with no pass/fail criteria | A gate without a measurable criterion is a decorative traffic light; every gate defines pass/fail, a timeout, and an escalation path |
| 25-30% of the budget for exploratory testing | 100% automation | Automation does not find bugs nobody knows to look for; exploratory testing uncovers edge cases that automation misses |
graph TD
subgraph Core["Core: Quality Engineering"]
MAT[Quality Maturity Assessment]
TS[Test Strategy]
AA[Automation Architecture]
QG[Quality Gates]
QM[Quality Metrics]
IP[Implementation Plan]
end
subgraph Inputs["Inputs"]
ARCH[Architecture Type]
CODE[Codebase & Test Infra]
REQ[Quality Requirements]
TEAM[Team Skills]
end
subgraph Outputs["Outputs"]
FRAME[Quality Framework Doc]
SCORE[Maturity Scorecard]
GATES[Gate Criteria Checklist]
DASH[Metrics Dashboard]
end
subgraph Related["Related Skills"]
DSO[devsecops-architecture]
SWA[software-architecture]
OBS[observability]
TEST[testing-strategy]
end
ARCH --> TS
CODE --> MAT
REQ --> QG
TEAM --> AA
MAT --> TS --> AA --> QG --> QM --> IP
IP --> FRAME
IP --> SCORE
IP --> GATES
IP --> DASH
FRAME --> DSO
FRAME --> SWA
DASH --> OBS
TS --> TEST
| Format | Name | Content |
|---|---|---|
| Markdown | A-01_Quality_Engineering.md | Full framework with maturity assessment, test strategy, automation architecture, quality gates, metrics dashboard design, and implementation plan. Mermaid diagrams of pipeline stages and test shape. |
| XLSX | A-01_Quality_Maturity_Scorecard.xlsx | Interactive scorecard with per-dimension assessment (0-100%), gap analysis, DORA benchmark comparison, and an improvement plan with quick wins and long-term improvements. |
| HTML | {fase}_Quality_Engineering_{cliente}_{WIP}.html | Same content in branded HTML (MetodologIA Design System v5). Self-contained, WCAG AA, responsive. Type: Light-First Technical. Includes a visual per-dimension maturity scorecard, an interactive gate criteria checklist, and a leading/lagging metrics dashboard. |
| DOCX | {fase}_quality_engineering_{cliente}_{WIP}.docx | Generated via python-docx with MetodologIA Design System v5. Cover page, automatic TOC, headings in Poppins (navy), body in Montserrat, gold accents. Maturity scorecard, gate criteria, and leading/lagging metrics tables with zebra striping. Headers and footers with MetodologIA branding. |
| PPTX | {fase}_quality_engineering_{cliente}_{WIP}.pptx | Generated via python-pptx with MetodologIA Design System v5. Slide master with navy gradient, titles in Poppins, body in Montserrat, gold accents. Max 20 slides executive / 30 technical. Speaker notes with evidence references. Slides: Quality Maturity Assessment (6 dimensions), Test Strategy Shape, Automation Architecture, Quality Gate Criteria, Metrics Dashboard, Implementation Plan. |
| Dimension | Weight | Criterion |
|---|---|---|
| Trigger Accuracy | 10% | The description activates on the right triggers (test strategy, quality gates, automation, maturity, shift-left) without false positives against devsecops-architecture or testing-strategy |
| Completeness | 25% | The 6 sections cover maturity, strategy, automation, gates, metrics, and implementation with no gaps; every pipeline stage has pass/fail criteria |
| Clarity | 20% | Instructions are executable without ambiguity; every gate has a measurable criterion, timeout, and escalation; metrics have numeric targets |
| Robustness | 20% | Handles greenfield, legacy, microservices, event-driven, multi-platform, and strict regulation with specific adaptations |
| Efficiency | 10% | The process has no redundant steps; the executive variant reduces to S1+S4+S5 without losing decision capability over gates and metrics |
| Value Density | 15% | Every section delivers direct practical value; the maturity scorecard and gate criteria are immediate decision tools |
Minimum threshold: 7/10.
Before delivering quality engineering output:
| Format | Default | Description |
|---|---|---|
| markdown | Yes | Rich Markdown + Mermaid diagrams. Token-efficient. |
| html | On demand | Branded HTML (Design System). Visual impact. |
| dual | On demand | Both formats. |
Default output is Markdown with embedded Mermaid diagrams. HTML generation requires explicit {FORMATO}=html parameter.
Primary: A-01_Quality_Engineering.md (A-01_Quality_Engineering.html when {FORMATO}=html) — Maturity assessment, test strategy, automation architecture, quality gates, metrics dashboard, implementation plan.
Secondary: Quality maturity scorecard, gate criteria checklist, metrics dashboard template, implementation timeline.
Author: Javier Montaño | Last updated: March 12, 2026