Universal current-state assessment producing 10-section analysis for ANY MetodologIA service type. Use when the user asks to "analyze the codebase", "assess current architecture", "run AS-IS analysis", "technical audit", "evaluate tech debt", "code quality assessment", "assess current state", "service assessment", "QA maturity", "PMO assessment", "RPA readiness", "data maturity", "cloud readiness", "design maturity", "talent gap analysis", or mentions "Phase 1", "current state", "legacy system review", "technical health check".
Source: javimontano/mao-pm-apex (Claude plugin hub).

Bundled resources:
- examples/README.md
- examples/sample-output.html
- examples/sample-output.md
- examples/sample-output.pptx-spec.md
- examples/sample-output.xlsx-spec.md
- prompts/metaprompts.md
- prompts/use-case-prompts.md
- references/body-of-knowledge.md
- references/knowledge-graph.mmd
- references/service-variants.md
- references/state-of-the-art.md
Generates a 10-section current-state assessment for ANY MetodologIA service type (SDA, QA, Management, RPA, Data-AI, Cloud, SAS, UX-Design). For software codebases (SDA), produces: executive dashboard, technology inventory, code structure, C4 architecture, code quality metrics, technical debt inventory, NFR heatmap, security posture, operational model, and risk register with prioritized recommendations. For other service types, sections S1-S8 adapt to domain-specific dimensions while S0 (Executive Dashboard), S9 (Risk Register), and S10 (Recommendations) remain universal.
You cannot chart a path toward the future without understanding, with brutal honesty, where you stand today.
Parse both from $ARGUMENTS:

- $1 — Path to codebase root (default: current working directory)
- $2 — Analysis depth: full (default) | executive (sections S0, S5, S9, S10 only)
Parameters:
{MODO}: piloto-auto (default) | desatendido | supervisado | paso-a-paso
{FORMATO}: markdown (default) | html | dual
{VARIANTE}: ejecutiva (~40% — sections S0, S5, S9, S10 only) | técnica (full, default)
{TIPO_SERVICIO}: SDA (default) | QA | Management | RPA | Data-AI | Cloud | SAS | UX-Design
Before starting analysis, detect the service type from context:
1. If {TIPO_SERVICIO} is explicitly provided → use it.
2. Otherwise, infer it from context and confirm: "Tipo de servicio detectado: {X}. ¿Confirma o desea ajustar?"
Auto-detect codebase characteristics before starting analysis:
```bash
# Language detection
find . -name "*.ts" -o -name "*.py" -o -name "*.java" -o -name "*.go" -o -name "*.rs" | head -30

# Build system detection
ls -la package.json pom.xml build.gradle Cargo.toml go.mod setup.py pyproject.toml Makefile 2>/dev/null

# Infrastructure detection
find . -name "Dockerfile" -o \( -name "*.yaml" -path "*/k8s/*" \) -o -name "docker-compose*" | head -10

# API surface detection
find . \( -name "*.yaml" -path "*/swagger/*" \) -o -name "openapi*" -o -name "*.proto" | head -10
```
Use detected languages, build tools, and infrastructure to scope each section.
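As one way to consolidate the detection step, the manifest files above can be mapped to ecosystems programmatically. A minimal Python sketch; the `ECOSYSTEMS` table and the helper name are illustrative assumptions, not part of the skill:

```python
from pathlib import Path

# Hypothetical manifest-to-ecosystem mapping, used only for scoping;
# extend it to match whatever the shell detection above surfaces.
ECOSYSTEMS = {
    "package.json": "Node.js",
    "pom.xml": "Java (Maven)",
    "build.gradle": "Java (Gradle)",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "pyproject.toml": "Python",
    "setup.py": "Python",
}

def detect_ecosystems(root: str) -> set:
    """Return the set of ecosystems whose manifest files exist under root."""
    found = set()
    for manifest, ecosystem in ECOSYSTEMS.items():
        if any(Path(root).rglob(manifest)):
            found.add(ecosystem)
    return found
```

The resulting set then scopes which sections get language-specific metrics.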
Mandatory (varies by service type):
| Service Type | Mandatory Inputs |
|---|---|
| SDA | Complete codebase with commit history, build configuration, deployment configuration |
| QA | Test suite documentation, QA processes, tool landscape inventory, defect metrics |
| Management | PMO artifacts, methodology documentation, team assessments, delivery metrics |
| RPA | Process documentation (BPMN), bot inventory, automation logs, process metrics |
| Data-AI | Data catalog, pipeline documentation, model registry, data quality reports |
| Cloud | Infrastructure inventory, cloud accounts, monitoring dashboards, cost reports |
| SAS | Team composition, skills matrix, project history, utilization reports |
| UX-Design | Design system, research repository, usability reports, accessibility audits |
Recommended (all types):
Assumptions:
Cannot do:
| Missing Input | Impact | Workaround |
|---|---|---|
| No deployment config | Cannot assess infrastructure | Infer from Dockerfiles, K8s manifests, CI/CD scripts; flag as assumption |
| No API specs | Cannot fully document integrations | Reverse-engineer from code (HTTP clients, REST annotations, gRPC stubs) |
| No security audit | Cannot benchmark against standards | Lightweight SAST scan (SQL injection, hardcoded secrets, weak crypto patterns) |
| No performance data | Cannot assess NFR baseline | Code-level heuristics (complexity suggests bottlenecks) + recommend profiling |
| <1 year history | Cannot assess trends | Current snapshot only; flag as point-in-time analysis |
| Monorepo unclear | Cannot map boundaries | Infer from package naming, deployment units, team ownership patterns |
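The lightweight SAST workaround in the table above can be sketched as a simple pattern scan. The regexes below are illustrative assumptions only, and no substitute for a dedicated scanner:

```python
import re

# Illustrative secret patterns; a real pass would use a dedicated SAST tool.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key id shape
]

def scan_for_secrets(text: str, path: str = "<memory>"):
    """Return (path, line_no, line) tuples for lines matching a secret pattern."""
    findings = []
    for no, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((path, no, line.strip()))
    return findings
```

Each hit would be recorded in S7 tagged as [INFERENCIA], since it stands in for a missing audit.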
| Decision | Enables | Constrains | When to Use |
|---|---|---|---|
| Full 10-section analysis | Maximum depth, complete audit trail | 5-7 days, high token cost | High-stakes modernization, regulated environments |
| Executive variant (S0+S5+S9+S10) | Fast insights, decision-ready | Misses detailed architecture/quality data | Time-constrained, executive audience |
| Security-focused (S7 deep) | Compliance-ready, vulnerability inventory | Narrower scope | Pre-audit, compliance-driven engagements |
| Quality-focused (S4+S5 deep) | Actionable tech debt remediation plan | Less architecture context | Tech debt reduction initiatives |
**S0: Executive Dashboard.** System snapshot: LOC, modules, integrations, team size, years in production, tech stack summary, development status, maintenance cost estimate. Health score (1-10) with color-coded indicators.
**S1: Technology Inventory.** Per layer: Backend, Frontend, Data, Infrastructure, Development. Dependency tree table with EOL status. Flag deprecated dependencies. Version currency score per component.
**S2: Code Structure.** Module decomposition, coupling analysis (afferent/efferent), layering assessment, cyclomatic complexity distribution, anti-patterns (god classes, circular dependencies, duplication). Package cohesion metrics.
**S3: C4 Architecture.** Level 1 (Context): system as black box with external actors. Level 2 (Containers): major services, databases, data flows. Pattern catalog with quality assessment. Architecture fitness functions where applicable.
**S4: Code Quality Metrics.** Complexity distribution (p50, p95), duplication %, test coverage by layer, dependency depth, code smells by type. Dashboard with severity-coded cards. Trend analysis if git history is available.
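Given per-function complexity values collected by an external tool (e.g. radon or lizard; that collection step is assumed here, not shown), the p50/p95 summary can be computed as:

```python
from statistics import quantiles

def complexity_distribution(values):
    """Summarize per-function cyclomatic complexity as p50/p95.

    `values` is a plain list of per-function complexity numbers; gathering
    them from the codebase is left to whatever analyzer fits the stack.
    """
    if not values:
        return {"p50": 0, "p95": 0}
    if len(values) == 1:
        v = values[0]
        return {"p50": v, "p95": v}
    # quantiles with n=100 yields the 1st..99th percentile cut points
    cuts = quantiles(sorted(values), n=100)
    return {"p50": cuts[49], "p95": cuts[94]}
```

A large spread between p50 and p95 usually signals a handful of hotspot functions worth listing individually in S5.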
**S5: Technical Debt Inventory.** Per item: description, category (7 types: design, code, test, build, documentation, infrastructure, dependency), severity, technical impact, business impact, remediation pathway, prioritization score (impact x cost-to-fix).
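The prioritization score defined here (impact x cost-to-fix) can be sketched as follows; the 1-5 scales and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    description: str
    category: str      # one of the 7 types: design, code, test, build,
                       # documentation, infrastructure, dependency
    severity: str
    impact: int        # assumed 1-5 combined technical/business impact
    cost_to_fix: int   # assumed 1-5 remediation effort

    @property
    def priority_score(self) -> int:
        # Prioritization score as defined in this section: impact x cost-to-fix
        return self.impact * self.cost_to_fix

def prioritized(items):
    """Return debt items sorted by descending priority score."""
    return sorted(items, key=lambda i: i.priority_score, reverse=True)
```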
Conditional logic:
**S6: NFR Heatmap.** 7x5 matrix: performance, security, maintainability, scalability, reliability, usability, interoperability. Scored 1-10 with evidence. Gap analysis against targets. Priority ranking by business impact.
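The gap analysis against targets can be sketched as a small helper; the dimension names and values in the usage below are hypothetical:

```python
def nfr_gaps(scores, targets):
    """Rank NFR dimensions by gap (target - current score), largest first.

    `scores` and `targets` map dimension name -> 1-10 value; dimensions
    without a stated target are skipped rather than guessed.
    """
    gaps = {
        dim: targets[dim] - score
        for dim, score in scores.items()
        if dim in targets
    }
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `nfr_gaps({"security": 4, "performance": 7}, {"security": 9, "performance": 8})` ranks security first with a gap of 5.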
**S7: Security Posture.** Authentication, authorization, encryption, data protection, known CVEs (SBOM analysis), compliance gaps. Severity-rated findings with remediation recommendations. OWASP Top 10 mapping where applicable.
**S8: Operational Model.** Deployment model, monitoring/observability, incident response (MTTR), release management, capacity management. Operational readiness scorecard. DevOps maturity assessment (DORA metrics if available).
**S9: Risk Register.** Top-10 risks: probability x impact matrix. Per risk: category, score, current mitigations, recommended improvements, owner, status. Risk velocity indicator (growing/stable/shrinking).
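The probability x impact scoring can be sketched as follows (1-5 scales assumed for illustration, giving a 1-25 score):

```python
def risk_score(probability: int, impact: int) -> int:
    """Probability x impact on assumed 1-5 scales, yielding 1-25."""
    return probability * impact

def top_risks(risks, n=10):
    """risks: dicts with 'probability' and 'impact'; return the top-n by score."""
    return sorted(
        risks,
        key=lambda r: risk_score(r["probability"], r["impact"]),
        reverse=True,
    )[:n]
```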
**S10: Recommendations.** Top 5-10 findings with root cause + business impact. Quick wins (under 5 eng-days). Strategic roadmap (immediate/short/medium/long-term). Refactor vs rewrite vs replace decision tree per major component.
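One possible shape for the refactor vs rewrite vs replace decision, with illustrative thresholds that are assumptions rather than the skill's actual decision tree:

```python
def modernization_path(business_value: int, code_quality: int,
                       cots_available: bool) -> str:
    """Toy decision rule for refactor vs rewrite vs replace.

    business_value and code_quality are assumed 1-10 scores; the cut-offs
    below are placeholders to be calibrated per engagement.
    """
    if cots_available and business_value <= 5:
        return "replace"   # commodity capability: buy, don't rebuild
    if code_quality >= 5:
        return "refactor"  # structure is salvageable: improve incrementally
    return "rewrite"       # high-value, low-quality: rebuild the component
```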
When {TIPO_SERVICIO} ≠ SDA, sections S0, S9, and S10 remain universal. Sections S1-S8 adapt to the service type:
S1-S8 adaptations are defined per variant ({TIPO_SERVICIO}=QA, Management, RPA, Data-AI, Cloud, SAS, UX-Design); see references/service-variants.md.

Every recommendation in S10 must reference evidence from S0-S9.
Typical engagement: 5-7 days for systems under 500K LOC.
Primary: 03_Analisis_AS-IS_{TIPO_SERVICIO}_{project}.md (or .html if {FORMATO}=html|dual) — Full 10-section current-state assessment with domain-specific analysis, debt inventory, risk register, and prioritized recommendations. When {TIPO_SERVICIO}=SDA, for backward compatibility also accept 03_Analisis_AS-IS_{project}.md.
Secondary: 02_Brief_Tecnico_{project}.md — Executive summary (S0 + key findings).
Included diagrams:
| Format | Default | Description |
|---|---|---|
| markdown | ✅ | Rich Markdown + Mermaid diagrams. Token-efficient. |
| html | On demand | Branded HTML (Design System). Visual impact. |
| dual | On demand | Both formats. |
Default output is Markdown with embedded Mermaid diagrams. HTML generation requires explicit {FORMATO}=html parameter.
| Case | Handling Strategy |
|---|---|
| Monorepo with multiple deployment units | Decompose by deployment unit. Analyze coupling between units. Separate metrics per service. |
| No CI/CD configured | Infer from Dockerfiles, cloud configs, README scripts. Flag inference risk explicitly. |
| No existing test suite | Flag coverage as CRITICAL (0%). Extrapolate quality risk via complexity. Recommend prioritized test buildout. |
| Multiple languages in codebase | Separate metrics per language. +1 risk per additional language for integration burden. |
| System >500K LOC | Phased analysis: Tier 1 core domains, Tier 2 supporting. Executive summary + prioritized deep-dives. |
| EOL framework detected | Escalate to CRITICAL risk. Document security exposure and upgrade-path complexity. |
| Vendor lock-in with proprietary dependencies | Flag proprietary dependencies with migration cost estimates and open-source alternatives. |
| Decision | Discarded Alternative | Rationale |
|---|---|---|
| 10 sections as universal framework | 5-section framework, free-form assessment | 10 sections cover the full spectrum (exec, tech, arch, quality, debt, NFR, security, ops, risk, recommendations). Sections S0, S9, S10 are universal across service types. |
| Service-type variants for S1-S8 | Single framework for all service types | SDA, QA, Management, RPA, Data-AI, Cloud, SAS, UX-Design have fundamentally different evaluation dimensions. Adapting S1-S8 maximizes relevance. |
| Evidence-based diagnosis with tags | Opinion-based assessment | Tags [CODIGO], [CONFIG], [DOC], [INFERENCIA], [SUPUESTO] guarantee traceability. Every finding has verifiable backing. |
| Cross-section traceability (S10 to S0-S9) | Recommendations disconnected from findings | Every S10 recommendation references evidence from earlier sections. Eliminates unfounded recommendations. |
```mermaid
graph TD
    subgraph Core["Core Concepts"]
        EXEC["S0: Executive Dashboard"]
        TECH["S1-S8: Domain Sections"]
        DEBT["S5: Technical Debt"]
        RISK["S9: Risk Register"]
        RECS["S10: Recommendations"]
    end
    subgraph Inputs["Inputs"]
        CODE["Codebase / Artifacts"]
        CONFIG["Build & Deploy Config"]
        HISTORY["Operational History"]
        TIPO["Service Type"]
    end
    subgraph Outputs["Outputs"]
        ASIS["AS-IS Analysis Report"]
        BRIEF["Technical Brief"]
        C4["C4 Diagrams"]
        NFR["NFR Heatmap"]
    end
    subgraph Related["Related Skills"]
        TOBE["architecture-tobe"]
        SCENARIOS["scenario-evaluation"]
        COST["cost-estimation"]
        SECURITY["security-assessment"]
    end
    CODE --> TECH
    CONFIG --> TECH
    HISTORY --> RISK
    TIPO --> TECH
    EXEC --> ASIS
    TECH --> DEBT
    DEBT --> RISK
    RISK --> RECS
    RECS --> ASIS
    ASIS --> BRIEF
    TECH --> C4
    TECH --> NFR
    ASIS -.-> TOBE
    RISK -.-> SCENARIOS
    DEBT -.-> COST
    TECH -.-> SECURITY
```
Markdown format (default):

```markdown
# Analisis AS-IS: {project} ({TIPO_SERVICIO})

## S0: Executive Dashboard
| Indicador | Valor |
|---|---|
| LOC | ... |
| Health Score | .../10 |

## S1-S8: [Domain-Specific Sections]

## S5: Technical Debt Inventory
| Item | Category | Severity | Impact | Remediation | Priority Score |
|---|---|---|---|---|---|
...

## S9: Risk Register
| Risk | Probability | Impact | Score | Mitigations | Owner |
|---|---|---|---|---|---|
...

## S10: Recommendations
### Quick Wins (<5 eng-days)
### Strategic Roadmap
```
HTML format (on demand):
Branded HTML using the MetodologIA Design System:
- Executive Dashboard with color-coded health indicators
- Interactive C4 diagrams (Mermaid rendered)
- NFR Heatmap as a visual quadrant chart
- Collapsible sections for S1-S8
- Risk Register with severity color coding
- Responsive layout for on-screen presentation
Filename: 03_Analisis_AS-IS_{TIPO_SERVICIO}_{project}_{WIP}.html

DOCX format (on demand): {fase}_{entregable}_{cliente}_{WIP}.docx
XLSX format (on demand): {fase}_{entregable}_{cliente}_{WIP}.xlsx
PPTX format (on demand): {fase}_{entregable}_{cliente}_{WIP}.pptx

| Dimension | Weight | Criterion |
|---|---|---|
| Trigger Accuracy | 10% | Correct activation on AS-IS, current state, tech debt, code quality, and architecture assessment keywords, plus per-service-type variants. |
| Completeness | 25% | All 10 sections complete with service-type adaptation. Cross-section traceability from S10 to S0-S9 verifiable. |
| Clarity | 20% | Health score 1-10 interpretable. Debt items with quantitative severity scoring. NFR heatmap backed by evidence. |
| Robustness | 20% | 8 service types supported. Edge cases (monorepo, no CI/CD, >500K LOC, multi-language, EOL frameworks) handled. |
| Efficiency | 10% | Executive variant reduces scope to S0+S5+S9+S10 (~40%). Automatic context injection detects language and build system. |
| Value Density | 15% | S10 produces quick wins (<5 days) and a strategic roadmap. Debt scored by impact x cost-to-fix. Risk register with velocity indicator. |
Minimum threshold: 7/10. Below this threshold, review the evidence backing of findings and the cross-section traceability.
Author: Javier Montano · MetodologIA Community | Last updated: March 15, 2026