PMO-APEX v1.0 — Agentic Project Excellence. Living ontology (CLAUDE.md hub + 13 sub-files), progressive MOAT loading (L1/L2/L3), G0 security gate, context optimization, rendering engine, CLI init wizard, PM retrospective, browser audit (MCP Playwright), cross-platform skill conversion, meta-cognition protocols (FULL/LIGHT), formalized committee spawning (Steering Committee with 7 Advisors). 48 agents, 109 MOAT skills, 103 commands, 19 scripts, 5 quality gates (G0-G3). Zero-hallucination protocol. Evidence tagging is mandatory. NEVER prices, only effort magnitudes.
npx claudepluginhub javimontano/mao-pm-apex
Advance to next pipeline step — automatically determines and executes the next action in the active pipeline
Agile assessment and discovery — comprehensive evaluation of Agile practices, culture, and improvement opportunities
Assess Agile maturity — Scrum, Kanban, XP practices, team self-organization, continuous improvement
Assess governance model — decision effectiveness, escalation efficiency, compliance, stakeholder satisfaction
Assess PMO maturity — OPM3/P3M3 framework, capability areas, improvement roadmap
Assess portfolio health — strategic alignment, resource balance, risk distribution, value delivery
Assess organizational risk tolerance — risk appetite, tolerance thresholds, risk culture, escalation effectiveness
Generate 05_Risk_Register — risk identification, P-I matrix, response strategies, monitoring plan
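The P-I matrix named above can be sketched as a simple probability-times-impact score. This is a minimal illustration assuming a 1-5 ordinal scale for both axes; the band thresholds are hypothetical, not taken from the deliverable template.

```python
# Illustrative P-I (probability-impact) scoring on an assumed 5x5 matrix.
# The score thresholds below are hypothetical examples, not APEX defaults.
def classify_risk(probability: int, impact: int) -> tuple[int, str]:
    """Return (score, band) for a risk rated 1-5 on each axis."""
    score = probability * impact          # 1..25
    if score >= 15:
        band = "High"                     # escalate, maintain active response plan
    elif score >= 8:
        band = "Medium"                   # assign owner, define triggers, monitor
    else:
        band = "Low"                      # accept, revisit at periodic review
    return score, band

print(classify_risk(4, 5))  # (20, 'High')
```

Each banded risk would then map to one of the response strategies (avoid, mitigate, transfer, accept) tracked in the register.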
Assess SAFe readiness — ART configuration, PI planning maturity, value stream alignment, lean governance
Assess team effectiveness — Team Topologies alignment, skills gap, collaboration patterns, cognitive load
Assess vendor performance — vendor scorecard, SLA compliance, risk assessment, relationship health
Assess PMBOK compliance — process groups, knowledge areas, documentation quality, gate adherence
Audit PM deliverables — 10-criterion excellence scorecard, cross-checks, quality verdict
Alias for audit-quality
Alias for run-auto
Backlog grooming/refinement — 60-minute structured refinement with estimation, acceptance criteria, and prioritization
Maturity benchmark vs industry — compare PM practices against industry standards and peer organizations
Alias for benchmark-maturity
Visual audit of PM tools via Playwright — screenshot and analyze project management tool dashboards
Alias for track-budget
Alias for track-burndown
Alias for generate-charter
Generate 14_Closure_Report — closure checklist, acceptance log, final metrics, knowledge transfer, archive protocol
Alias for close-project
Alias for plan-communications
Generate 05b_Steering_Committee — governance charter, escalation matrix, decision authority, meeting cadence
Convert APEX skills to Cursor, Codex, Gemini — cross-platform skill conversion
Alias for standup
Alias for present-dashboard
Alias for run-deep
Generate 02_Scope_Statement_WBS — scope definition, WBS decomposition, acceptance criteria
Generate 09_Kickoff_Package — kickoff agenda, presentation, team introductions, ground rules, first iteration plan
Guided demo of APEX — interactive tour of PMO-APEX capabilities with sample project
Generate 06_Methodology_Playbook — methodology selection, ceremony calendar, DoD, DoR, estimation approach
DevOps-PM alignment assessment — evaluate integration between DevOps practices and project management
Alias for report-evm
Export deliverable to PDF with APEX branding — header, footer, colors, typography
Alias for run-express
Generate 00_Project_Charter — the governing document for the entire project engagement
Alias for run-guided
Alias for track-team-health
Hybrid methodology assessment — evaluate blend of Agile and traditional practices, optimization opportunities
Evolve PM deliverables with feedback — incorporate stakeholder feedback, audit findings, and new information
Alias for improve-deliverables
Wizard for new project initialization — create project/ structure, session files, and initial calibration
Kanban assessment — flow analysis, WIP limits, pull system maturity, continuous improvement
Alias for deliver-kickoff
Alias for report-lessons
Generate 01_Stakeholder_Register — stakeholder map, influence-interest matrix, RACI, engagement strategy
General PM maturity evaluation — comprehensive multi-dimensional maturity assessment across all PM domains
Command palette — categorized interactive menu of all APEX commands, agents, and pipeline steps
Alias for design-methodology
Context window optimization — analyze token usage, prune unnecessary context, maximize effective context
SAFe PI planning facilitation — full-day structured PI planning with program board, confidence vote, and risk ROAMing
Alias for pi-planning
Generate 07_Communication_Plan — communication matrix, meeting schedule, escalation procedures, stakeholder engagement
Generate 04_Resource_Plan — resource requirements, team topology, RACI, skills gap analysis
Generate 03_Schedule_Baseline — schedule with Gantt, critical path, 3-point estimates, Monte Carlo
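The 3-point estimates feeding the schedule baseline follow the standard PERT (beta) approximation; the task durations below are illustrative only.

```python
# PERT three-point estimate: expected duration and standard deviation
# from optimistic / most-likely / pessimistic inputs (values illustrative).
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Return (expected duration, standard deviation) per the beta approximation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

e, s = pert_estimate(4, 6, 14)
print(f"E={e:.1f}d, sigma={s:.1f}d")  # E=7.0d, sigma=1.7d
```

The per-task sigmas are what the Monte Carlo step consumes to produce a probabilistic finish date rather than a single-point one.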
Alias for sprint-planning
PMO assessment and discovery — PMO value, structure, services, maturity, improvement roadmap
Alias for review-pmo
Portfolio management assessment — strategic alignment, prioritization, resource optimization, governance
Generate 08_Executive_Dashboard — project health dashboard with KPIs, traffic lights, forecasts
Explore project docs/URLs, generate priming-rag files for PM context loading
Alias for prime-project
Alias for backlog-refinement
Render Mermaid diagrams to PNG — extract and render all Mermaid blocks from deliverables
Generate 11_EVM_Analysis — Earned Value Management analysis with CPI, SPI, forecasts, and trends
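The CPI/SPI and forecast figures in this deliverable follow the standard Earned Value formulas. A minimal sketch, with PV/EV/AC expressed in effort units (consistent with the no-prices rule) and illustrative numbers:

```python
# Standard EVM indices and a CPI-based forecast; inputs are in effort
# units (e.g. FTE-months) and the figures below are illustrative.
def evm(pv: float, ev: float, ac: float, bac: float) -> dict:
    cpi = ev / ac                 # cost performance index (EV / AC)
    spi = ev / pv                 # schedule performance index (EV / PV)
    eac = bac / cpi               # estimate at completion, CPI-based
    vac = bac - eac               # variance at completion
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "VAC": vac}

m = evm(pv=50, ev=45, ac=60, bac=100)
print({k: round(v, 2) for k, v in m.items()})
# {'CPI': 0.75, 'SPI': 0.9, 'EAC': 133.33, 'VAC': -33.33}
```

A CPI below 1.0 and a negative VAC, as here, are exactly the early-warning signals the EVM dashboard is meant to surface.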
Generate 12_Lessons_Learned — lessons learned register, retrospective framework, knowledge capture protocol
Generate 10_Status_Report — periodic status report with RAG, variances, risks, and decisions
Rescue stalled project planning — diagnose blockers, propose recovery plan, re-energize pipeline
Alias for rescue-stalled
Alias for plan-resources
Sprint/iteration retrospective — 60-minute structured retro with action items and improvement tracking
Quantitative retrospective of PM engagement — metrics-driven review of the entire PM planning engagement
Generate 13_PMO_Health_Check — INTERNAL PMO maturity assessment, process compliance, improvement recommendations
Alias for sprint-review
Alias for assess-risks
Autonomous PM pipeline — runs the full pipeline with minimal user intervention (16 deliverables, 4 gates auto-approved)
Deep PM governance — comprehensive planning with steering committee (7 deliverables, G1+G2)
Express PM pipeline — Quick Charter + Schedule + Dashboard in 1 session (3 deliverables, G1 simplified)
Guided PM pipeline — full project planning with human facilitation (16 deliverables, 4 gates with pause)
SAFe assessment and discovery — ART readiness, value stream analysis, scaling evaluation
Scan for exposed credentials — Gate G0 security scan before pipeline execution
Alias for plan-schedule
Alias for define-scope
Monte Carlo what-if on schedule — probabilistic simulation of schedule outcomes with sensitivity analysis
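The probabilistic simulation this command describes can be sketched with a serial chain of tasks and triangular duration distributions; the task triples below are hypothetical, not real project data.

```python
import random

# Monte Carlo over a serial chain of tasks with triangular durations
# (optimistic, most-likely, pessimistic); the task data is illustrative.
TASKS = [(5, 8, 15), (3, 4, 9), (10, 12, 20)]

def simulate(runs: int = 10_000, seed: int = 42) -> list[float]:
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode)
    return [sum(rng.triangular(o, p, m) for o, m, p in TASKS) for _ in range(runs)]

durations = sorted(simulate())
p50 = durations[len(durations) // 2]
p85 = durations[int(len(durations) * 0.85)]
print(f"P50={p50:.1f}d  P85={p85:.1f}d")  # P85 is a common commitment level
```

A real run would also rank tasks by their correlation with total duration to produce the sensitivity (tornado) analysis the command mentions.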
Sprint planning session — 120-minute structured planning with capacity, goal setting, and task breakdown
Sprint review / demo facilitation — 60-minute structured review with stakeholder feedback and backlog adaptation
Alias for map-stakeholders
Daily standup facilitation — 15-minute structured standup with impediment tracking and action items
Alias for report-status
Alias for convene-steering
Budget variance and forecast — planned vs actual FTE-months, variance analysis, EAC forecast
Sprint/release burndown chart — track remaining work vs time with trend analysis and completion forecast
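The trend analysis and completion forecast above amount to fitting a line through remaining work and projecting its zero-crossing. A minimal sketch with illustrative daily totals:

```python
# Burndown completion forecast: least-squares line over remaining work
# per day, projected to zero. The remaining-work series is illustrative.
def forecast_completion(remaining: list[float]) -> float:
    """Return the projected day index on which the trend line reaches zero."""
    n = len(remaining)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(remaining) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, remaining)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return -intercept / slope            # day where remaining work hits zero

print(round(forecast_completion([40, 36, 33, 28, 25]), 1))  # 10.5
```

Comparing the projected day against the sprint length is what turns the chart into an early on-track/at-risk signal.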
Cross-project dependency tracking — dependency map, critical path impact, risk assessment, resolution status
Milestone progress and forecast — baseline vs actual dates, variance analysis, completion probability
Risk monitoring dashboard — active risks, trigger status, trend analysis, escalation alerts
Scope creep detection — baseline vs current scope, change request analysis, impact assessment
Team morale, velocity, satisfaction — team health radar with trends and intervention recommendations
Velocity tracking and prediction — historical velocity, trend analysis, capacity planning forecast
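One common shape for the velocity prediction described above is a rolling average with an optimistic/pessimistic band; the sprint figures and the 3-sprint window below are illustrative assumptions.

```python
# Velocity forecast sketch: rolling average of the last N sprints with
# a band from the observed min/max. History values are illustrative.
def velocity_forecast(history: list[int], window: int = 3) -> dict:
    recent = history[-window:]
    return {
        "expected": sum(recent) / len(recent),
        "pessimistic": min(recent),
        "optimistic": max(recent),
    }

print(velocity_forecast([21, 25, 23, 30, 27]))
```

Dividing remaining backlog points by the expected velocity gives the capacity-planning forecast (sprints to completion) the command refers to.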
Alias for track-velocity
Traditional PM assessment — PMBOK alignment, phase-gate compliance, documentation quality
Shared configuration inherited by all PM agents. Not a standalone agent.
Agile transformation expert who assesses organizational maturity, coaches teams and leadership on agile principles, designs adoption roadmaps, and facilitates cultural change toward agility.
Internal and external audit preparation expert who designs audit programs, gathers evidence, conducts readiness assessments, and manages audit findings through remediation.
Benefits realization expert who tracks KPIs, measures ROI, monitors value delivery, and ensures projects achieve their intended business outcomes beyond just deliverable completion.
Budget baseline expert managing EVM calculations, cost tracking, variance analysis, and financial forecasting. Ensures projects stay within approved financial boundaries.
Meeting facilitation expert who designs and facilitates project ceremonies, retrospectives, workshops, and decision-making sessions with structured techniques and clear outcomes.
Change request management expert who processes change requests, conducts impact analysis, facilitates the Change Control Board (CCB), and ensures no uncontrolled changes enter approved baselines.
Stakeholder communication expert who designs reporting cadences, manages information flow, facilitates ceremonies, and ensures the right information reaches the right people at the right time.
Regulatory and contractual compliance expert who maps applicable regulations, maintains audit trails, validates compliance posture, and ensures projects meet legal and organizational requirements.
Business continuity and disaster recovery expert for projects who designs contingency plans, fallback strategies, and recovery protocols to ensure project resilience against disruptions.
Cost estimation expert specializing in parametric, analogous, and bottom-up estimation techniques with uncertainty ranges and confidence levels for project budgeting.
Project metrics and analytics expert who designs dashboards, analyzes project data trends, produces data-driven insights, and enables evidence-based project decision-making.
Senior delivery expert managing timeline, scope, resources, velocity, and burndown. Ensures the project stays on track, stakeholders stay informed, and teams deliver predictably.
CI/CD alignment expert who coordinates release cadence with PM processes, ensures deployment readiness, and bridges the gap between development velocity and project governance.
Narrative coherence and editorial quality expert who ensures all project documents maintain consistent voice, structure, and professional standards across the entire deliverable portfolio.
Earned Value Management specialist who calculates CPI/SPI, produces EVM dashboards, forecasts project financial outcomes, and provides early warning of cost and schedule variances.
Sponsor management expert who maintains executive alignment, manages escalation protocols, prepares sponsor briefings, and ensures sustained executive commitment to project success.
Financial analysis expert specializing in NPV, IRR, payback period, sensitivity analysis, and financial modeling to support project investment decisions and business case development.
Multi-format production expert who converts project deliverables into HTML, DOCX, XLSX, PDF, and presentation formats while maintaining branding consistency and accessibility standards.
Cross-project dependency and interface management expert who maps dependencies, manages integration points, coordinates release alignment, and prevents cascade failures.
Kanban flow expert who optimizes work-in-progress limits, cycle time, throughput, and flow efficiency. Designs and maintains Kanban boards and coaches teams on pull-based delivery.
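The WIP, throughput, and cycle-time levers this agent optimizes are tied together by Little's Law (average WIP = throughput x average cycle time). A minimal sketch with illustrative board numbers:

```python
# Little's Law: cycle time implied by current WIP and throughput.
# The board figures below are illustrative.
def implied_cycle_time(avg_wip: float, throughput_per_day: float) -> float:
    """Average cycle time in days implied by average WIP and daily throughput."""
    return avg_wip / throughput_per_day

# 12 items in progress at 1.5 finished items/day implies an 8-day cycle time
print(implied_cycle_time(12, 1.5))  # 8.0
```

This is why lowering WIP limits, with throughput held steady, is the standard lever for shortening cycle time on a Kanban board.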
Lessons learned and organizational memory expert who captures project knowledge, maintains knowledge repositories, facilitates knowledge sharing, and prevents repeat mistakes.
Methodology selection expert who evaluates project context to recommend the optimal framework (Agile, Traditional, Hybrid, SAFe), designs ceremonies, and ensures framework fidelity throughout the project lifecycle.
Team onboarding and knowledge transfer expert who designs onboarding programs, knowledge transfer protocols, and documentation standards to accelerate new team member productivity.
PMBOK 7th edition expert covering performance domains, project principles, tailoring guidance, and knowledge area integration for traditional and hybrid project management.
PMO setup and governance expert who designs PMO structures, governance models, maturity assessments, and standardized processes for organizational project management capability.
Portfolio optimization expert who analyzes project prioritization, resource allocation across projects, portfolio balance, strategic alignment, and cross-project dependencies.
PRINCE2 framework expert specializing in stage-based management, business case justification, exception management, and product-based planning for controlled project delivery.
Procurement expert managing RFPs, vendor evaluation, contract management, make-or-buy decisions, and procurement planning for projects requiring external goods or services.
Product backlog management expert who facilitates prioritization, writes user stories, maximizes value delivery, and serves as a bridge between stakeholders and the delivery team.
Impartial orchestrator that sequences phases, enforces gates, manages contracts, declares the expert committee, maintains the project plan and input registry. Does NOT analyze — only coordinates.
Validates every deliverable against acceptance criteria, catches inconsistencies between phases, and provides final quality sign-off before gate presentations.
QA processes and continuous improvement expert specializing in quality planning, quality control, Six Sigma tools, and process capability assessment for project deliverables.
Dashboard design and data storytelling expert who creates KPI visualizations, executive reports, and project dashboards that communicate project health and drive informed decisions.
Capacity planning and resource allocation expert who optimizes team utilization, resolves resource conflicts, manages resource leveling, and forecasts resource needs across the project lifecycle.
Quantitative risk analysis expert specializing in Monte Carlo simulation, decision trees, expected monetary value, sensitivity analysis, and probabilistic risk modeling for informed project decisions.
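The expected monetary value (EMV) technique this agent applies can be sketched as a probability-weighted outcome sum; here the outcomes are in effort-months rather than currency, in keeping with the no-prices rule, and the branch data is hypothetical.

```python
# EMV decision sketch: compare a certain mitigation cost against the
# probability-weighted cost of accepting a risk. Data is hypothetical.
def emv(branches: list[tuple[float, float]]) -> float:
    """branches = [(probability, outcome), ...]; probabilities should sum to 1."""
    return sum(p * outcome for p, outcome in branches)

# Mitigate now (certain cost) vs accept the risk (probabilistic cost):
mitigate = emv([(1.0, -4.0)])                 # -4 effort-months, guaranteed
accept = emv([(0.3, -15.0), (0.7, 0.0)])      # 30% chance of a -15 impact
print(mitigate, accept)  # -4.0 -4.5 -> mitigation is the cheaper expected path
```

Chaining such nodes is what builds the decision trees the agent description mentions.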
Quality, compliance, and evidence audit expert. Controls assumptions, validates evidence chains, manages the risk register, and ensures no deliverable passes a gate without rigorous validation.
SAFe framework expert specializing in PI Planning, Agile Release Trains, value streams, and scaled agile practices for large enterprise programs.
Safety-critical project expert specializing in FMEA, hazard analysis, safety integrity levels, and regulatory safety compliance for projects where failures can cause harm.
Schedule baseline expert who builds and maintains Gantt charts, identifies critical paths, manages dependencies, and produces schedule forecasts using CPM and Monte Carlo techniques.
WBS decomposition expert managing scope definition, scope management, and change control. Ensures every deliverable is traceable to approved scope and no uncontrolled changes enter the project.
Scrum framework expert who facilitates sprint ceremonies, coaches the team on Scrum values, manages the sprint backlog, and removes impediments to maintain sustainable delivery pace.
Stakeholder engagement expert who maps influence networks, designs engagement strategies, navigates organizational politics, and ensures stakeholder alignment throughout the project lifecycle.
Skills inventory and gap analysis expert who assesses team competencies, identifies skill gaps, designs training plans, and ensures the project has the right capabilities for delivery.
Team Topologies framework expert who designs team structures, applies Conway's Law, defines team interaction modes, and optimizes organizational design for effective project delivery.
Technical decision authority who ensures architecture alignment, technical feasibility, technology selection, and engineering standards throughout project delivery.
PM tools configuration expert who manages Jira, Azure DevOps, Monday.com, and other project management tools — workflows, automation, permissions, and integration with PM processes.
Vendor performance and relationship management expert who monitors SLAs, manages contract compliance, conducts vendor reviews, and maintains productive vendor partnerships.
Extreme Programming practices expert who coaches teams on pair programming, TDD, continuous integration, collective code ownership, and sustainable pace for high-quality software delivery.
WCAG 2.1/2.2 compliance assessment — a11y testing strategy, remediation priorities, inclusive design. Use when the user asks to "audit accessibility", "assess WCAG compliance", "evaluate a11y", "review inclusive design", or mentions screen readers, ARIA, color contrast, keyboard navigation.
Adoption strategy design producing communication plan, training roadmap, resistance management tactics, and reinforcement mechanisms. Use when the user asks to "design adoption strategy", "plan change adoption", "communication plan", "training needs analysis", "resistance management", "adoption roadmap", "change communication", or mentions "post-implementation adoption", "user onboarding strategy", "technology adoption plan".
Use when the user asks to "assess agile maturity", "evaluate agile practices", "run agile readiness check", "benchmark Scrum adoption", or "audit agile capabilities". Activates when a stakeholder needs to measure agile adoption level, evaluate Scrum maturity, diagnose agile anti-patterns, compare agile readiness across teams, or baseline agile capability before a transformation initiative.
Audits existing AI system architectures against best practices — structural integrity, AI quality attributes, pattern adherence, anti-pattern detection, security compliance, and technical debt inventory. This skill should be used when the user asks to "audit AI architecture", "review ML system quality", "assess AI technical debt", "evaluate AI compliance", "detect AI anti-patterns", "review AI security posture", or mentions AI architecture review, AI system assessment, AI quality audit, drift monitoring audit, or AI governance review.
Guides implementation of AI system architectures — technology selection, pipeline implementation, model serving setup, monitoring deployment, and CI/CD automation. This skill should be used when the user asks to "implement AI architecture", "build ML pipeline", "set up model serving", "deploy AI system", "implement MLOps", "configure drift monitoring", "set up feature store", or mentions AI implementation plan, ML infrastructure setup, model deployment guide, RAG implementation, or agent framework setup.
AI Center services discovery — AI readiness assessment using MetodologIA AI SCALE methodology, use case portfolio prioritization, data readiness evaluation, model inventory, AI governance assessment, infrastructure evaluation, MetodologIA AI product integration, and AI adoption roadmap. Use when the user asks to "assess AI readiness", "evaluate AI maturity", "AI discovery", "AI use case prioritization", "MLOps assessment", "AI governance evaluation", "AI adoption roadmap", "AI strategy assessment", "evaluate AI infrastructure", "AI product fit", or mentions "AI SCALE", "responsible AI", "AI pilots", "ML pipeline", "AI Center of Excellence", "LLM adoption", "generative AI strategy".
Concept of Operations (CONOPS) for AI systems — system vision, stakeholder mapping, AI-human interaction spectrum, business value assessment, success metrics, and operational modes. This skill should be used when the user asks to "define the AI operational concept", "map AI stakeholders", "design AI-human interaction levels", "assess AI business value", "define AI success metrics", "plan AI operational modes", or mentions CONOPS, IEEE 1362, AI autonomy levels, AI value matrix, or AI system vision.
AI-specific design patterns and system tactics — Feature Store, Champion-Challenger, Shadow Deployment, Drift Detection, Explainability Wrapper, Canary Deployment, Bulkhead, and traditional patterns adapted for AI. This skill should be used when the user asks to "select AI design patterns", "apply ML patterns", "design drift detection", "implement feature store", "plan shadow deployment", "design champion-challenger", "select availability tactics for AI", or mentions AI anti-patterns, maintainability tactics, fault recovery for models, or pattern selection for ML systems.
AI pipeline architecture design — development pipelines, production pipelines, data stores, model registry, CI/CD for AI, and non-functional requirements. This skill should be used when the user asks to "design AI pipelines", "architect ML pipelines", "select data stores for AI", "design model registry", "implement CI/CD for ML", "define AI pipeline requirements", or mentions MLOps, training pipeline, inference pipeline, feature pipeline, Blue and Gold deployment, or pipeline patterns.
Use when the user asks to "use AI for project management", "augment PM with AI", "implement predictive scheduling", "parse status with NLP", or "design ML risk models". Activates when a stakeholder needs to identify AI augmentation opportunities for PM, build predictive scheduling models, automate status report parsing with NLP, design intelligent resource allocation, or create a human-AI collaboration model for project governance.
AI software architecture design — modules, layers, boundaries, design patterns, ADRs, quality attributes, and technical debt strategy for AI-enabled systems. This skill should be used when the user asks to "design AI system structure", "define AI module boundaries", "select AI architecture patterns", "document AI architecture decisions", "evaluate AI code architecture", or mentions AI pipelines, feature stores, model serving, drift detection, ML quality attributes, explainability architecture, or AI technical debt.
Comprehensive testing strategy for AI systems — testing scope matrix (6 types x 6 layers), model prediction testing, data quality testing, compliance and fairness testing, integration approaches, and CI/CD test automation. This skill should be used when the user asks to "define AI testing strategy", "test ML models", "design data quality tests", "plan fairness testing", "test AI pipelines", "design integration tests for ML", or mentions adversarial testing, drift simulation, model regression testing, bias testing, explainability testing, or AI test automation.
Horizontal analysis of financial statements (P&L, balance sheet, cash flow statement, notes/annexes) with 2-period YoY comparison. Generates standardized executive reports for the Board of Directors and C-Level with absolute and relative variances, change drivers, alerts, and strategic recommendations. Use this skill WHENEVER the user mentions horizontal analysis, comparing financial statements, year-over-year variance, YoY, financial trend analysis, period comparison, financial evolution, year-on-year changes, financial delta, or attaches any financial statement and asks for a comparative analysis. Trigger: horizontal analysis, compare financial statements, YoY, year-over-year variance, financial evolution, financial delta, compare periods.
Analytics pipeline design — dbt-style transformations, data modeling, testing, documentation. Use when the user asks to "design analytics models", "set up dbt project", "plan data transformations", "define data contracts", "model star schema", or mentions staging models, marts, incremental strategies, or materializations.
API design & governance — REST/GraphQL/gRPC, versioning, rate limiting, DX, contract-first. Use when the user asks to "design an API", "define API strategy", "implement contract-first", "set up API governance", "design API versioning", "improve developer experience", or mentions REST, GraphQL, gRPC, AsyncAPI, OpenAPI, API gateway, rate limiting, or API catalog.
Target state (TO-BE) architecture design — C4 L2 containers, ADRs, nightmare scenario mitigations, MVP component, phased Strangler Fig migration. Use when the user asks to "design the target architecture", "create a TO-BE architecture", "plan a migration strategy", "define ADRs for a new system", "mitigate nightmare scenarios", or mentions Strangler Fig, C4 diagrams, saga pattern, anti-corruption layer, or legacy modernization.
Universal current-state assessment producing 10-section analysis for ANY MetodologIA service type. Use when the user asks to "analyze the codebase", "assess current architecture", "run AS-IS analysis", "technical audit", "evaluate tech debt", "code quality assessment", "assess current state", "service assessment", "QA maturity", "PMO assessment", "RPA readiness", "data maturity", "cloud readiness", "design maturity", "talent gap analysis", or mentions "Phase 1", "current state", "legacy system review", "technical health check".
Use when the user asks to "track assumptions", "document constraints", "log assumptions", "manage project assumptions", or "validate planning hypotheses". Activates when a stakeholder needs to create an assumption register, document project constraints, link assumptions to risks, establish assumption validation cadence, or audit planning hypotheses across the project lifecycle.
Use when the user asks to "identify PM automation candidates", "automate PM reporting", "reduce manual PM processes", "find automation quick wins", or "design workflow automation". Activates when a stakeholder needs to scan PM processes for automation potential, calculate automation ROI, design automation specifications, prioritize automation backlog, or plan phased automation rollout across the PMO.
Audits AWS AI/GenAI architectures against the Well-Architected GenAI Lens — operational excellence, security, reliability, performance, cost optimization, and sustainability. This skill should be used when the user asks to "audit AWS AI architecture", "review Bedrock configuration", "assess SageMaker security", "optimize AWS AI costs", "evaluate AWS GenAI compliance", "review AWS Well-Architected for AI", or mentions AWS AI audit, Bedrock audit, SageMaker review, AWS GenAI security assessment, or AWS AI cost optimization review.
Designs AWS cloud architectures for AI and GenAI workloads applying the Well-Architected Framework GenAI Lens (6 pillars: GENOPS, GENSEC, GENREL, GENPERF, GENCOST, GENSUS), AWS service selection matrices, RAG/Agent/Fine-Tuning patterns, cost optimization strategies, and enterprise reference architectures. Activated when designing, evaluating, or migrating AI systems on AWS.
Guides implementation of AI/GenAI architectures on AWS — Bedrock setup, SageMaker pipelines, OpenSearch vector stores, API Gateway configuration, security hardening, cost controls, and deployment automation. This skill should be used when the user asks to "implement AI on AWS", "set up Bedrock", "deploy SageMaker pipeline", "configure OpenSearch for RAG", "implement AWS AI security", "set up AWS AI monitoring", or mentions AWS AI deployment, Bedrock Knowledge Base setup, SageMaker endpoint deployment, AWS GenAI implementation, or AWS AI CI/CD pipeline.
Use when the user asks to "plan benefits realization", "define KPIs", "track success metrics", "establish benefits framework", or "measure project value delivery". Activates when a stakeholder needs to link deliverables to business outcomes, define measurable KPIs with targets, design post-project benefit tracking, create a benefits ownership matrix, or establish a sustainability plan for realized benefits.
BI and analytics service discovery — data maturity assessment (DCAM/DMM), dashboard landscape inventory, semantic layer evaluation, self-service analytics readiness, data literacy assessment, analytics use case portfolio, and BI transformation roadmap. Distinct from bi-architecture (design skill); this is the discovery/assessment for BI-as-a-service engagements. Use when the user asks to "assess BI maturity", "evaluate analytics capabilities", "dashboard inventory", "data literacy assessment", "semantic layer review", "self-service analytics readiness", "analytics use case prioritization", "BI transformation roadmap", or mentions BI-as-a-service, analytics maturity, dashboard consolidation, data democratization, DCAM, DMM, or data literacy.
BI solution design — semantic layers, dashboard patterns, self-service analytics, KPI frameworks. Use when the user asks to "design BI architecture", "build a KPI framework", "set up self-service analytics", "design dashboard hierarchy", "create a semantic layer", or mentions metric trees, drill-down patterns, or reporting strategy.
Generates branded DOCX (Word) documents using the MetodologIA Neo-Swiss Design System v6. Uses python-docx to create professional documents with navy headers, gold accents, Poppins headings, and Trebuchet MS body text. Use when the user requests a Word document, DOCX output, or when the ghost menu routes to DOCX.
Generates branded PPTX (PowerPoint) presentations using the MetodologIA Neo-Swiss Design System v6. Uses python-pptx to create slide decks with navy backgrounds, gold accents, Poppins titles, and Trebuchet MS body text. Use when the user requests a presentation, slide deck, PPTX output, or when the ghost menu routes to PPTX format.
Generates branded XLSX (Excel) spreadsheets using the MetodologIA Neo-Swiss Design System v6. Uses openpyxl to create professional spreadsheets with navy headers, gold accent rows, and semantic conditional formatting. Use when the user requests a spreadsheet, XLSX output, or when the ghost menu routes to XLSX.
Use when the user asks to "create a budget", "estimate costs", "define contingency reserves", "build cost breakdown structure", or "establish a cost baseline for EVM". Activates when a stakeholder needs to produce a cost baseline, aggregate bottom-up estimates, calculate contingency and management reserves, generate a time-phased budget with S-curve, or define cost accounts for earned value tracking.
Use when the user asks to "track budget", "monitor costs", "review budget variance", "check contingency burn", or "forecast remaining project costs". Activates when a stakeholder needs to analyze cost variances against baseline, monitor contingency reserve consumption, update budget forecasts, generate burn rate analysis, or produce corrective action recommendations for cost overruns.
Use when the user asks to "plan capacity", "forecast resource demand", "analyze resource availability", "match supply to demand", or "model resource scenarios". Activates when a stakeholder needs to analyze resource supply vs demand, identify capacity gaps, detect over-allocations, build time-phased capacity models, or plan proactive hiring and cross-training decisions before bottlenecks impact delivery.
Use when the user asks to "design ceremonies", "plan meeting cadence", "create facilitation guides", "define ceremony templates", or "optimize meeting calendar". Activates when a stakeholder needs to design a complete ceremony calendar, define time-boxes and agendas per ceremony, create facilitation guides, identify ceremony anti-patterns, or measure ceremony effectiveness across the project lifecycle.
Use when the user asks to "facilitate a ceremony", "run a retrospective", "lead sprint planning", "moderate a meeting", or "design facilitation techniques". Activates when a stakeholder needs facilitation guides for project ceremonies, engagement techniques for team workshops, conflict navigation protocols for heated discussions, anti-pattern recognition during ceremony execution, or ceremony effectiveness measurement.
Use when the user asks to "set up change control", "evaluate change requests", "manage scope changes", "establish CCB governance", or "process a change request". Activates when a stakeholder needs to establish a change control process, create change request templates, define CCB composition and decision criteria, evaluate change impact on scope/schedule/cost, or track change request trends across the project.
Organizational change readiness assessment producing readiness scorecard, resistance map, and intervention plan. Use when the user asks to "assess change readiness", "evaluate organizational readiness", "change impact analysis", "resistance mapping", "ADKAR assessment", "readiness scorecard", or mentions "Phase 5b", "adoption risk", "organizational capacity for change".
Interactive initialization CLI that configures the client environment, pre-populates discovery/, runs the G0 security scan, and prepares the context for discovery.
Use when the user asks to "audit PM tools visually", "inspect Jira configuration", "review Azure DevOps setup", "check Monday.com boards", or "evaluate tool configuration". Activates when a stakeholder needs to perform a visual audit of PM tool configurations, capture screenshot evidence of misconfigurations, compare tool setup against methodology best practices, identify workflow anti-patterns in PM tools, or produce a remediation roadmap for tool optimization.
Use when the user asks to "close the project", "generate closure report", "document final metrics", "perform administrative closure", or "obtain formal acceptance". Activates when a stakeholder needs to produce a project closure report, compare final actuals vs baseline, compile lessons learned, obtain formal sponsor acceptance, or execute administrative closure including resource release and documentation archiving.
Cloud migration planning — 7R assessment, workload classification, wave planning, cutover. Use when the user asks to "plan cloud migration", "assess workloads for migration", "design landing zone", "create migration waves", "plan cutover strategy", or mentions 7R, rehost, replatform, refactor, lift-and-shift, or migration factory.
Cloud-native design — containers, service mesh, serverless, multi-cloud, FinOps. Use when the user asks to "design cloud-native architecture", "containerize the application", "evaluate service mesh", "plan serverless migration", "implement multi-cloud strategy", "optimize cloud costs", or mentions Kubernetes, Istio, Docker, Helm, Terraform, FinOps, or 12-factor.
Cloud-as-a-Service discovery — cloud readiness assessment, DevOps maturity (DORA), cloud operations model, FinOps assessment, cloud security posture, and cloud services roadmap. Distinct from cloud-migration (which covers migration strategy); this covers Cloud as an ongoing service offering. Use when the user asks to "assess cloud operations", "evaluate DevOps maturity", "DORA assessment", "FinOps evaluation", "cloud security posture", "SRE maturity", "cloud operations model", "cloud service roadmap", or mentions cloud-as-a-service, platform engineering, toil reduction, FinOps, cloud cost optimization, or cloud operations.
Business model and value capture strategy — identifies optimal commercial structures for technology engagements beyond T&M. Use when the user asks to "define business model", "structure the deal", "identify value capture", "design pricing strategy", "explore commercial models", or mentions earned value, joint venture, revenue share, outcome-based, licensing model, or commercial structure.
Use when the user asks to "create a communication plan", "define communication matrix", "plan reporting cadence", "design stakeholder communications", or "establish escalation protocols". Activates when a stakeholder needs to design a communication matrix, define channel strategy, create reporting templates, establish escalation communication paths, or measure communication effectiveness across the project.
Competitive technical landscape analysis, technology differentiation assessment, build-vs-buy analysis, and market positioning evaluation. Use when the user asks to "analyze competition", "compare technology options", "build vs buy analysis", or mentions competitive matrix, differentiation map, or market positioning.
Regulatory and standards compliance assessment — GDPR, SOX, PCI-DSS, HIPAA, ISO 27001, NIST CSF. Use when the user asks to "evaluate compliance", "audit regulatory gaps", "assess GDPR readiness", "review PCI-DSS compliance", or mentions regulatory frameworks, data protection, compliance matrix.
Use when the user asks to "track compliance", "audit regulatory requirements", "verify compliance status", "prepare for regulatory audit", or "map compliance requirements". Activates when a stakeholder needs to catalog applicable regulations, map requirements to project activities, design evidence collection processes, track compliance gaps, or prepare documentation packages for external audits and certifications.
Use when the user asks to "resolve stakeholder conflict", "manage team conflict", "mediate disagreements", "navigate political disputes", or "de-escalate team tensions". Activates when a stakeholder needs to classify conflict types, apply resolution techniques, facilitate interest-based negotiation, build coalitions for alignment, or design structural prevention measures to avoid recurring conflicts.
Use when the user asks to "optimize context", "reduce token usage", "prune context window", "configure progressive loading", or "manage session state". Activates when a stakeholder needs to optimize context window usage, configure progressive MOAT loading levels, design intelligent pruning strategies, manage session state persistence, or implement token-efficient skill routing across the agent framework.
Use when the user asks to "plan contingencies", "create fallback plans", "define contingency reserves", "design trigger-response protocols", or "calculate schedule reserves". Activates when a stakeholder needs to develop fallback strategies for high-priority risks, calculate schedule and cost reserves from quantitative analysis, define trigger protocols for rapid contingency activation, or track reserve consumption over time.
Use when the user asks to "improve processes", "run a retrospective analysis", "implement kaizen", "optimize PDCA cycles", or "track improvement implementation". Activates when a stakeholder needs to identify improvement opportunities from project data, apply root cause analysis techniques, prioritize improvements by effort-impact ratio, implement PDCA cycles, or embed improvements into standard processes.
Persuasive writing for executive audiences — value propositions, calls to action, cost-of-inaction narratives, and compelling summaries. Use when generating executive summaries, pitch narratives, scenario value propositions, recommendation justifications, or any prose that must drive a decision.
Cost driver identification — effort inductors, scope drivers, magnitude estimation, team composition modeling, risk-adjusted timeline ranges, service engagement sizing, consulting effort, automation ROI, and staffing model. Use when the user asks to "estimate effort", "identify cost drivers", "size the project", "plan team composition", "identify effort inductors", or mentions WBS, sizing, contingency, burn rate, PERT, Monte Carlo, or "Phase 4" cost work. NEVER produces final prices — produces drivers, ranges, and magnitude indicators with costing disclaimers.
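The PERT three-point sizing this skill references reduces to two formulas; a minimal sketch, where the 20/30/52 person-day figures are purely illustrative:

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> tuple[float, float]:
    """Three-point PERT estimate: expected effort and standard deviation.

    Expected value weights the most-likely case 4x; the standard deviation
    approximates the spread as one sixth of the optimistic-pessimistic range.
    """
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: a workstream sized at 20 / 30 / 52 person-days
e, sd = pert_estimate(20, 30, 52)
print(round(e, 1), round(sd, 1))  # 32.0 5.3
```

Reporting the result as a range (e.g. expected ± 1 standard deviation) rather than a point value is what keeps the output a magnitude indicator instead of a price.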
Use when the user asks to "calculate cost of delay", "run WSJF analysis", "prioritize by economic value", "quantify delay impact", or "sequence work by value". Activates when a stakeholder needs to quantify the economic cost of delaying features, apply Weighted Shortest Job First prioritization, transform subjective prioritization into data-driven economic sequencing, or perform sensitivity analysis on priority rankings.
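The WSJF calculation this skill applies (SAFe-style) can be sketched in a few lines; the feature names and relative scores below are hypothetical:

```python
def wsjf(business_value: int, time_criticality: int, risk_reduction: int, job_size: int) -> float:
    """Weighted Shortest Job First: Cost of Delay components divided by job size.

    All inputs are relative scores (e.g. modified Fibonacci), not absolute currency.
    """
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

features = {
    "checkout-revamp": wsjf(8, 5, 3, 5),  # CoD 16 / size 5 = 3.2
    "audit-logging":   wsjf(3, 8, 8, 3),  # CoD 19 / size 3 ≈ 6.33
}

# Sequence work highest WSJF first
ranked = sorted(features, key=features.get, reverse=True)
print(ranked)  # ['audit-logging', 'checkout-revamp']
```

Because the scores are relative, the sensitivity analysis the skill mentions amounts to re-ranking after perturbing each component and checking whether the order flips.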
Use when the user asks to "convert skills to Cursor", "export to Codex", "convert to Gemini format", "port skills to another AI platform", or "create multi-platform skills". Activates when a stakeholder needs to convert MOAT skills from Claude Code format to Cursor rules, GitHub Codex AGENTS.md, Google Gemini system instructions, or other AI coding assistant formats while preserving skill logic and evidence protocols.
Use when the user asks to "configure dashboards", "set up data feeds", "design monitoring tools", "automate dashboard updates", or "integrate PM data sources". Activates when a stakeholder needs to configure PM dashboard tooling, set up automated data feeds from PM tools, design visualization components, configure alert thresholds, or establish dashboard refresh cadence and access control.
Data pipeline architecture — ingestion, orchestration, quality, lineage, SLAs. Use when the user asks to "design data pipelines", "architect ingestion", "set up orchestration", "plan data lake", "design lakehouse", or mentions Airflow, Dagster, CDC, data lineage, or pipeline SLAs.
Data governance framework — catalog, ownership, classification, retention, privacy compliance, data mesh. Use when the user asks to "build a data catalog", "define data ownership", "classify sensitive data", "design retention policies", "ensure privacy compliance", "implement data mesh governance", or mentions GDPR, CCPA, LGPD, data stewardship, PII, data lineage, or federated governance.
Data mesh readiness assessment and strategy using Zhamak Dehghani's 4 principles. Use when the user asks to "assess data mesh readiness", "design data mesh strategy", "domain data ownership", "data as a product", "self-serve data platform", "federated data governance", "data mesh migration", or mentions "data decentralization", "data domain ownership", "data product thinking".
Data quality framework — profiling, validation, anomaly detection, data contracts, SLA monitoring. Use when the user asks to "design data quality framework", "set up data contracts", "plan data validation", "detect data anomalies", "define data SLAs", or mentions data profiling, quarantine patterns, or remediation workflows.
ML/AI system design — model lifecycle, feature stores, experiment tracking, model serving, MLOps pipelines. Use when the user asks to "design an ML system", "architect model serving", "set up experiment tracking", "design feature store", "plan MLOps pipeline", or mentions model registry, A/B testing, drift detection, or retraining triggers.
Transforms metrics and findings into meaningful narratives — insight extraction, metrics-to-meaning conversion, comparison framing, and magnitude communication. Use when presenting scoring matrices, coverage metrics, performance data, cost estimates, or any quantitative finding that needs interpretation and context.
Visual data narrative design — chart selection, Mermaid diagram storytelling, visual hierarchy, dashboard narratives, and annotation strategy. Use when selecting chart types, designing diagram narratives, building visual sequences for presentations, or annotating data visualizations for maximum comprehension.
Database design — schema patterns, indexing, partitioning, replication, migration, performance tuning. Use when the user asks to "design the database schema", "plan indexing strategy", "set up replication", "partition large tables", "migrate database schema", "tune query performance", or mentions normalization, sharding, B-tree indexes, zero-downtime migration, or connection pooling.
Use when the user asks to "define Definition of Done", "set acceptance criteria", "establish DoD/DoR standards", "define quality standards", or "create completion checklists". Activates when a stakeholder needs to establish Definition of Done criteria at story/feature/release levels, create Definition of Ready checklists, design acceptance criteria templates, define exception handling processes, or plan DoD evolution protocols.
System and library dependency mapping, vulnerability scanning, upgrade risk assessment, and license compliance analysis. Use when the user asks to "map dependencies", "analyze dependency risk", "check license compliance", "assess upgrade risk", or mentions dependency graph, vulnerability scanning, or supply chain security.
Use when the user asks to "map dependencies", "visualize cross-project dependencies", "identify dependency risks", "detect circular dependencies", or "create dependency network diagrams". Activates when a stakeholder needs to catalog inter-project dependencies, visualize dependency networks, identify critical dependency chains, detect circular dependencies, or establish cross-project coordination protocols for dependency management.
MetodologIA branded design system — full-fidelity output templates for HTML, DOCX, XLSX, PPTX, and MD formats. Produces self-contained, accessible, production-ready deliverables in any format using the canonical MetodologIA Neo-Swiss Design System v6 tokens, components, and page templates. Use when generating branded outputs, converting between formats, creating HTML deliverables, building DOCX/PPTX/XLSX from markdown, or establishing brand compliance for any output.
Configurable design system for HTML deliverables with tokens, page structure, and component library. Use when the user asks to "apply design system", "generate styled HTML", "set up brand tokens", "configure brand colors", or mentions "design system", "design tokens", "component library", "brand config", "page template".
Developer experience (DX) platform assessment, inner loop optimization, toolchain evaluation, and onboarding friction analysis. Use when the user asks to "assess developer experience", "optimize inner loop", "evaluate toolchain", or mentions DX scorecard, developer productivity, or cognitive load reduction.
Use when the user asks to "align DevOps with PM", "bridge CI/CD with milestones", "integrate deployment pipelines with project tracking", "map DORA metrics to PM KPIs", or "design release-milestone binding". Activates when a stakeholder needs to map CI/CD pipelines to milestone tracking, align release cadences with sprint ceremonies, correlate DORA metrics with project KPIs, or design an integrated DevOps-PM operating model.
DevSecOps pipeline architecture — CI/CD design, shift-left security, supply chain integrity, release management, and compliance automation. Use when the user asks to "design the CI/CD pipeline", "integrate security into delivery", "set up SBOM and artifact signing", "automate compliance", "measure DORA metrics", or mentions SAST, SCA, DAST, secrets scanning, IaC scanning, canary deployment, or policy-as-code.
Program-level digital transformation discovery — digital maturity assessment, service portfolio mapping, program architecture, change readiness, multi-service integration, program governance, and transformation roadmap. Use when the user asks to "assess digital maturity", "plan digital transformation", "design transformation program", "evaluate change readiness", "map service portfolio", "program governance", "transformation roadmap", or mentions digital transformation, maturity assessment, multi-workstream, program architecture, change management, or transformation program.
DR/BCP planning — RPO/RTO definition, failover design, backup strategies, tabletop exercises. Use when the user asks to "plan disaster recovery", "define RPO/RTO", "design failover", "create BCP", or mentions business continuity, backup strategy, recovery runbook, tabletop exercise.
Discovery-to-execution handover — operational transition package, commercial activation, governance transfer, and Phase 1 kickoff plan. Use when the user asks to "create handover", "transition to operations", "prepare delivery handoff", "activate commercial proposal", "hand off discovery", "prepare operations package", "close discovery engagement", or mentions handover, transition, delivery kickoff, proposal preparation, or discovery close-out.
This skill should be used when the user asks to "run a discovery", "orchestrate the pipeline", "start a consulting engagement", "coordinate the dream team", "plan a discovery session", "manage discovery inputs", or mentions discovery orchestration, phase sequencing, quality gates, data contracts, expert committee, dream team, or consulting pipeline. Always use this skill as the entry point for any discovery engagement — it coordinates all other skills.
Use when the user asks to "run a project discovery retrospective", "review discovery outcomes", "assess discovery effectiveness", "calibrate pipeline parameters", or "measure discovery quality". Activates when a stakeholder needs to conduct a quantitative post-discovery review, measure pipeline execution quality, assess deliverable completeness, evaluate estimation accuracy, or update APEX pipeline parameters based on retrospective findings.
Doc-as-code strategy design, documentation taxonomy, content governance, and knowledge base architecture. Use when the user asks to "design documentation strategy", "build knowledge base", "create doc-as-code pipeline", or mentions documentation governance, content taxonomy, or technical writing standards.
Use when the user asks to "consult a methodology expert", "get methodology advice", "switch methodology perspective", "resolve a methodology debate", or "get framework-specific guidance". Activates when a stakeholder needs adaptive methodology guidance, framework-specific practice recommendations, methodology debate resolution, anti-pattern diagnosis and remediation, or contextual advice that shifts persona based on declared project methodology.
Context-adaptive industry expert that dynamically adopts the right SME lens based on client sector. Use when the user asks to "add industry context", "act as domain expert", "give me the banking/retail/health perspective", or mentions "SME", "subject matter expert", "industry lens", "sector analysis", "regulatory context".
Earned Value Management analysis — CPI, SPI, EAC forecasting, trend analysis, S-curve visualization. Use when the user asks to "run EVM analysis", "calculate CPI/SPI", "forecast EAC", "track earned value", "measure project performance", or mentions earned value management, CPI, SPI, EAC, ETC, TCPI, BAC, cost performance, schedule performance, variance analysis.
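The EVM indices listed above reduce to a handful of ratios; a minimal sketch with illustrative figures (the EAC variant shown assumes current cost efficiency persists):

```python
def evm_metrics(bac: float, pv: float, ev: float, ac: float) -> dict:
    """Core EVM indices plus a CPI-based EAC forecast.

    bac: Budget at Completion, pv: Planned Value, ev: Earned Value, ac: Actual Cost.
    """
    cpi = ev / ac    # cost efficiency: > 1 means under budget
    spi = ev / pv    # schedule efficiency: > 1 means ahead of plan
    eac = bac / cpi  # Estimate at Completion, assuming CPI holds
    etc = eac - ac   # Estimate to Complete
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": etc}

m = evm_metrics(bac=1_000_000, pv=400_000, ev=360_000, ac=450_000)
print({k: round(v, 2) for k, v in m.items()})
# CPI 0.8 and SPI 0.9: over budget and behind schedule; EAC forecasts 1.25M
```

Other EAC variants (e.g. `AC + BAC - EV` for atypical variances, or dividing by `CPI * SPI` when schedule pressure compounds cost) swap in the same way.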
Use when the user asks to "create engagement strategy", "plan stakeholder engagement", "design influence approach", "manage stakeholder resistance", or "build coalition support". Activates when a stakeholder needs to design targeted engagement strategies, move stakeholders from current to desired engagement levels, build champion coalitions, analyze and respond to resistance, or track engagement effectiveness over time.
Enterprise architecture alignment — capability mapping, domain decomposition, governance, technology radar, and strategic initiative roadmap. Use when the user asks to "map business capabilities", "build a technology radar", "define architecture governance", "prioritize strategic initiatives", "design team topologies", or mentions DDD domains, ARB, DORA metrics, maturity models, or target operating model.
Event-driven architecture — event catalog, schema registry, eventual consistency, saga, CQRS, event sourcing. Use when the user asks to "design event-driven system", "build event catalog", "implement CQRS", "design saga patterns", "set up schema registry", "implement event sourcing", or mentions Kafka, RabbitMQ, Pulsar, event bus, dead-letter queue, consumer groups, or event replay.
Execution tracking with 1-day sprints per developer, burndown charts (Atlassian-style), velocity tracking using the MetodologIA productivity model (1 FTE = 1 shippable feature/day from Sprint 2). Sprint 1 = onboarding. Produces burndown dashboards, velocity reports, and completion projections. Use when dimensioning execution effort, tracking delivery velocity, creating burndown projections, or when "burndown", "velocity", "daily sprints", "1 feature per day", or "execution tracking" is mentioned.
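The stated productivity model projects mechanically; a minimal sketch, assuming a single one-day onboarding sprint and the 40-feature / 5-FTE figures as illustrative inputs:

```python
import math

def completion_projection(total_features: int, ftes: int) -> int:
    """Working days to completion under the MetodologIA model:
    Sprint 1 (one day) is onboarding with no shippable output;
    from Sprint 2 onward each FTE ships 1 feature per day."""
    onboarding_days = 1
    delivery_days = math.ceil(total_features / ftes)
    return onboarding_days + delivery_days

print(completion_projection(total_features=40, ftes=5))  # 9 working days
```

A burndown projection is then just the remaining-feature count decremented by `ftes` per day after the onboarding day, which is what the dashboard plots against actuals.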
Use when the user asks to "prepare executive summary", "brief the sponsor", "create sponsor update", "write C-level presentation", or "produce steering committee report". Activates when a stakeholder needs to produce decision-focused executive briefings, distill complex project data into 5-minute reads, present RAG status with strategic alignment, frame decisions with options and recommendations, or prepare steering committee materials.
Use when the user asks to "create executive dashboard", "build C-level view", "design KPI dashboard", "produce management dashboard", or "configure portfolio health view". Activates when a stakeholder needs to design a C-level dashboard showing project/portfolio health, select and configure KPI visualizations, create drill-down capability for areas of concern, or establish dashboard refresh cadence and governance.
C-level executive pitch with financial modeling and persuasion architecture. Use when the user asks to "create a pitch", "build a business case", "justify the investment", "present to executives", "ROI analysis", "executive summary", or mentions "C-level presentation", "budget approval", "NPV", "IRR", "payback period", "business case", "Phase 5b".
Use when the user asks to "build a business case", "calculate NPV", "analyze ROI", "run cost-benefit analysis", or "produce financial justification for a project". Activates when a stakeholder needs to produce a financial business case with NPV/IRR/payback analysis, build discounted cash flow models, perform sensitivity analysis on key assumptions, model best/most-likely/worst-case scenarios, or present go/no-go financial recommendations.
Cloud financial operations assessment and strategy using FinOps Foundation framework. Use when the user asks to "assess cloud costs", "optimize cloud spending", "FinOps assessment", "cloud cost analysis", "rightsizing", "reservation strategy", "showback/chargeback model", "cloud unit economics", "cost allocation", or mentions "cloud financial management", "cost optimization", "FinOps maturity".
DDD domain taxonomy + 8-12 end-to-end business flows with narrative specifications, process mapping, service flow documentation, and operational flow tracing. Use when the user asks to "map flows", "document business processes", "trace integrations", "identify failure points", "domain mapping", "DDD analysis", or mentions "Phase 2", "flow mapping", "integration matrix", "dependency graph", "swimlane diagrams", "business process documentation".
Comprehensive functional specification with use cases, business rules, and complexity/risk matrix, service specification, deliverable specification, and engagement spec. Use when the user asks to "write functional specs", "document use cases", "define business rules", "create requirements", "specification document", or mentions "Phase 5a", "functional specification", "MVP scope", "acceptance criteria", "use cases", "business rules".
Functional analysis toolkit with 6 tools for requirements engineering. Use when the user asks to "run event storming", "create a story map", "extract business rules", "write acceptance criteria", "build traceability matrix", "detect anti-patterns", or mentions "Given/When/Then", "functional toolbelt", "requirements quality".
Use when the user asks to "request funding", "justify budget", "prepare investment proposal", "build capital request", "draft funding justification", or mentions funding request, budget justification, capital request, investment proposal. Triggers on: prepares an investment case, drafts a budget approval package, justifies project funding, builds a capital expenditure request, creates a funding drawdown schedule.
Generative AI architecture — RAG patterns, LLM orchestration, multi-model tiering, agent workflow design, vector database architecture, knowledge connectors, and GenAI quality assurance. This skill should be used when the user asks to "design RAG architecture", "architect LLM system", "select vector database", "design AI agents", "implement knowledge retrieval", "plan GenAI quality", or mentions RAG, embeddings, vector search, LLM orchestration, agent framework, context-aware generation, hallucination reduction, or multi-model routing.
Use when the user asks to "define governance", "create governance model", "set up escalation paths", "design authority matrix", "establish decision framework", or mentions project governance, steering committee, decision framework, authority levels, escalation matrix. Triggers on: builds a governance charter, designs escalation paths, defines decision-making authority, creates steering committee structure, maps authority levels.
Use when the user asks to "assess hybrid methodology readiness", "evaluate hybrid approach", "check hybrid methodology maturity", "measure integration capability", "diagnose water-scrum-fall", or mentions hybrid assessment, hybrid readiness, mixed methodology evaluation, iterative-sequential integration maturity. Triggers on: evaluates hybrid methodology maturity, detects hybrid anti-patterns, scores integration capability, assesses dual-governance readiness, produces hybrid adoption roadmap.
Use when the user asks to "design a hybrid approach", "combine agile and waterfall", "create hybrid methodology", "integrate iterative and sequential delivery", "build adaptive lifecycle", or mentions hybrid PM, water-scrum-fall, bimodal, agile-traditional blend, adaptive lifecycle. Triggers on: designs a hybrid methodology, maps components to delivery approaches, creates interface agreements between agile and waterfall, unifies governance across methodologies, blends iterative and predictive planning.
Use when the user asks to "test a hypothesis", "validate assumptions through delivery", "run experiment-driven project", "design build-measure-learn cycles", "validate project assumptions", or mentions hypothesis-driven delivery, HDD, validated learning, experiment design, build-measure-learn. Triggers on: converts assumptions into testable hypotheses, designs minimum viable experiments, facilitates pivot-or-persevere decisions, documents validated learning, ranks hypotheses by risk and impact.
Hypothesis-Driven Development (HDD) framework for structuring modernization proposals as testable hypotheses with Lean Startup cycles (Build-Measure-Learn). Transforms features into hypotheses with metrics, experiments, and kill/pivot/persevere thresholds. Use when formulating scenarios as hypotheses, designing validation experiments, applying Lean Startup to discovery, or when "HDD", "hypothesis", "lean startup", "build-measure-learn", "experiment", "kill/pivot/persevere", or "hypothesis validation" is mentioned.
Incident response framework — severity classification, escalation paths, postmortem templates. Use when the user asks to "design incident process", "define severity levels", "create escalation paths", "build postmortem template", or mentions incident response, on-call, war room, blameless postmortem.
Infrastructure and platform architecture — compute, network, storage, HA/DR, IAM, cloud landing zones, and cost optimization. Use when the user asks to "design cloud infrastructure", "plan network topology", "define HA/DR strategy", "set up cloud landing zones", "optimize cloud costs", or mentions VPC, Kubernetes, serverless, multi-AZ, IAM, reserved instances, or chaos testing.
Use when the user asks to "analyze project inputs", "process documents", "extract requirements", "review project brief", "parse RFP content", or mentions input processing, document analysis, requirement extraction, project brief analysis. Triggers on: analyzes project input documents, extracts structured requirements from briefs, detects contradictions in source documents, normalizes project inputs for planning, produces input completeness scorecard.
System integration patterns — point-to-point, ESB, iPaaS, event mesh, API contract management, data mapping. Use when the user asks to "design integrations", "map system connections", "define API contracts", "plan event-driven integration", or mentions ESB, iPaaS, MuleSoft, API gateway, event mesh, data mapping.
Use when the user asks to "plan integration", "map cross-project dependencies", "define interface agreements", "coordinate between projects", "manage cross-team dependencies", or mentions integration management, cross-project coordination, interface contracts. Triggers on: maps integration points between components, defines interface data contracts, creates dependency matrices, designs cross-project coordination protocols, produces integration verification checklists.
Use when the user asks to "track issues", "manage project issues", "resolve blockers", "create issue log", "remove impediments", or mentions issue tracking, issue resolution, blocker management, impediment removal, issue escalation. Triggers on: creates issue tracking workflow, assigns issue resolution owners, enforces resolution SLAs, captures root cause analysis, produces issue trend analysis.
Use when the user asks to "configure Jira", "set up Azure DevOps", "design PM tool workflows", "create board configuration", "map tool to methodology", or mentions Jira configuration, Azure DevOps setup, PM tool setup, workflow design, board configuration. Triggers on: designs PM tool project structure, creates workflow state machines, configures board columns and swimlanes, maps methodology ceremonies to tool features, produces tool user guides.
Use when the user asks to "assess Kanban maturity", "evaluate Kanban practices", "check flow efficiency", "measure WIP discipline", "diagnose Kanban health", or mentions Kanban assessment, Kanban maturity, flow metrics evaluation, WIP limit assessment, Kanban readiness. Triggers on: scores Kanban maturity against KMM levels, evaluates flow health metrics, assesses WIP limit enforcement, detects Kanban anti-patterns, produces evolutionary improvement roadmap.
Use when the user asks to "design a Kanban board", "set WIP limits", "improve flow", "measure lead time", "optimize throughput", or mentions Kanban, flow metrics, cumulative flow diagram, pull system, WIP limits, cycle time. Triggers on: designs Kanban board layout, calculates initial WIP limits, defines pull policies, establishes flow measurement framework, produces Kanban system design document.
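The flow metrics this skill establishes are tied together by Little's Law, which also gives a defensible starting point for the initial WIP limit; a minimal sketch with illustrative numbers:

```python
def avg_cycle_time(avg_wip: float, throughput_per_day: float) -> float:
    """Little's Law: average cycle time = average WIP / average throughput."""
    return avg_wip / throughput_per_day

def initial_wip_limit(target_cycle_days: float, throughput_per_day: float) -> int:
    """Invert Little's Law: the WIP ceiling implied by a target cycle time."""
    return round(target_cycle_days * throughput_per_day)

# A board holding ~12 items with 3 items finishing per day averages 4 days cycle time.
print(avg_cycle_time(12, 3))       # 4.0
# To hit a 2-day target at the same throughput, cap WIP at 6.
print(initial_wip_limit(2, 3))     # 6
```

The law holds only for a stable system over a long-enough window, so the skill's flow measurement framework should validate the assumption (e.g. via a cumulative flow diagram) before enforcing the derived limits.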
Use when the user asks to "prepare kickoff", "create kickoff deck", "plan team alignment", "design project launch meeting", "build team charter", or mentions kickoff package, project kickoff, team alignment meeting, ground rules, team charter. Triggers on: creates kickoff presentation deck, designs team alignment agenda, facilitates ground rules agreement, produces communication quick-reference, compiles team charter from kickoff outcomes.
Use when the user asks to "implement LeSS", "set up Nexus", "scale Scrum for multiple teams", "coordinate multi-team delivery", "unify product backlog across teams", or mentions LeSS, Nexus, multi-team Scrum, cross-team coordination, integrated increment. Triggers on: designs multi-team Scrum scaling, configures shared product backlog, establishes cross-team coordination events, creates integration strategy for 2-8 teams, produces scaling metrics dashboard.
Use when the user asks to "capture lessons learned", "document project lessons", "build lessons register", "create knowledge base from project experience", "extract reusable insights", or mentions lessons learned, knowledge capture, lessons register, project learning, organizational memory. Triggers on: captures lessons from retrospectives, categorizes lessons by domain, creates searchable lessons register, distributes knowledge to future projects, rates lesson impact.
Management and consulting discovery — PMO maturity assessment, methodology fitness evaluation, team capability analysis, governance model assessment, delivery performance baseline, Factor WOW assessment, and management transformation roadmap. Use when the user asks to "assess PMO maturity", "evaluate project management practices", "management discovery", "methodology assessment", "governance evaluation", "delivery performance analysis", "Factor WOW assessment", "management transformation", "agile maturity", "SAFe assessment", "PMO setup", or mentions "Disciplined Agile", "delivery excellence", "management consulting", "project governance", "ceremony health".
Use when the user asks to "assess PM maturity", "evaluate project management capability", "run OPM3 assessment", "check P3M3 level", "benchmark organizational PM capability", or mentions PM maturity, organizational PM maturity, OPM3, P3M3, project management maturity model, PM capability assessment. Triggers on: scores PM maturity against established frameworks, produces capability heat maps, identifies improvement priorities, creates strategic maturity roadmap, benchmarks against industry standards.
Mentoring and training discovery — capability assessment, learning path design, knowledge transfer planning, training delivery model, measurement framework, and training roadmap. Use when the user asks to "assess training needs", "design learning paths", "plan knowledge transfer", "evaluate mentoring program", "training gap analysis", "capability assessment", "upskilling plan", or mentions "training discovery", "mentoring readiness", "talent development", "MetodologIA University".
This skill should be used when the user asks to "create diagrams", "generate Mermaid", "visualize architecture", "diagram flows", "draw a sequence diagram", "create a C4 diagram", "add visual diagrams", or mentions diagramming, visualization, flowcharts, sequence diagrams, Mermaid syntax, architecture diagrams, or visual documentation. Use this skill to embed precise, syntactically valid Mermaid diagrams in any discovery deliverable.
Use when the user asks to "assess methodology fit", "select PM methodology", "evaluate agile vs waterfall", "determine best approach", "score methodology options", or mentions methodology selection, framework comparison, agile readiness, approach evaluation. Triggers on: evaluates project characteristics against methodology criteria, produces weighted scoring matrix, recommends best-fit methodology with confidence level, identifies organizational readiness gaps, generates tailoring guidance.
Use when the user asks to "create a methodology playbook", "define project ceremonies", "design cadences and rituals", "build a Definition of Done", "operationalize methodology", or mentions methodology playbook, ceremony design, cadence definition, methodology selection, DoD, project rituals. Triggers on: codifies methodology into actionable playbook, designs ceremonies with agendas and durations, creates Definition of Done per deliverable type, maps roles to ceremonies, produces ceremony calendar.
Detailed migration execution guide — strangler fig, parallel run, big bang, rollback procedures, data migration. Use when the user asks to "plan migration", "design cutover", "build migration playbook", "define rollback strategy", or mentions strangler fig, parallel run, data migration, legacy modernization.
Mini Apps and Low-Code discovery — citizen developer readiness, platform assessment (Power Platform, OutSystems, Mendix, Retool), use case identification and prioritization, governance model, integration architecture, and low-code adoption roadmap. Use when the user asks to "evaluate low-code platforms", "assess citizen developer readiness", "mini apps strategy", "Power Platform assessment", "low-code governance", "no-code evaluation", "automation apps discovery", or mentions "citizen development", "low-code adoption", "mini apps".
Mobile app architecture — native vs cross-platform, offline-first, state management, release management. Use when the user asks to "design mobile architecture", "choose between native and cross-platform", "implement offline-first", "plan mobile CI/CD", "optimize app performance", or mentions Flutter, React Native, KMP, MVVM, SwiftUI, Jetpack Compose, or app store deployment.
AS-IS assessment for mobile apps — performance, compliance, dependency health, UX metrics. Use when the user asks to "assess the mobile app", "audit app health", "review app dependencies", "check app store compliance", "measure app performance", or mentions crash rate, ANR, app size, cold start time, or mobile tech debt.
Unified mobile platform assessment — merges former mobile-architecture and mobile-assessment into one skill. Covers cross-platform vs native strategy, store compliance, app vitals, architecture patterns, offline-first design, performance optimization, dependency health, and remediation roadmaps. Use when the user asks to "assess mobile architecture", "evaluate mobile platform", "audit app health", "choose between native and cross-platform", "check store compliance", "optimize mobile performance", "review app dependencies", or mentions Flutter, React Native, KMP, MVVM, crash rate, ANR, app size, cold start time, or mobile tech debt.
Use when the user asks to "run Monte Carlo", "simulate schedule risk", "probabilistic cost analysis", "confidence intervals", "forecast completion probability", or mentions Monte Carlo simulation, probabilistic analysis, schedule confidence, cost confidence. Triggers on: executes probabilistic schedule simulation, generates cost confidence curves, calculates contingency reserves from P-values, identifies sensitivity drivers via tornado diagram, produces S-curves with confidence levels.
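The P50/P80/P95 confidence levels this entry refers to can be sketched with a minimal Monte Carlo pass over three-point task estimates. Everything below — the function name, the triangular distribution, the task tuples, and the iteration count — is an illustrative assumption, not the skill's actual simulation engine:

```python
import random

def simulate_schedule(tasks, iterations=10_000, seed=42):
    """Monte Carlo pass over (optimistic, most-likely, pessimistic) estimates in days."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, mode) for (low, mode, high) in tasks)
        for _ in range(iterations)
    )
    # A P-value is the duration not exceeded in that fraction of simulated runs.
    pct = lambda p: totals[int(p / 100 * iterations) - 1]
    return {"P50": pct(50), "P80": pct(80), "P95": pct(95)}

# Three illustrative tasks, each as (optimistic, most likely, pessimistic).
levels = simulate_schedule([(3, 5, 10), (8, 12, 20), (2, 4, 6)])
```

The gap between P50 and P80 is one common way to size the contingency reserve the entry mentions.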
Deep feasibility validation across 7 dimensions by a Council of Seven Sages (Think Tank). Postdoctoral-level research rigor applied to scenario validation. Validates technical claims, quantitative assumptions, systemic risks, technology maturity, infrastructure limits, integration feasibility, and economic viability. Use when validating approved scenarios before roadmap commitment, when stakeholders need confidence in technical achievability, or when "Phase 3b" / "feasibility" / "think tank" / "7 sabios" is mentioned.
Observability architecture — logging, tracing, metrics, alerting, SLO/SLI, incident response. Use when the user asks to "design observability", "set up monitoring", "implement tracing", "configure alerting", "define SLOs", "design incident response", or mentions OpenTelemetry, Prometheus, Grafana, ELK, correlation IDs, burn rate, runbooks.
Use when the user asks to "create onboarding plan", "plan knowledge transfer", "design team onboarding", "reduce ramp-up time", "capture institutional knowledge", or mentions onboarding playbook, knowledge transfer, new team member ramp-up, team integration. Triggers on: creates role-specific onboarding paths, designs knowledge transfer sessions, establishes buddy system, defines ramp-up milestones, captures institutional knowledge for preservation.
Use when the user asks to "manage positive risks", "exploit opportunities", "enhance project benefits", "capture upside potential", "optimize project outcomes", or mentions positive risk, opportunity exploitation, opportunity enhancement, upside risk management. Triggers on: identifies upside potential in project execution, applies exploit/share/enhance/accept strategies, quantifies opportunity value, integrates opportunity actions into project plan, tracks opportunity realization.
Use when the user asks to "plan change management", "implement ADKAR", "manage organizational change", "plan adoption", "design resistance management", or mentions OCM, organizational change management, ADKAR, change readiness, adoption planning. Triggers on: designs ADKAR-based change interventions, assesses change readiness, creates communication campaigns, builds training plans, manages resistance through structured interventions, measures adoption KPIs.
Use when the user asks to "produce deliverables", "convert formats", "generate multi-format output", "apply naming conventions", "manage deliverable pipeline", or mentions output engineering, format conversion, deliverable production, markdown to HTML, multi-format generation. Triggers on: converts markdown to branded HTML, applies evidence tagging to deliverables, enforces naming conventions, manages version tagging (WIP/Aprobado), produces deliverables in multiple formats simultaneously.
Performance assessment — load testing, capacity planning, bottleneck analysis, caching, CDN, SLAs. Use when the user asks to "analyze performance", "design load tests", "plan capacity", "optimize caching", "configure CDN", "define SLAs", "find bottlenecks", or mentions latency, throughput, p95, saturation, cache hit ratio, edge compute.
Discovery pipeline governance — phase gate management, resource orchestration, dependency control, and proposal QA validation across the entire discovery pipeline. Replaces the former project-program-management skill (not generic PM; specific to the discovery pipeline). Use when the user asks to "track the discovery", "govern the pipeline", "validate the proposal", "run governance check", "check phase dependencies", "coordinate resources", or mentions pipeline governance, phase gates, proposal readiness, milestone tracking, or cross-phase dependency management. Works as the structural glue that holds the entire discovery pipeline together — from Phase 0 through Handover.
Use when the user asks to "assess PMO effectiveness", "evaluate PMO value", "review PMO structure", "measure PMO impact", "audit PMO services", or mentions PMO assessment, PMO evaluation, PMO capability review, PMO performance assessment, PMO value assessment. Triggers on: evaluates PMO operating model effectiveness, measures stakeholder value perception, assesses PMO service catalog maturity, quantifies PMO impact on project success rates, produces PMO transformation roadmap.
Use when the user asks to "check PMO health", "run PMO health check", "diagnose PMO performance", "audit PMO operations", "measure PMO KPIs", or mentions PMO health, PMO diagnostics, PMO pulse check, PMO operational review, PMO internal audit. Triggers on: conducts 8-dimension PMO health check, compiles RAG health dashboard, identifies corrective actions for underperforming dimensions, tracks quarter-over-quarter trends, produces PMO operational improvement plan.
Use when the user asks to "assess PMO maturity", "evaluate PM maturity", "run OPM3 assessment", "P3M3 assessment", "benchmark PMO capability", or mentions PMO maturity model, organizational PM maturity, capability maturity, OPM3, P3M3. Triggers on: scores PMO maturity against OPM3 or P3M3 frameworks, produces maturity radar charts, identifies improvement priorities by strategic impact, designs multi-year maturity roadmap, estimates improvement investment in FTE-months.
Use when the user asks to "set up a PMO", "design PMO", "create PMO charter", "implement project management office", "define PMO operating model", or mentions PMO design, project management office, PMO operating model, PMO governance, PMO implementation. Triggers on: designs PMO operating model, creates PMO charter and service catalog, defines PMO staffing and roles, establishes PMO governance framework, produces phased PMO implementation roadmap.
Use when the user asks to "run a PoC", "prototype a solution", "test a tool", "evaluate methodology feasibility", "compare vendor options", or mentions proof of concept, PoC, prototype, tool evaluation, methodology pilot, controlled experiment. Triggers on: designs controlled PoC experiments, defines measurable success criteria, creates evaluation frameworks for tool comparison, facilitates evidence-based go/no-go decisions, documents scale-up risks.
Use when the user asks to "assess portfolio management maturity", "evaluate portfolio governance", "review portfolio practices", "benchmark portfolio capability", "score portfolio management", or mentions portfolio assessment, portfolio maturity, portfolio management capability, portfolio governance evaluation. Triggers on: assesses portfolio management maturity across 6 dimensions, evaluates strategic alignment effectiveness, reviews prioritization model quality, quantifies portfolio governance gaps, produces portfolio improvement roadmap.
Use when the user asks to "create portfolio dashboard", "report portfolio status", "generate portfolio heatmap", "build executive portfolio view", "aggregate project metrics", or mentions portfolio reporting, portfolio view, portfolio metrics, multi-project dashboard. Triggers on: aggregates project health into portfolio heatmap, produces resource utilization views, creates budget rollup summaries, visualizes risk concentration across portfolio, generates governance action items for steering committee.
Use when the user asks to "prioritize projects", "score portfolio", "rank investments", "build scoring model", "optimize portfolio mix", or mentions portfolio prioritization, scoring models, strategic alignment scoring, portfolio ranking, investment prioritization. Triggers on: builds weighted scoring models for project prioritization, calculates efficient frontier for portfolio optimization, runs sensitivity analysis on rankings, facilitates data-driven investment decisions, produces ranked portfolio with transparent scoring.
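A weighted scoring model of the kind this entry describes can be sketched in a few lines. The criterion names, the 1-5 rating scale, and the sample projects below are hypothetical placeholders, assumed only for illustration:

```python
def rank_portfolio(projects, weights):
    """Rank projects by weighted criterion scores (ratings on a 1-5 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "criterion weights must sum to 1"
    scored = [
        (name, sum(ratings[c] * w for c, w in weights.items()))
        for name, ratings in projects.items()
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

weights = {"strategic_fit": 0.40, "value": 0.35, "risk_balance": 0.25}
projects = {
    "CRM upgrade":   {"strategic_fit": 4, "value": 3, "risk_balance": 2},
    "Data platform": {"strategic_fit": 5, "value": 4, "risk_balance": 3},
}
ranking = rank_portfolio(projects, weights)  # "Data platform" tops the list
```

Publishing the weights alongside the ranking is what makes the scoring "transparent" in the sense the entry uses.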
Use when the user asks to "assess portfolio risk", "aggregate project risks", "analyze portfolio risk exposure", "detect risk concentration", "model systemic risk", or mentions portfolio risk, aggregated risk, risk concentration, systemic risk, portfolio risk management. Triggers on: aggregates risk exposure across project portfolio, identifies correlated risks across projects, detects vendor/technology/resource concentration, models portfolio-level risk scenarios, produces portfolio risk heatmap for governance.
Use when the user asks to "forecast project completion", "predict cost overrun", "model risk probability", "run Monte Carlo on schedule", "generate confidence intervals", or mentions predictive analytics, ML forecasting, schedule prediction, cost forecasting, risk materialization prediction. Triggers on: produces probabilistic schedule forecasts, calculates cost-at-completion with confidence ranges, models risk materialization probability, identifies early warning indicators, generates P50/P80/P95 confidence intervals.
Use when the user asks to "plan procurement", "make-or-buy analysis", "generate RFP", "evaluate vendors", "define vendor criteria", "select contract type", or mentions procurement, sourcing, contract types, vendor selection, outsourcing decisions. Triggers on: produces make-or-buy decision matrices, drafts RFP templates with evaluation scorecards, recommends contract types per procurement item, creates procurement timelines, designs vendor evaluation criteria.
Product roadmap prioritization, backlog strategy, value stream mapping, product-market fit validation. Use when the user asks to "define product roadmap", "prioritize backlog", "map value streams", "validate product-market fit", or mentions product vision, RICE scoring, opportunity trees, dual-track agile.
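The RICE scoring mentioned above reduces to a single formula; the parameter scales noted in the comments are the conventional ones, and the sample values are purely illustrative:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE prioritization: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Conventional scales: reach = users affected per quarter; impact = 0.25-3;
# confidence = 0-1; effort = person-months.
score = rice_score(reach=2000, impact=2, confidence=0.8, effort=4)  # 800.0
```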
Use when the user asks to "manage a program", "coordinate multiple projects", "track program benefits", "align program governance", "consolidate program risks", or mentions program management, multi-project coordination, program benefits, program governance, cross-project dependencies, benefits realization tracking.
Use when the user asks to "create a project charter", "define project objectives", "build a business case", "document success criteria", "formalize project authorization", or mentions charter, project initiation, sponsor approval, project justification, SMART objectives, project kickoff document.
Use when the user asks to "check project health", "run health assessment", "evaluate project status", "generate RAG scorecard", "diagnose project problems", or mentions health check, project diagnostics, RAG status, project vital signs, project wellness, leading indicator assessment.
This skill should be used when the user asks to "run a project pipeline", "orchestrate PM workflow", "start a project engagement", "coordinate the PM team", "plan a project lifecycle", "manage project inputs", "sequence project phases", or mentions project orchestration, phase sequencing, quality gates, data contracts, expert committee, PMO pipeline, consulting engagement. Always use this skill as the entry point for any PMO-APEX engagement.
PMO governance backbone — portfolio tracking, phase gate management, resource orchestration, dependency control, and proposal QA validation across the entire discovery pipeline. Use when the user asks to "track the discovery", "manage the portfolio", "validate the proposal", "run governance check", "check phase dependencies", "coordinate resources", or mentions PMO, program management, portfolio governance, phase gates, proposal readiness, milestone tracking, or cross-phase dependency management. Works as the structural glue that holds the entire discovery pipeline together — from Phase 0 through Handover.
Use when the user asks to "evaluate project feasibility", "decide go/no-go", "assess project viability", "screen project proposals", "prioritize project investments", or mentions project selection, feasibility gate, go/no-go decision, project screening, investment gate, weighted scoring model.
QA-as-a-Service discovery — quality maturity assessment (TMMi), test coverage analysis, tool landscape evaluation, PITT methodology alignment, team composition modeling, test factory design, and QA transformation roadmap. Use when the user asks to "assess QA maturity", "evaluate testing practices", "QA service discovery", "test factory design", "TMMi assessment", "QA transformation", "testing maturity evaluation", "PITT methodology", "QA team composition", "test automation assessment", "quality engineering assessment", or mentions "independent testing", "QA-as-a-Service", "test industrialization", "ISTQB".
Use when the user asks to "audit quality", "verify compliance", "review quality processes", "inspect deliverable conformance", "check regulatory adherence", or mentions quality audit, compliance verification, process audit, quality review, non-conformance assessment, corrective action planning.
Strategic quality engineering framework covering test strategy, automation architecture, quality gates, metrics, and shift-left practices. Use when the user asks to "design test strategy", "plan quality gates", "set up test automation", "assess quality maturity", "define quality metrics", or mentions "test pyramid", "shift-left", "CI/CD quality", "automation architecture", "quality engineering".
Use when the user asks to "create a quality plan", "define QA processes", "establish quality metrics", "design quality control activities", "set acceptance criteria", or mentions quality management, QA plan, quality assurance, quality control, quality standards, continuous quality improvement.
Use when the user asks to "create a RACI matrix", "define responsibilities", "assign decision rights", "clarify roles", "map accountability", or mentions RACI, responsibility assignment, accountability matrix, decision rights, RASCI, role ambiguity resolution, authority mapping.
Release management approach design, deployment pattern selection (blue-green, canary, rolling), and rollback procedure definition. Use when the user asks to "design release strategy", "define deployment patterns", "plan rollback procedures", or mentions trunk-based development, GitFlow, feature flags, or CI/CD pipeline strategy.
Use when the user asks to "render to PNG", "convert to PDF", "export Mermaid diagrams", "generate printable deliverables", "create branded exports", or mentions rendering engine, Mermaid-to-PNG, markdown-to-PDF, format rendering, export engine, visual format conversion.
Use when the user asks to "optimize resources", "level resources", "run what-if scenarios", "resolve over-allocations", "balance resource demand", or mentions resource leveling, resource smoothing, optimization, what-if analysis, resource allocation optimization, capacity balancing.
Use when the user asks to "plan resources", "allocate team", "create RACI", "define team structure", "capacity planning", "staff the project", or mentions resource allocation, team roles, staffing, organizational chart, responsibility matrix, resource histogram, capacity management.
Use when the user asks to "run a retrospective", "facilitate a retro", "conduct Start-Stop-Continue", "run a 4Ls retro", "facilitate a Sailboat retro", "analyze sprint improvement data", or mentions retrospective engine, structured retrospective, sprint retro, team reflection, improvement commitment tracking.
Use when the user asks to "define risk appetite", "set risk tolerance", "establish risk thresholds", "calibrate organizational risk levels", "create risk acceptance criteria", or mentions organizational risk tolerance, risk appetite statement, risk capacity, risk attitude, risk threshold matrix.
Proactive risk controller and financial vigilance — operates as an anxious CPA/PM hybrid that anticipates worst-case scenarios at every discovery step, stress-tests assumptions, tracks risk exposure, and feeds better insights back into each phase. Use when the user asks to "assess risks", "stress-test the plan", "validate assumptions", "run worst-case analysis", "check what could go wrong", "audit the discovery", or mentions risk register, contingency planning, assumption validation, exposure analysis, risk appetite, worst-case scenarios, financial controls, or "what keeps you up at night". The paranoid voice that makes the discovery reliable and the proposal trustworthy.
Use when the user asks to "monitor risks", "track risk triggers", "update risk dashboard", "review risk status", "assess risk response effectiveness", or mentions risk monitoring, risk tracking, trigger tracking, risk dashboard, risk escalation, emerging risk detection.
Use when the user asks to "quantify risks", "run Monte Carlo", "calculate EMV", "perform sensitivity analysis", "estimate contingency reserves", or mentions risk quantification, expected monetary value, decision tree, tornado diagram, probabilistic analysis, confidence intervals.
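The expected-monetary-value calculation this entry names can be sketched directly; the branch probabilities and impact figures below are invented for illustration only:

```python
def emv(branches):
    """Expected monetary value: sum of probability x impact (negative = cost)."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * value for p, value in branches)

# A risk with a 30% chance of a 50k loss has an EMV of -15,000,
# so it contributes 15,000 to the contingency reserve.
reserve = -emv([(0.3, -50_000), (0.7, 0)])
```

Summing the EMVs of all register entries is one simple way to derive the contingency reserve the entry mentions.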
Risk register creation and identification — probability/impact assessment, RBS categorization, risk ownership. Use when the user asks to "create a risk register", "identify risks", "categorize risks", "build risk list", "assess project risks", or mentions risk identification, risk categorization, RBS, risk breakdown structure, risk inventory, probability-impact matrix, risk scoring.
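Probability-impact scoring on a 5x5 matrix, as referenced above, is a small calculation; the RAG thresholds (15 and 6) below are one common convention, assumed here rather than taken from this skill:

```python
def pi_rating(probability, impact):
    """Score a risk on a 5x5 probability-impact matrix and bucket it to RAG."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    score = probability * impact
    if score >= 15:
        return score, "Red"    # immediate response plan required
    if score >= 6:
        return score, "Amber"  # active monitoring with a named owner
    return score, "Green"      # accept and review periodically

rating = pi_rating(probability=4, impact=5)  # (20, "Red")
```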
Use when the user asks to "plan risk responses", "create mitigation strategies", "define risk treatments", "design contingency plans", "assign risk owners", or mentions risk mitigation, risk transfer, risk avoidance, risk acceptance, response strategies, trigger-response mapping.
Execution roadmap generator with sprint breakdown, prerequisites, gates, team/budget, and risk register. Use when the user asks to "create a roadmap", "plan a PoC", "build sprint plan", "execution timeline", or mentions "proof of concept", "MVP plan", "milestones", "sprint breakdown", "iteracion", "go/no-go".
RPA and process automation discovery — process landscape assessment, automation opportunity scoring, bot design architecture, platform evaluation, process mining, ROI projection, and automation roadmap. Use when the user asks to "evaluate RPA readiness", "assess automation opportunities", "process automation discovery", "bot architecture design", "RPA platform comparison", "automation roadmap", "process mining analysis", "identify automation candidates", "RPA ROI analysis", or mentions "robotic process automation", "attended/unattended bots", "automation CoE", "process digitization".
Use when the user asks to "assess SAFe maturity", "evaluate SAFe implementation", "check SAFe readiness", "audit ART health", "measure business agility", or mentions SAFe assessment, SAFe maturity, SAFe adoption evaluation, ART readiness, SAFe implementation review, SAFe competency radar.
Use when the user asks to "implement SAFe", "plan a PI", "set up an ART", "design value streams", "configure portfolio Kanban", or mentions SAFe, PI Planning, Agile Release Train, portfolio Kanban, value stream mapping, program increment, scaled agile implementation.
Evaluates 3+ modernization scenarios using Tree of Thought with 6-dimension weighted scoring. Use when the user asks to "compare scenarios", "evaluate options", "run scenario analysis", "Tree of Thought", "which approach should we take", "compare architectures", or mentions "Phase 3", "strategic analysis", "trade-off analysis", "SWOT comparison".
Use when the user asks to "create a schedule", "build a Gantt chart", "define critical path", "plan milestones", "establish timeline", "estimate durations with PERT", or mentions scheduling, dependencies, float, lead/lag, fast-tracking, crashing, schedule baseline, 3-point estimation.
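The PERT 3-point estimation named above is the standard beta-distribution weighted mean; the sample durations are illustrative:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT (beta) three-point estimate: weighted mean and standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

mean, sd = pert_estimate(4, 6, 14)  # mean = 7.0 days, sd = 10/6 days
```

Mean plus or minus one standard deviation gives a rough confidence band around each activity before the schedule is baselined.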
Use when the user asks to "create a WBS", "decompose scope", "define work breakdown structure", "document scope statement", "set project boundaries", "identify deliverables", or mentions scope definition, deliverable decomposition, work packages, scope baseline, exclusions, 100% rule.
Use when the user asks to "implement Scrum", "plan sprints", "define ceremonies", "set up Scrum artifacts", "design sprint cadence", or mentions Scrum, sprint planning, daily standup, sprint review, retrospective, product backlog, sprint backlog, Definition of Done, velocity tracking.
Use when the user asks to "scan for secrets", "detect credentials", "sanitize sensitive data", "check for exposed passwords", "run security gate G0", or mentions secret detection, credential scanning, security gate G0, sensitive data masking, API key exposure, token detection.
Industry/sector intelligence analysis — context-adaptive expert that provides sector-specific insights, regulatory context, benchmarks, and risk overlays. Replaces former dynamic-sme. Use when the user asks to "add industry context", "analyze sector", "give me the banking/retail/health perspective", or mentions "sector intelligence", "industry analysis", "industry lens", "sector analysis", "regulatory context".
Security architecture design — threat modeling, zero trust, identity, encryption, compliance. Use when the user asks to "design security architecture", "model threats", "implement zero trust", "design IAM", "plan encryption strategy", "map compliance requirements", or mentions STRIDE, OWASP, OAuth, RBAC, SOC2, ISO27001, PCI-DSS.
Use when the user asks to "analyze skills gaps", "assess team capabilities", "plan training", "evaluate competency readiness", "identify capability shortfalls", or mentions skills inventory, capability assessment, competency gap, training needs analysis, skill proficiency mapping.
SLO/SLA/SLI definition — error budget policies, reliability targets, customer-facing commitments. Use when the user asks to "define SLAs", "design SLOs", "set reliability targets", "create error budget policy", or mentions SLI, service level, uptime, nines, error budgets.
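The burn-rate concept in this entry is the observed error rate divided by the rate the SLO allows; the 99.9% target and the request counts below are assumed for illustration:

```python
def burn_rate(failed, total, slo=0.999):
    """Error-budget burn rate: observed error rate over the rate the SLO allows."""
    allowed = 1.0 - slo        # e.g. 0.1% of requests may fail under a 99.9% SLO
    observed = failed / total
    return observed / allowed

# 50 failures in 10,000 requests against a 99.9% SLO burns budget 5x too fast.
rate = burn_rate(failed=50, total=10_000)
```

A burn rate above 1 means the error budget will be exhausted before the SLO window ends, which is what triggers the alerting policies the entry mentions.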
BPMN 2.0 process modeling and analysis skill for AS-IS/TO-BE business process documentation, bottleneck identification, automation opportunity assessment, process maturity scoring, and process improvement design. Use whenever the user mentions process mapping, BPMN, business process, process flow, swimlane, AS-IS process, TO-BE process, process improvement, operational workflow, delivery monitoring, process maturity, or needs to model how work flows through an organization. Especially relevant for SAP fit-to-standard workshops, IT services company operations, and service variant analysis. Also trigger for RACI assignment, automation ROI, or compliance audit trail. Trigger: BPMN, process mapping, AS-IS TO-BE, process flow, swimlane, process maturity, automation ROI, RACI, fit-to-standard, process improvement, operational workflow.
Regional finance and accounting standards skill covering Colombia (NIIF/DIAN/CTC), Ecuador (SRI/USD dollarization), Mexico (SAT/CFDI), United States (GAAP/ASC 606), Spain (AEAT/SII), and pan-Americas considerations. Use whenever the user mentions financial regulations, tax compliance, electronic invoicing, transfer pricing, CTC calculation, intercompany billing, multi-currency management, localization requirements, withholding taxes, or labor cost structures for IT services companies operating across these regions. Essential for SAP localization configuration and fit-to-standard financial workshops. Also trigger when discussing cost-vs-sale segregation, Activity Type cost rates, margin visibility, arm's length pricing, or any cross-border billing. Trigger: CTC calculation, transfer pricing, intercompany billing, tax compliance, e-invoicing, SAP localization, withholding taxes, Activity Type rates, margin visibility.
SAP S/4HANA implementation skill covering module selection (CO, SD, PS, FI, HCM), SAP Activate methodology, fit-to-standard workshops, multi-country localization, intercompany configuration, and professional services industry patterns. Use whenever the user mentions SAP, S/4HANA, SAP implementation, fit-to-standard, SAP modules, SAP localization, SAP migration, ERP implementation, or needs guidance on SAP configuration for IT services companies. Also trigger for SAP-specific gap analysis, SAP scope definition, SAP best practices, CATS integration, Strangler Fig migration, Activity Type configuration, or revenue recognition patterns for T&M, fixed price, retainer, or managed services contracts. Trigger: SAP implementation, S/4HANA configuration, fit-to-standard, SAP modules, SAP localization, CATS integration, Strangler Fig, Activity Types, revenue recognition.
Software architecture design — modules, layers, boundaries, design patterns, ADRs, quality attributes, and technical debt strategy. Use when the user asks to "design the internal structure", "define module boundaries", "select architecture patterns", "document architecture decisions", "evaluate code architecture", or mentions CQRS, Hexagonal, Event Sourcing, Clean Architecture, ADRs, or technical debt.
Software and technology viability validator — deep forensic analysis of whether proposed software solutions, AI/ML components, and technology choices are viable substance or speculative smoke. Covers service viability, platform viability, methodology viability, tool viability, and vendor assessment for any service type. Use when the user asks to "validate technology viability", "detect vaporware", "verify AI claims", "assess software maturity", "check if this tech actually works", or mentions technology due diligence, software validation, AI feasibility, vendor evaluation, or tech-stack viability. This is the devoted software-specific validator — separate and more critical than the multidimensional feasibility analysis.
Complete transformation roadmap with phased execution, investment horizon, team ramp-up, risk-adjusted timeline, and estimation pivot points. Use when the user asks to "create a roadmap", "plan the transformation", "build an investment case", "team sizing", "risk-adjusted timeline", or mentions "Phase 4", "solution roadmap", "transformation plan", "phased execution", "PoC validation criteria", "kill criteria", "go/no-go gates".
End-to-end solution design — system integration, channel orchestration, identity management, observability, and cross-cutting concerns. Use when the user asks to "design the full solution", "integrate multiple systems", "plan API gateway strategy", "define identity and security architecture", "set up observability", or mentions C4 containers, BFF, Zero Trust, SLI/SLO, circuit breaker, or migration planning.
Use when the user asks to "implement Spotify model", "design squads and tribes", "organize chapters and guilds", "create autonomous team structure", "apply Spotify engineering culture", or mentions Spotify, squads, tribes, chapters, guilds, autonomous teams, matrix organization, squad health check model.
Staff augmentation discovery — talent gap analysis, skills matrix profiling, team composition modeling, onboarding and ramp-up design, retention framework, and staffing roadmap. Use when the user asks to "assess staffing needs", "analyze talent gaps", "design team composition", "plan staff augmentation", "evaluate team skills", "create staffing roadmap", "onboarding plan", "ramp-up strategy", "retention framework", or mentions talent gap, skills matrix, team topology, augmentation, nearshore, offshore, or staffing plan.
Use when the user asks to "plan staff augmentation", "source external resources", "plan contractor onboarding", "design nearshore team integration", "manage vendor staffing", or mentions staff augmentation, contractor sourcing, augmentation needs, external staffing, nearshore/offshore, resource augmentation strategy.
Stakeholder analysis — influence/interest matrix, communication plan, RACI, change readiness. Use when the user asks to "map stakeholders", "build influence matrix", "create communication plan", "assign RACI", "assess change readiness", "identify champions", or mentions stakeholder analysis, power/interest grid, engagement strategy, or adoption curve.
Use when the user asks to "identify stakeholders", "create stakeholder register", "map stakeholder power/interest", "analyze stakeholders", "design engagement strategies", or mentions stakeholder identification, power-interest matrix, influence mapping, stakeholder analysis, engagement level assessment.
Use when the user asks to "generate status report", "write weekly update", "create sprint report", "produce executive summary", "compile progress report", or mentions status report, weekly report, sprint summary, project update, progress report, RAG status update.
Use when the user asks to "run a steering committee", "prepare steering review", "conduct Go/No-Go gate", "orchestrate advisory vote", "prepare gate review package", or mentions steering committee, steering review, Go/No-Go decision, advisory vote, project gate review, steering minutes, 7-advisor evaluation.
Narrative arc design and transformation storytelling methodology for discovery deliverables. Use when structuring the overall narrative across deliverables, building scenario narratives, crafting transformation stories (current pain → decision → future state), or designing risk narratives and success reference stories.
Use when the user asks to "check strategic alignment", "map projects to strategy", "track OKRs", "identify strategic orphans", "verify portfolio-strategy fit", or mentions strategic alignment, strategy-to-project traceability, OKR tracking, balanced scorecard alignment, portfolio investment alignment.
Green IT evaluation, carbon footprint estimation, energy efficiency analysis, and sustainable architecture pattern recommendations. Use when the user asks to "assess sustainability", "estimate carbon footprint", "evaluate green IT", or mentions energy efficiency, sustainable architecture, or environmental impact of technology.
Use when the user asks to "track team performance", "measure velocity", "assess team health", "monitor team morale", "analyze productivity trends", or mentions team performance, velocity tracking, team health, morale, burndown, team metrics, sprint predictability.
Use when the user asks to "design team structure", "apply Team Topologies", "optimize team boundaries", "reduce cognitive load", "map team interaction modes", or mentions Team Topologies, Conway's Law, stream-aligned teams, platform teams, enabling teams, cognitive load, team interaction patterns.
Conway's Law analysis, team interaction modes, cognitive load assessment, organizational design. Use when the user asks to "design team structure", "assess cognitive load", "map team interactions", "apply Conway's Law", or mentions stream-aligned teams, platform teams, enabling teams, team-first thinking.
Technical debt quantification, debt quadrant classification (reckless/prudent x deliberate/inadvertent), remediation prioritization, and paydown roadmap generation. Use when the user asks to "assess technical debt", "quantify debt", "classify tech debt", "prioritize remediation", or mentions debt inventory, impact scoring, or paydown planning.
Technical fact-checking and multidimensional feasibility analysis — validates claims, assumptions, and technical decisions from scenario analysis against evidence. Use when the user asks to "validate feasibility", "fact-check the scenario", "verify technical claims", "run feasibility analysis", "stress-test the approach", or mentions technical due diligence, feasibility study, risk validation, or "Phase 3b" verification work.
Technical documentation precision — progressive disclosure, terminology consistency, evidence attribution, and reproducible analysis. Use when writing AS-IS analyses, functional specs, architecture documents, handover guides, or any deliverable requiring technical rigor and documentation standards.
Structured technology monitoring across analyst firms (Gartner, Forrester, IDC), academic sources (Stanford HAI, IEEE, ACM), editorial platforms (O'Reilly Radar, ThoughtWorks Tech Radar), and individual thought leaders (Martin Fowler, Paulo Caroli, Gregor Hohpe, Jez Humble). Produces vigilance reports with signals classified by urgency and impact. Use when evaluating technology trends, preparing sector-specific tech intelligence, validating technology choices against current landscape, or when "vigilancia tecnológica", "tech watch", "Gartner", "Forrester", "tech radar", "Stanford HAI", "IEEE", or "tendencias tecnológicas" is mentioned.
Test strategy design — pyramid, automation, E2E, contract testing, shift-left, test data management, QA-as-a-service strategy, test factory design, PITT methodology, QA CoE design. Use when the user asks to "design test strategy", "build test automation", "implement contract testing", "manage test data", "define quality gates", or mentions test pyramid, Pact, Playwright, Cypress, coverage targets, flaky tests, chaos engineering.
End-user advocate that evaluates deliverable clarity, cognitive load, accessibility, adoption risks, and biases. Use when the user asks to "review for clarity", "check readability", "evaluate from user perspective", "assess adoption risk", or mentions "user representative", "voice of the user", "representante del usuario", "clarity review", "cognitive load check".
UX/UI design discovery — design maturity assessment, design system inventory, user research capability evaluation, usability baseline, information architecture assessment, design process governance, and design transformation roadmap. Use when the user asks to "evaluate design maturity", "assess UX capability", "audit design system", "usability assessment", "information architecture review", "design ops evaluation", "UX transformation plan", or mentions "design discovery", "UX readiness", "design governance".
UX writing and document accessibility standards for technical deliverables. Use when the user asks to "improve readability", "fix information hierarchy", "reduce cognitive load", "write microcopy", "check readability score", or mentions "UX writing", "scanability", "Flesch-Kincaid", "escritura UX", "legibilidad", "cognitive load".
Vendor evaluation and selection framework — RFP/RFI design, scoring matrices, TCO analysis, contract risk. Use when the user asks to "evaluate vendors", "design RFP", "compare platforms", "assess TCO", or mentions vendor selection, build-vs-buy, technology evaluation, procurement strategy.
Use when the user asks to "compare vendor costs", "analyze TCO", "evaluate vendor proposals", "calculate total cost of ownership", "normalize vendor pricing", or mentions vendor comparison, total cost of ownership, vendor TCO, proposal evaluation, vendor scoring matrix, hidden cost analysis.
Use when the user asks to "manage vendors", "track vendor performance", "monitor SLAs", "evaluate supplier compliance", "create vendor scorecards", or mentions vendor management, supplier performance, SLA monitoring, contract compliance, vendor governance, vendor scorecard.
Use when the user asks to "assess waterfall maturity", "evaluate traditional PM practices", "check PMBOK adherence", "review predictive methodology readiness", "audit phase-gate compliance", or mentions waterfall assessment, traditional PM maturity, PMBOK compliance, PRINCE2 maturity, predictive PM evaluation, earned value adoption.
Use when the user asks to "implement waterfall", "plan PMBOK phases", "set up PRINCE2", "define stage gates", "design predictive lifecycle", "configure change control", or mentions waterfall, traditional PM, predictive lifecycle, stage-gate, PMBOK, PRINCE2, earned value management.
Workshop design methodology — event storming, impact mapping, user story mapping, design sprints. Replaces former workshop-facilitator (facilitation is the agent's job, design is the skill). Use when the user asks to "design a workshop", "plan event storming", "design impact mapping session", "design a sprint", "create user story map", "design discovery session", or mentions workshop design, design sprint, event storming, story mapping, or collaborative design.
Workshop facilitation — event storming, impact mapping, user story mapping, design sprints. Use when the user asks to "plan a workshop", "run event storming", "facilitate impact mapping", "design a sprint", "create user story map", "facilitate discovery session", or mentions workshop facilitation, design sprint, event storming, story mapping, or collaborative design.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features a progressive-disclosure architecture for 50% faster loading.
The most comprehensive Claude Code plugin — 36 agents, 142 skills, 68 legacy command shims, and production-ready hooks for TDD, security scanning, code review, and continuous learning.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.