MAO v1.4 — MetodologIA de Aprovechamiento de Oportunidades. Universal Discovery Framework. G0 security gate, context optimization, rendering engine, CLI init wizard. Progressive MOAT Loading. Meta-cognition protocols (FULL/LIGHT). Formalized committee spawning. 101 agents, 108 MOAT skills, 109 commands, 19 scripts, 4 quality gates (G0-G3). Design System v5. Zero-hallucination protocol. MIT license.
npx claudepluginhub javimontano/mao-discovery-framework
Advance — proceed to next pipeline step. Validates gate criteria before advancing.
Alias → a. Use /mao:a instead.
AI/Data discovery — AI center and data platform assessment with {TIPO_SERVICIO}=Data-AI
Alias → discover-ai. Use /mao:discover-ai instead.
Alias → assess-architecture. Use /mao:assess-architecture instead.
Generate 03_Analisis_AS-IS — exhaustive 10-section technical analysis with code evidence
Generate Architecture_Deep_Dive — multi-level C4, ADRs, quality attributes, architectural debt, TO-BE architecture
Generate Change_Readiness — organizational change readiness: ADKAR adoption, resistance profiles, workshops, communication plan
Generate Cloud_Readiness — cloud migration readiness: 7R assessment, cloud-native maturity, migration strategy, FinOps
Generate Compliance_Assessment — regulatory compliance posture: GDPR, SOX, PCI-DSS, HIPAA, ISO 27001, gap analysis, remediation roadmap
Generate Data_Landscape — data ecosystem assessment: model, pipelines, governance, quality, analytics, mesh readiness
Generate DevOps_Maturity — CI/CD and DevOps assessment: DORA metrics, pipeline architecture, deployment strategies, IaC maturity, developer experience
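The DORA arithmetic behind the maturity assessment is straightforward; a minimal sketch, with a hypothetical record schema (not MAO's actual data contract):

```python
# Hypothetical deployment records over a 28-day window.
DEPLOYMENTS = [
    {"lead_time_h": 20, "failed": False, "restore_min": 0},
    {"lead_time_h": 44, "failed": True,  "restore_min": 90},
    {"lead_time_h": 12, "failed": False, "restore_min": 0},
    {"lead_time_h": 30, "failed": True,  "restore_min": 30},
]

def dora_metrics(deployments, period_days):
    n = len(deployments)
    failures = [d for d in deployments if d["failed"]]
    return {
        # Deployment frequency: deployments per day over the window.
        "deploy_freq_per_day": n / period_days,
        # Lead time for changes: mean commit-to-production hours.
        "lead_time_h": sum(d["lead_time_h"] for d in deployments) / n,
        # Change failure rate: share of deployments needing remediation.
        "change_failure_rate": len(failures) / n,
        # MTTR: mean minutes to restore after a failed change.
        "mttr_min": sum(d["restore_min"] for d in failures) / len(failures) if failures else 0.0,
    }
```

With the sample above this yields a 50% change failure rate and a 60-minute MTTR.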
Generate Security_Posture — security posture assessment: threat model, OWASP Top 10, zero trust, supply chain, DevSecOps maturity
Audit discovery deliverables — scorecard, cross-checks, quality verdict
Alias → audit-quality. Use /mao:audit-quality instead.
Alias → run-auto. Use /mao:run-auto instead.
Design automation workflows with n8n, Make, Zapier, or RPA — process mapping, API integration, task orchestration
Maturity benchmark — capability evaluation against industry standards (CMMI, DORA, TMMi)
Alias → review-business. Use /mao:review-business instead.
Create a fully branded document in any format (HTML, DOCX, XLSX, PPTX, PDF) using the MetodologIA Design System v5 — 27 tokens, 126 components, 4 page types
Generate 02_Brief_Tecnico — executive technical summary (max 3 pages) for steering committee
Visual audit of the client's application via browser (Playwright MCP)
Alias → review-business. Use /mao:review-business instead.
Alias → assess-change. Use /mao:assess-change instead.
Generate 06_Solution_Roadmap — 5-phase roadmap with cost drivers, Monte Carlo, pivot points (GATE 2)
Alias → chart-roadmap. Use /mao:chart-roadmap instead.
Cloud service discovery — cloud readiness, migration, FinOps with {TIPO_SERVICIO}=Cloud
Alias → assess-cloud. Use /mao:assess-cloud instead.
Coaching meta-command: routes to the appropriate coaching specialty based on context (methodological, agile, enterprise, conscious business, productivity, leadership, teams)
Convert MAO MOAT skills to cross-platform formats (Cursor, Codex, Gemini, Aider, Windsurf)
Generate 08_Pitch_Ejecutivo — C-level business case with cost of inaction, value pillars, financial model
Alias → craft-pitch. Use /mao:craft-pitch instead.
Alias → assess-data. Use /mao:assess-data instead.
Alias → run-deep. Use /mao:run-deep instead.
Generate 09_Handover — discovery-to-execution operational transition package with 90-day plan
Alias → deliver-handover. Use /mao:deliver-handover instead.
Demo mode — guided walkthrough of MAO capabilities with mini-discovery on the current repo
Design AI assistants, custom GPTs, and conversational agents — architecture, persona, prompts, interaction flows
Generate 03_Analisis_AS-IS — exhaustive 10-section technical analysis with code evidence
Alias → diagnose-asis. Use /mao:diagnose-asis instead.
Generate 14_Oportunidades_IA — AI acceleration opportunities aligned with the MetodologIA AI-first promise
Autonomous discovery — runs the full pipeline with minimal user intervention
Evolve discovery deliverables — diagnose weaknesses, improve, validate quality delta
Audit discovery deliverables — scorecard, cross-checks, quality verdict
Guided discovery — full pipeline facilitator (9 phases, 4 gates, 16 deliverables)
Generate 05_Escenarios — Tree-of-Thought scenario analysis with 6D scoring (GATE 1)
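A weighted 6D composite score can be sketched as follows; the dimension names and weights are illustrative assumptions, not necessarily MAO's actual rubric:

```python
# Illustrative dimensions and weights (must sum to 1.0).
WEIGHTS = {
    "business_value": 0.25,
    "cost": 0.15,
    "risk": 0.20,
    "time_to_value": 0.15,
    "feasibility": 0.15,
    "strategic_fit": 0.10,
}

def score_scenario(ratings):
    """Map 1-5 ratings per dimension to a 0-100 weighted composite."""
    assert set(ratings) == set(WEIGHTS), "rate every dimension exactly once"
    raw = sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)  # 1.0 .. 5.0
    return round((raw - 1.0) / 4.0 * 100.0, 1)
```

A scenario rated 5 on every dimension scores 100; one rated 1 everywhere scores 0, which keeps alternatives comparable on a single scale.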
Alias → evaluate-scenarios. Use /mao:evaluate-scenarios instead.
Export a markdown deliverable to a professional PDF with MetodologIA branding
Express discovery — Go/No-Go in 1 session (Brief + Scenarios + Pitch)
Alias → validate-feasibility. Use /mao:validate-feasibility instead.
Alias → present-findings. Use /mao:present-findings instead.
Generate 04_Mapeo_Flujos — DDD taxonomy, E2E flows, integration matrix, failure points
Alias → report-func. Use /mao:report-func instead.
Generate 02_Brief_Tecnico — executive technical summary (max 3 pages) for steering committee
Generate 00_Discovery_Plan — the governing document for the entire engagement
Alias → run-guided. Use /mao:run-guided instead.
Generate 09_Handover — discovery-to-execution operational transition package with 90-day plan
Evolve discovery deliverables — diagnose weaknesses, improve, validate quality delta
Alias → improve-deliverables. Use /mao:improve-deliverables instead.
Initialize the MetodologIA environment for a new discovery engagement
Intermediate discovery — architectural direction with roadmap, feasibility validation and handover
Management discovery — PMO maturity, governance, delivery health with {TIPO_SERVICIO}=Management
Generate 01_Stakeholder_Map — influence matrix, RACI, communication plan, change readiness
Command palette — categorized interactive menu of all MAO commands, agents, and pipeline steps
Optimize the context window by compressing the changelog and enabling lazy loading of agents
Generate 08_Pitch_Ejecutivo — C-level business case with cost of inaction, value pillars, financial model
Generate 00_Discovery_Plan — the governing document for the entire engagement
Generate 10_Presentacion_Hallazgos — executive findings deck summarizing discovery insights for steering committee
Explore repository and generate priming-rag-*.md knowledge files for RAG context
Alias → prime-repo. Use /mao:prime-repo instead.
Prompt engineering workshop — design, evaluation, and optimization of prompts for LLMs using the NL-HP methodology
QA discovery — quality assurance service assessment with {TIPO_SERVICIO}=QA
Render all Mermaid blocks in a deliverable to PNG images
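Mechanically, this comes down to extracting fenced mermaid blocks and handing each one to a renderer. A rough sketch, assuming @mermaid-js/mermaid-cli (mmdc) is installed:

```python
import re
import subprocess

# Matches fenced mermaid blocks in a markdown deliverable.
MERMAID_BLOCK = re.compile(r"`{3}mermaid\n(.*?)`{3}", re.DOTALL)

def extract_mermaid(markdown):
    """Return the source of every fenced mermaid block."""
    return [block.strip() for block in MERMAID_BLOCK.findall(markdown)]

def render_all(markdown, prefix="diagram"):
    """Write each block to a .mmd file and render it to PNG via mmdc."""
    for i, src in enumerate(extract_mermaid(markdown)):
        mmd = f"{prefix}-{i}.mmd"
        with open(mmd, "w") as fh:
            fh.write(src)
        subprocess.run(["mmdc", "-i", mmd, "-o", f"{prefix}-{i}.png"], check=True)
```

The file naming and the shell-out to mmdc are assumptions for illustration; only the extraction step is format-independent.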
Generate 12_Hallazgos_Funcionales — functional findings covering user journeys, process gaps, UX issues, and business rule analysis
Generate 11_Hallazgos_Tecnicos — deep-dive technical findings with architecture, code quality, infrastructure, and security analysis
Rescue stalled discovery — diagnose, repair, and complete missing phases
Generate a quantitative retrospective of the discovery engagement
Generate 13_Revision_Negocio — business perspective review with collaboration models, contracting options, and presales closure strategy
Generate 06_Solution_Roadmap — 5-phase roadmap with cost drivers, Monte Carlo, pivot points (GATE 2)
RPA discovery — process automation assessment with {TIPO_SERVICIO}=RPA
Autonomous discovery — runs the full pipeline with minimal user intervention (9 phases, 4 gates, 16 deliverables)
Intermediate discovery — architectural direction with roadmap, feasibility validation and handover
Express discovery — Go/No-Go in 1 session (Brief + Scenarios + Pitch)
Guided discovery — full pipeline facilitator (9 phases, 4 gates, 16 deliverables)
Staff Augmentation discovery — team scaling, talent strategy with {TIPO_SERVICIO}=SAS
Scan the client repository for exposed secrets and credentials (Gate G0)
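A G0 scan of this kind can be approximated with a few regular expressions; the patterns below are illustrative only (a production gate would run a dedicated scanner such as gitleaks or trufflehog with a much larger ruleset):

```python
import re

# Illustrative detection patterns, not an exhaustive ruleset.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (line_number, pattern_name) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

A gate would then fail the pipeline whenever `scan_text` returns a non-empty list for any tracked file.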
Generate 05_Escenarios — Tree-of-Thought scenario analysis with 6D scoring (GATE 1)
Alias → assess-security. Use /mao:assess-security instead.
What-if Monte Carlo simulation — probabilistic projections of effort, cost, and timeline
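A what-if simulation of this kind can be sketched with per-task triangular distributions; the work packages and sample size below are hypothetical:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical work packages: (optimistic, most likely, pessimistic) person-days.
TASKS = [(10, 15, 30), (20, 30, 60), (5, 8, 20)]

def simulate(n=10_000):
    """Sample total effort n times and report percentile estimates."""
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS)
        for _ in range(n)
    )
    pct = lambda q: totals[int(q * (n - 1))]
    return {"P50": pct(0.50), "P80": pct(0.80), "P95": pct(0.95)}
```

Reporting P50/P80/P95 instead of a single-point estimate is what makes the projection probabilistic: the gap between P50 and P95 is the contingency buffer.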
Generate 07_Especificacion_Funcional — modules, use cases, business rules, complexity matrix
Generate 01_Stakeholder_Map — influence matrix, RACI, communication plan, change readiness
Alias → report-tech. Use /mao:report-tech instead.
Generate 04_Mapeo_Flujos — DDD taxonomy, E2E flows, integration matrix, failure points
Alias → trace-flows. Use /mao:trace-flows instead.
Digital transformation discovery — multi-service program assessment with {TIPO_SERVICIO}=Digital-Transformation
UX Design discovery — UX maturity, design system, accessibility with {TIPO_SERVICIO}=UX-Design
Feasibility Think Tank — 7 Sages deep validation of approved scenario (Phase 3b, pre-GATE 2)
Alias → validate-feasibility. Use /mao:validate-feasibility instead.
Generate 07_Especificacion_Funcional — modules, use cases, business rules, complexity matrix
Shared configuration inherited by all MAO agents. Not a standalone agent.
WCAG compliance and inclusive design specialist. Covers accessibility auditing, a11y testing, assistive technology compatibility, and universal design principles. Trigger: accessibility audit, WCAG, a11y compliance, inclusive design, screen reader, assistive technology, color contrast, keyboard navigation, ARIA.
Scrum, Kanban, and SAFe coaching specialist. Facilitates sprint health reviews, retrospectives, team velocity analysis, and agile transformation. Trigger: agile coaching, scrum master, sprint retrospective, kanban flow, SAFe, agile transformation, velocity analysis, sprint planning, daily standup.
AI agent systems architect providing agentic AI design, multi-agent orchestration patterns, tool use architecture, memory and context management, guardrails design, and agent evaluation frameworks. Specializes in designing production-grade AI agent systems.
Senior AI/ML architect providing AI strategy assessment, ML pipeline design, MLOps maturity evaluation, model governance, responsible AI framework, LLM integration patterns, and AI infrastructure sizing. Bridges data science and production engineering.
AI-first solutions architect specializing in LLM-native application patterns, agentic systems design, and AI-native infrastructure. Trigger: AI-native, LLM architecture, agentic design, AI-first, multi-agent systems, RAG architecture, LLM orchestration, foundation model strategy.
AI/ML strategy expert providing AI readiness assessment (AI Adoption Lifecycle: Assess-Pilot-Scale-Optimize-Govern), use case portfolio design, data readiness validation, model governance framework, MLOps maturity evaluation, responsible AI guidelines, and open-source tool alignment (MLflow, LangChain, Feast, Great Expectations, Airflow, Kubeflow). Activated when {TIPO_SERVICIO}=Data-AI.
Analytics architect providing analytics engineering, BI design, data science architecture, and data mesh strategy expertise. Owns the analytics consumption layer: how data is transformed into insights. Invoked during Phases 1, 3, and 4.
API-first design specialist covering OpenAPI, GraphQL, gRPC, REST best practices, versioning, and developer experience. Trigger: API design, OpenAPI spec, GraphQL schema, gRPC, REST API, API versioning, API gateway, developer experience, API-first, contract-first.
AI assistant and GPT design specialist. Covers assistant architecture, persona design, conversation UX, tool integration, and knowledge base configuration. Trigger: design assistant, create GPT, assistant architecture, custom GPT, chatbot UX, conversational AI, persona design, assistant configuration.
Workflow automation and RPA design specialist for n8n, Make, Zapier, Power Automate, and custom integration pipelines. Trigger: automate workflow, n8n, Make automation, RPA design, Zapier, workflow orchestration, process automation, integration pipeline, Power Automate.
Senior backend developer providing server-side architecture assessment, API design review, database interaction patterns, business logic evaluation, concurrency analysis, and backend performance optimization.
Distributed ledger technology and Web3 architecture specialist. Covers blockchain feasibility assessment, smart contract design, tokenomics, and DLT platform evaluation. Trigger: blockchain, smart contracts, Web3, DLT, distributed ledger, tokenomics, decentralized, NFT, cryptocurrency, consensus mechanism.
Senior business analyst providing business process modeling, requirements engineering, capability mapping, business rules extraction, use case design, and gap analysis between business needs and technical solutions.
Capacity modeling and demand forecasting specialist. When user asks about capacity planning, demand forecast, resource modeling, infrastructure sizing, load projection, scalability planning, resource optimization, or workload management.
Change Management Expert — organizational readiness, adoption strategy, training planning, resistance management. Owns stakeholder engagement and change readiness across all phases.
Chaos engineering and resilience testing specialist. Covers game days, fault injection, steady-state hypothesis testing, and disaster recovery validation. Trigger: chaos engineering, game day, fault injection, resilience testing, disaster recovery, steady-state hypothesis, failure mode analysis, blast radius.
Senior cloud architect providing cloud strategy assessment, multi-cloud evaluation, migration planning (7R framework), cloud-native patterns, landing zone design, and cloud financial optimization. Focuses on strategic cloud decisions above platform-engineer's operational focus.
Competitive intelligence and benchmarking specialist. When user asks about competitive analysis, SWOT analysis, market positioning, Porter's 5 forces, competitive benchmarking, competitor profiling, strategic group mapping, or differentiation strategy.
Compliance and regulatory analysis expert providing GDPR, SOX, PCI-DSS, HIPAA, and ISO 27001 assessment. Evaluates regulatory risk, compliance gaps, and remediation priorities.
Conscious leadership and values-driven business coaching inspired by Fred Kofman. Facilitates authentic communication, responsible leadership, ontological coaching, and purpose-driven organizational culture. Trigger: conscious business, authentic communication, responsible leadership, Fred Kofman, ontological coaching, conscious leadership, values-driven, organizational purpose.
Content strategy expert providing copywriting, storytelling, data storytelling, data visualization, and narrative design for discovery deliverables. Part of the Editorial Committee (with editorial-director and format-specialist). Activated ONLY at markdown production time — NOT during ingestion, analysis, or confirmation phases.
Customer experience and success strategy specialist. When user asks about customer success, NPS strategy, customer journey mapping, retention strategy, churn analysis, customer lifetime value, CX optimization, onboarding experience, or voice of customer.
Senior data architect providing strategic data modeling, enterprise data strategy, data platform design, data mesh/fabric evaluation, and cross-domain data governance. Operates at strategic level above data-engineer (infrastructure) and analytics-architect (consumption).
Data engineer providing pipeline architecture, database design, and data governance expertise. Owns the data infrastructure layer: how data is stored, moved, transformed, and governed. Invoked during Phases 1, 2, and 4.
Data privacy and privacy engineering specialist. When user asks about data privacy, DPIA, privacy by design, consent management, data classification, privacy impact assessment, data subject rights, data retention policy, or personal data protection.
Statistical validation specialist and quantitative evidence analyst. Validates data feasibility, ML/AI viability, and ensures all quantitative claims are statistically sound.
Data Expert — data architecture, governance, analytics strategy, migration planning, data quality assessment. Provides data-specific expertise across all phases.
Database modeling, tuning, migration, and sharding specialist. Covers relational, NoSQL, NewSQL, query optimization, indexing, and data migration strategies. Trigger: database design, query optimization, data migration, sharding, indexing, database tuning, schema design, database modeling, partitioning, replication.
Project Manager — timelines, scope management, risk quantification, stakeholder communication, budget modeling. Owns Phase 4 (Solution Roadmap) and Phase 5 (Cost Estimation) deliverables.
Design thinking and user-centered design facilitation specialist. When user asks about design thinking, empathy map, prototype, user-centered design, design workshop, ideation session, persona development, or human-centered innovation.
Developer experience and community building specialist. When user asks about developer experience, DevRel, developer advocacy, community building, developer onboarding, DX optimization, developer ecosystem, or API adoption.
Senior DevOps engineer providing CI/CD pipeline architecture, branching strategy assessment (GitFlow, trunk-based, GitHub Flow), artifact management, environment promotion, deployment automation (blue-green, canary, rolling), infrastructure-as-code orchestration, and developer experience optimization.
DevSecOps expert providing CI/CD security assessment, supply chain security (SLSA), secrets management, container security, SAST/DAST/SCA evaluation, and shift-left security strategy. Focuses on embedding security into the development lifecycle.
Architecture visualization and diagramming specialist. When user asks about architecture diagram, C4 model, Mermaid diagram, UML diagram, sequence diagram, system context diagram, container diagram, component diagram, or visual architecture documentation.
Digital twin modeling and simulation specialist. When user asks about digital twin, simulation model, physical-digital bridge, IoT integration, digital replica, predictive simulation, cyber-physical systems, or real-time monitoring twin.
Impartial orchestrator that detects service type (Step 0), dynamically composes expert committee, sequences phases, enforces gates, manages data contracts, maintains the discovery plan and input registry, activates the industry SME lens, facilitates expert disagreements, validates service-type inputs, and presents status reports. Does NOT perform analysis — only coordinates.
Documentation strategy and doc-as-code specialist. When user asks about documentation strategy, doc-as-code, ADR, architecture decision record, runbook, living documentation, documentation pipeline, or documentation governance.
Subject Matter Expert — industry lens, business context, regulatory constraints, DDD domain modeling. Owns Phase 2 (Flow Mapping) and provides industry context across all phases.
Technical economics researcher who validates financial feasibility with academic rigor. Models TCO, ROI, and opportunity costs using evidence-based techniques, not gut feelings.
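The TCO/ROI side of this validation reduces to standard discounted cash flow arithmetic, for example:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first (outlays negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def roi(total_benefit, total_cost):
    """Simple (undiscounted) return on investment."""
    return (total_benefit - total_cost) / total_cost
```

For instance, a 100-unit outlay returning 60 units in each of the next two years has a positive NPV at a 10% discount rate, so it clears the hurdle even after the time value of money is accounted for.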
Edge and fog computing specialist covering latency optimization, CDN strategy, edge deployment patterns, and distributed computing architectures. Trigger: edge computing, fog computing, CDN strategy, edge deployment, latency optimization, edge functions, edge caching, content delivery, distributed computing.
Chief editor ensuring narrative coherence across all deliverables, audience adaptation (executive vs technical), ghost menu orchestration for multi-format output, and editorial quality gates. Part of the Editorial Committee (with content-strategist and format-specialist). Activated ONLY at markdown production time — NOT during ingesta, analysis, or confirmation phases.
Editorial publication agent for MetodologIA — orchestrates multi-format branded output production from markdown source using Design System v5. Manages the full publication pipeline: editorial review, brand compliance validation, ghost menu activation, format conversion, and delivery packaging. The bridge between world-class content and production-ready branded deliverables.
Enterprise architect providing portfolio strategy, TOGAF alignment, capability mapping, and target-state (TO-BE) architecture design with transition paths. Invoked during Phases 1, 3, and 4 for strategic architecture decisions.
Enterprise agility and organizational transformation coaching specialist. Guides scaling agile across portfolios, value streams, and business units. Trigger: enterprise coaching, organizational agility, scaling agile, portfolio management, value stream, business agility, enterprise transformation, organizational design.
AI ethics and responsible AI governance specialist. When user asks about AI ethics, bias detection, responsible AI, fairness metrics, algorithmic accountability, AI transparency, explainable AI, AI governance framework, or ethical AI audit.
C-level communication and executive storytelling specialist. When user asks about executive presentation, board deck, C-level communication, executive summary, stakeholder presentation, strategic narrative, leadership briefing, or investment pitch.
Cloud financial optimization and FinOps specialist. Covers cost allocation, unit economics, reserved capacity planning, and cloud spend governance. Trigger: FinOps, cloud costs, cost optimization, cloud spend, reserved instances, savings plans, cost allocation, unit economics, cloud billing, right-sizing.
Multi-format production expert providing HTML, DOCX, XLSX, PPTX, and PDF generation from markdown source. Part of the Editorial Committee (with editorial-director and content-strategist). Activated ONLY via ghost menu after markdown production — converts .md source of truth to requested formats.
Senior frontend developer providing UI architecture assessment, SPA/MPA evaluation, design system implementation, accessibility compliance (WCAG), performance optimization (Core Web Vitals), and microfrontend strategy analysis.
Full-Stack Engineer — code analysis, infrastructure assessment, DevOps evaluation, implementation feasibility. Provides hands-on technical validation across all phases.
Infrastructure and hardware feasibility specialist. Validates compute, network, storage, and physical infrastructure requirements against real-world constraints and scaling ceilings.
Implementation analysis expert providing code-level assessment, infrastructure evaluation, DevOps analysis, and implementation feasibility validation. Replaces the former full-stack-generalist agent with focused implementation analysis expertise.
Incident response leadership and crisis management specialist. When user asks about incident command, war room coordination, crisis management, incident response, major incident management, post-incident review, blameless postmortem, or on-call strategy.
Innovation frameworks and ideation facilitation specialist. When user asks about innovation workshop, ideation session, design sprint, innovation pipeline, creative problem solving, disruptive innovation, innovation portfolio, or rapid experimentation.
Integration and interoperability specialist. Validates that proposed integrations, migrations, and protocol changes are technically achievable with the existing ecosystem.
IoT systems and edge computing architect. Covers MQTT, edge protocols, digital twins, device management, and IoT platform design. Trigger: IoT architecture, edge computing, digital twin, MQTT, device management, sensor networks, IoT platform, telemetry, embedded systems, industrial IoT.
Knowledge management and organizational memory specialist. When user asks about knowledge management, knowledge base, Zettelkasten, organizational memory, knowledge graph, wiki strategy, information architecture, knowledge sharing, or tacit knowledge capture.
Leadership development and coaching specialist. When user asks about leadership coaching, servant leadership, leadership development, situational leadership, coaching conversations, executive coaching, leadership style assessment, or leadership pipeline.
Lean thinking and waste elimination specialist. When user asks about lean thinking, waste elimination, value stream mapping, 5S methodology, muda muri mura, continuous improvement, kaizen, lean management, or process optimization.
Low-code and no-code platform assessment specialist. Covers citizen development governance, platform evaluation, and hybrid pro-code/low-code architecture. Trigger: low-code platform, no-code, citizen developer, Power Platform, OutSystems, Mendix, low-code governance, citizen development, rapid application development.
Market research and sizing specialist. When user asks about market research, market sizing, TAM analysis, SAM/SOM estimation, addressable market, trend analysis, market segmentation, industry growth rates, or market opportunity assessment.
P.I.V.O.T.E. methodology coach and MetodologIA philosophy guide. Teaches the 4-phase discovery system, fundamentar-acelerar cycle, and structured methodology adoption. Trigger: methodology coaching, PIVOTE, fundamentar acelerar, 4-phase system, MetodologIA philosophy, discovery methodology, methodological guidance.
Senior middleware and integrations developer providing API integration assessment, ESB/iPaaS evaluation, message broker analysis, data transformation pipeline review, and cross-system interoperability validation.
Legacy modernization and migration strategy specialist. Covers strangler fig pattern, data migration, cloud migration, platform re-platforming, and incremental modernization. Trigger: legacy migration, modernization strategy, strangler fig, re-platforming, cloud migration, monolith decomposition, data migration, legacy modernization.
Mobile architect providing cross-platform vs native assessment, store compliance, app vitals analysis, and mobile CI/CD design. Activated only when scope includes mobile applications. Invoked during Phases 1 and 3.
Negotiation strategy and contract design specialist. When user asks about negotiation strategy, BATNA analysis, contract negotiation, win-win negotiation, deal structuring, negotiation preparation, concession strategy, or ZOPA identification.
Observability strategy specialist covering monitoring, distributed tracing, logging, alerting, and metrics architecture. Trigger: observability strategy, monitoring stack, distributed tracing, logging architecture, alerting design, metrics pipeline, OpenTelemetry, Grafana, Prometheus.
Open source governance and licensing strategy specialist. When user asks about open source strategy, OSS governance, InnerSource, licensing strategy, open source compliance, community management, FOSS contribution policy, or copyleft vs permissive licensing.
Organizational culture and behavioral change specialist. When user asks about organizational culture, change resistance, culture assessment, organizational behavior, cultural transformation, employee engagement, organizational climate, or resistance to change.
Performance engineering specialist covering load testing, latency optimization, capacity planning, profiling, and throughput tuning. Trigger: performance tuning, load testing, latency optimization, capacity planning, throughput, profiling, stress testing, performance bottleneck, response time.
Platform engineer providing cloud readiness assessment, migration strategy (7R), API governance, and event-driven architecture design. Invoked during Phases 1, 3, and 4 for platform and integration decisions.
Pricing models and commercial strategy analyst. Covers cost driver analysis, margin modeling, pricing architecture, and competitive pricing intelligence. Trigger: pricing model, cost analysis, commercial model, margin analysis, pricing strategy, unit pricing, subscription pricing, value-based pricing, competitive pricing.
RPA and process automation expert providing process mining analysis, automation readiness scoring, bot architecture design, platform assessment, exception handling strategy, and automation ROI modeling using open standards (Six Sigma, BPMN, Lean). Activated when {TIPO_SERVICIO}=RPA.
Product strategy expert providing roadmap prioritization, value stream mapping, product-market fit validation, backlog strategy, and competitive positioning. Bridges business needs with technical capabilities.
Personal and team productivity coaching specialist. Applies atomic habits, systems thinking, GTD, deep work, and time management frameworks. Trigger: productivity, habits, atomic habits, GTD, deep work, time management, systems thinking, personal effectiveness, focus strategies, habit stacking.
Prompt design, evaluation, and optimization specialist. NL-HP methodology expert for system prompts, chain-of-thought structures, and output formatting. Trigger: design prompt, optimize prompt, prompt engineering, system prompt, few-shot, chain-of-thought, NL-HP, prompt evaluation, prompt template.
QA-as-a-service strategy expert providing TMMi maturity assessment, PDCA + ISTQB Test Process alignment, test factory design, QA CoE structure, ISTQB-aligned test process improvement, automation strategy, and quality governance frameworks. Activated when {TIPO_SERVICIO}=QA.
Quality engineer providing test strategy design, performance engineering, SLO definition, and observability architecture. Designs the quality and reliability strategy for the client's system. Invoked during Phases 1, 3, and 5a.
QA Lead — validates all deliverables against acceptance criteria, catches inconsistencies across phases, enforces framework standards. The final checkpoint before any deliverable is released.
Industry-specific regulatory compliance specialist. When user asks about regulatory compliance, GDPR, HIPAA, SOX, PCI-DSS, industry regulations, data protection regulation, financial compliance, healthcare compliance, or regulatory audit.
Release management specialist covering release trains, feature flags, rollback strategies, deployment gates, and progressive delivery. Trigger: release management, feature flags, deployment strategy, release train, rollback strategy, blue-green deployment, canary release, progressive delivery, deployment gates.
Deep technology researcher with postdoctoral academic rigor. Conducts literature reviews, state-of-the-art validation, and PoC design methodology. The think tank's evidence hunter.
Risk and quality controller providing continuous governance, gate enforcement, deliverable validation, and risk monitoring. Part of the permanent triad (with discovery-conductor and delivery-manager). Present in EVERY step of the discovery pipeline. Votes on all expert committee decisions.
Security architect providing threat modeling, zero trust assessment, SLSA compliance, and DevSecOps pipeline security design. Invoked during Phases 1, 3, and 4 for security posture evaluation and hardening strategy.
Solutions architect providing end-to-end integration design, cross-cutting concerns, and infrastructure architecture. Evaluates how multiple systems compose into a working solution. Invoked during Phases 1, 2, and 4.
Site reliability engineering specialist covering SLO/SLI/SLA design, error budgets, toil reduction, incident management, and reliability practices. Trigger: SRE practices, error budget, reliability engineering, SLO, SLI, SLA, toil reduction, incident response, on-call, postmortem.
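The error-budget arithmetic behind these practices is simple; a minimal sketch:

```python
def error_budget_minutes(slo, period_days=30):
    """Downtime allowance implied by an availability SLO over a period."""
    return (1.0 - slo) * period_days * 24 * 60

def budget_remaining(slo, downtime_min, period_days=30):
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, period_days)
    return (budget - downtime_min) / budget
```

A 99.9% monthly availability SLO allows roughly 43.2 minutes of downtime; once the remaining fraction goes negative, SRE practice typically freezes risky releases until the budget recovers.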
Industry and domain subject matter expert providing sector-specific context, regulatory constraints, competitive benchmarks, market dynamics, and business domain analysis. Adapts lens based on client sector (banking, retail, health, SaaS, manufacturing, government, energy, telecom, insurance, logistics).
ESG and sustainability assessment specialist. When user asks about sustainability, ESG assessment, green IT, carbon footprint, circular economy, environmental impact, social responsibility, sustainable development goals, or climate risk.
Complex systems analyst who evaluates emergent behaviors, failure cascades, and systemic risks. Applies systems thinking, chaos theory, and Conway's Law to feasibility validation.
Team dynamics and high-performing team coaching specialist. When user asks about team coaching, psychological safety, team dynamics, conflict resolution, high-performing teams, team health, team retrospectives, or Tuckman model.
Senior Architect — system design, patterns, quality attributes, C4 modeling, technology evaluation. Owns Phase 1 (AS-IS) and Phase 4 (Architecture Design) deliverables.
Technical debt quantification and reduction specialist. Covers debt classification, interest calculation, reduction roadmaps, and code quality improvement strategies. Trigger: technical debt, code quality, debt reduction, refactoring roadmap, code smell, maintainability index, debt interest, tech debt quantification.
Senior technical lead providing code-level authority, development practices assessment, team capability evaluation, and implementation feasibility validation. Bridges architecture decisions with implementation reality. Evaluates developer experience, CI/CD maturity, and engineering culture.
Technical documentation and API documentation specialist. When user asks about technical documentation, API docs, developer guide, SDK documentation, reference documentation, user manual, technical writing, or documentation quality.
Technology landscape analyst who evaluates vendor maturity, adoption curves, and technology lifecycle positioning. Ensures proposed technologies are viable and not in decline.
Test strategy and quality engineering specialist. Covers test pyramid, TDD, BDD, contract testing, mutation testing, and test automation architecture. Trigger: test strategy, test automation, TDD, BDD, contract testing, mutation testing, test pyramid, quality engineering, test coverage, regression testing.
Learning design and training program specialist. When user asks about training design, learning path, curriculum design, bootcamp design, workshop blueprint, upskilling program, certification path, or competency-based learning.
Program-level multi-service transformation architect providing digital transformation strategy, multi-service program design, change management integration, cross-workstream dependency management, program governance, and transformation KPI frameworks. Activated when {TIPO_SERVICIO}=Digital-Transformation or Multi-Service.
UX researcher providing user research synthesis, persona development, usability assessment, user journey mapping, adoption readiness evaluation, and accessibility audit coordination. Ensures technical solutions serve actual user needs.
UX strategist providing persona-based user experience review, accessibility auditing (WCAG), design system specification, and brand-compliant visual deliverables. Invoked during Phases 2, 5a, and 5b.
Vendor evaluation and management specialist. When user asks about vendor evaluation, RFP process, vendor management, SLA negotiation, vendor risk assessment, supplier selection, request for proposal, vendor scorecard, or multi-vendor strategy.
WCAG 2.1/2.2 compliance assessment — a11y testing strategy, remediation priorities, inclusive design. Use when the user asks to "audit accessibility", "assess WCAG compliance", "evaluate a11y", "review inclusive design", or mentions screen readers, ARIA, color contrast, keyboard navigation.
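The color-contrast checks mentioned above follow the WCAG 2.x contrast-ratio formula (sRGB relative luminance). A minimal sketch of that formula:

```python
# Sketch of the WCAG 2.x contrast-ratio computation for sRGB colors.
def _luminance(rgb):
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum 21:1; WCAG AA requires >= 4.5:1 for body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```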
Adoption strategy design producing communication plan, training roadmap, resistance management tactics, and reinforcement mechanisms. Use when the user asks to "design adoption strategy", "plan change adoption", "communication plan", "training needs analysis", "resistance management", "adoption roadmap", "change communication", or mentions "post-implementation adoption", "user onboarding strategy", "technology adoption plan".
Use when the user asks to "assess agile maturity", "evaluate agile practices", "run agile readiness check", "benchmark Scrum adoption", or "audit agile capabilities". Activates when a stakeholder needs to measure agile adoption level, evaluate Scrum maturity, diagnose agile anti-patterns, compare agile readiness across teams, or baseline agile capability before a transformation initiative.
Audits existing AI system architectures against best practices — structural integrity, AI quality attributes, pattern adherence, anti-pattern detection, security compliance, and technical debt inventory. This skill should be used when the user asks to "audit AI architecture", "review ML system quality", "assess AI technical debt", "evaluate AI compliance", "detect AI anti-patterns", "review AI security posture", or mentions AI architecture review, AI system assessment, AI quality audit, drift monitoring audit, or AI governance review.
Guides implementation of AI system architectures — technology selection, pipeline implementation, model serving setup, monitoring deployment, and CI/CD automation. This skill should be used when the user asks to "implement AI architecture", "build ML pipeline", "set up model serving", "deploy AI system", "implement MLOps", "configure drift monitoring", "set up feature store", or mentions AI implementation plan, ML infrastructure setup, model deployment guide, RAG implementation, or agent framework setup.
AI Center services discovery — AI readiness assessment using MetodologIA AI SCALE methodology, use case portfolio prioritization, data readiness evaluation, model inventory, AI governance assessment, infrastructure evaluation, MetodologIA AI product integration, and AI adoption roadmap. Use when the user asks to "assess AI readiness", "evaluate AI maturity", "AI discovery", "AI use case prioritization", "MLOps assessment", "AI governance evaluation", "AI adoption roadmap", "AI strategy assessment", "evaluate AI infrastructure", "AI product fit", or mentions "AI SCALE", "responsible AI", "AI pilots", "ML pipeline", "AI Center of Excellence", "LLM adoption", "generative AI strategy".
Concept of Operations (CONOPS) for AI systems — system vision, stakeholder mapping, AI-human interaction spectrum, business value assessment, success metrics, and operational modes. This skill should be used when the user asks to "define the AI operational concept", "map AI stakeholders", "design AI-human interaction levels", "assess AI business value", "define AI success metrics", "plan AI operational modes", or mentions CONOPS, IEEE 1362, AI autonomy levels, AI value matrix, or AI system vision.
AI-specific design patterns and system tactics — Feature Store, Champion-Challenger, Shadow Deployment, Drift Detection, Explainability Wrapper, Canary Deployment, Bulkhead, and traditional patterns adapted for AI. This skill should be used when the user asks to "select AI design patterns", "apply ML patterns", "design drift detection", "implement feature store", "plan shadow deployment", "design champion-challenger", "select availability tactics for AI", or mentions AI anti-patterns, maintainability tactics, fault recovery for models, or pattern selection for ML systems.
AI pipeline architecture design — development pipelines, production pipelines, data stores, model registry, CI/CD for AI, and non-functional requirements. This skill should be used when the user asks to "design AI pipelines", "architect ML pipelines", "select data stores for AI", "design model registry", "implement CI/CD for ML", "define AI pipeline requirements", or mentions MLOps, training pipeline, inference pipeline, feature pipeline, Blue and Gold deployment, or pipeline patterns.
Use when the user asks to "use AI for project management", "augment PM with AI", "implement predictive scheduling", "parse status with NLP", or "design ML risk models". Activates when a stakeholder needs to identify AI augmentation opportunities for PM, build predictive scheduling models, automate status report parsing with NLP, design intelligent resource allocation, or create a human-AI collaboration model for project governance.
AI software architecture design — modules, layers, boundaries, design patterns, ADRs, quality attributes, and technical debt strategy for AI-enabled systems. This skill should be used when the user asks to "design AI system structure", "define AI module boundaries", "select AI architecture patterns", "document AI architecture decisions", "evaluate AI code architecture", or mentions AI pipelines, feature stores, model serving, drift detection, ML quality attributes, explainability architecture, or AI technical debt.
Comprehensive testing strategy for AI systems — testing scope matrix (6 types × 6 layers), model prediction testing, data quality testing, compliance and fairness testing, integration approaches, and CI/CD test automation. This skill should be used when the user asks to "define AI testing strategy", "test ML models", "design data quality tests", "plan fairness testing", "test AI pipelines", "design integration tests for ML", or mentions adversarial testing, drift simulation, model regression testing, bias testing, explainability testing, or AI test automation.
Horizontal analysis of financial statements (P&L, balance sheet, cash flow statement, notes/annexes) with two-period YoY comparison. Generates standardized executive reports for the Board of Directors and C-Level with absolute and relative variances, change drivers, alerts, and strategic recommendations. Use this skill WHENEVER the user mentions horizontal analysis, comparing financial statements, year-over-year variance, YoY, financial trend analysis, period comparison, financial evolution, interannual changes, financial delta, or attaches any financial statement and asks for a comparative analysis. Trigger: horizontal analysis, compare financial statements, YoY, interannual variance, financial evolution, financial delta, compare periods.
Analytics pipeline design — dbt-style transformations, data modeling, testing, documentation. Use when the user asks to "design analytics models", "set up dbt project", "plan data transformations", "define data contracts", "model star schema", or mentions staging models, marts, incremental strategies, or materializations.
API design & governance — REST/GraphQL/gRPC, versioning, rate limiting, DX, contract-first. Use when the user asks to "design an API", "define API strategy", "implement contract-first", "set up API governance", "design API versioning", "improve developer experience", or mentions REST, GraphQL, gRPC, AsyncAPI, OpenAPI, API gateway, rate limiting, or API catalog.
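Rate limiting, named in the triggers above, is most often implemented as a token bucket behind the API gateway. An illustrative sketch (the rate and capacity values are arbitrary examples):

```python
import time

# Illustrative token-bucket rate limiter, a common pattern behind API gateways.
class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # burst of 2 allowed, third immediate call rejected
```

The bucket allows short bursts up to `capacity` while enforcing the average `rate`, which is why gateways prefer it over a fixed per-second counter.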
Target state (TO-BE) architecture design — C4 L2 containers, ADRs, nightmare scenario mitigations, MVP component, phased Strangler Fig migration. Use when the user asks to "design the target architecture", "create a TO-BE architecture", "plan a migration strategy", "define ADRs for a new system", "mitigate nightmare scenarios", or mentions Strangler Fig, C4 diagrams, saga pattern, anti-corruption layer, or legacy modernization.
Universal current-state assessment producing 10-section analysis for ANY MetodologIA service type. Use when the user asks to "analyze the codebase", "assess current architecture", "run AS-IS analysis", "technical audit", "evaluate tech debt", "code quality assessment", "assess current state", "service assessment", "QA maturity", "PMO assessment", "RPA readiness", "data maturity", "cloud readiness", "design maturity", "talent gap analysis", or mentions "Phase 1", "current state", "legacy system review", "technical health check".
Use when the user asks to "track assumptions", "document constraints", "log assumptions", "manage project assumptions", or "validate planning hypotheses". Activates when a stakeholder needs to create an assumption register, document project constraints, link assumptions to risks, establish assumption validation cadence, or audit planning hypotheses across the project lifecycle.
Use when the user asks to "identify PM automation candidates", "automate PM reporting", "reduce manual PM processes", "find automation quick wins", or "design workflow automation". Activates when a stakeholder needs to scan PM processes for automation potential, calculate automation ROI, design automation specifications, prioritize automation backlog, or plan phased automation rollout across the PMO.
Audits AWS AI/GenAI architectures against the Well-Architected GenAI Lens — operational excellence, security, reliability, performance, cost optimization, and sustainability. This skill should be used when the user asks to "audit AWS AI architecture", "review Bedrock configuration", "assess SageMaker security", "optimize AWS AI costs", "evaluate AWS GenAI compliance", "review AWS Well-Architected for AI", or mentions AWS AI audit, Bedrock audit, SageMaker review, AWS GenAI security assessment, or AWS AI cost optimization review.
Designs AWS cloud architectures for AI and GenAI workloads applying the Well-Architected Framework GenAI Lens (6 pillars: GENOPS, GENSEC, GENREL, GENPERF, GENCOST, GENSUS), AWS service selection matrices, RAG/Agent/Fine-Tuning patterns, cost optimization strategies, and enterprise reference architectures. Activated when designing, evaluating, or migrating AI systems on AWS.
Guides implementation of AI/GenAI architectures on AWS — Bedrock setup, SageMaker pipelines, OpenSearch vector stores, API Gateway configuration, security hardening, cost controls, and deployment automation. This skill should be used when the user asks to "implement AI on AWS", "set up Bedrock", "deploy SageMaker pipeline", "configure OpenSearch for RAG", "implement AWS AI security", "set up AWS AI monitoring", or mentions AWS AI deployment, Bedrock Knowledge Base setup, SageMaker endpoint deployment, AWS GenAI implementation, or AWS AI CI/CD pipeline.
Use when the user asks to "plan benefits realization", "define KPIs", "track success metrics", "establish benefits framework", or "measure project value delivery". Activates when a stakeholder needs to link deliverables to business outcomes, define measurable KPIs with targets, design post-project benefit tracking, create a benefits ownership matrix, or establish a sustainability plan for realized benefits.
BI and analytics service discovery — data maturity assessment (DCAM/DMM), dashboard landscape inventory, semantic layer evaluation, self-service analytics readiness, data literacy assessment, analytics use case portfolio, and BI transformation roadmap. Distinct from bi-architecture (design skill); this is the discovery/assessment for BI-as-a-service engagements. Use when the user asks to "assess BI maturity", "evaluate analytics capabilities", "dashboard inventory", "data literacy assessment", "semantic layer review", "self-service analytics readiness", "analytics use case prioritization", "BI transformation roadmap", or mentions BI-as-a-service, analytics maturity, dashboard consolidation, data democratization, DCAM, DMM, or data literacy.
BI solution design — semantic layers, dashboard patterns, self-service analytics, KPI frameworks. Use when the user asks to "design BI architecture", "build a KPI framework", "set up self-service analytics", "design dashboard hierarchy", "create a semantic layer", or mentions metric trees, drill-down patterns, or reporting strategy.
Generates branded DOCX (Word) documents using the MetodologIA Neo-Swiss Design System v6. Uses python-docx to create professional documents with navy headers, gold accents, Poppins headings, and Trebuchet MS body text. Use when the user requests a Word document, DOCX output, or when the ghost menu routes to DOCX.
Generates branded PPTX (PowerPoint) presentations using the MetodologIA Neo-Swiss Design System v6. Uses python-pptx to create slide decks with navy backgrounds, gold accents, Poppins titles, and Trebuchet MS body text. Use when the user requests a presentation, slide deck, PPTX output, or when the ghost menu routes to PPTX format.
Generates branded XLSX (Excel) spreadsheets using the MetodologIA Neo-Swiss Design System v6. Uses openpyxl to create professional spreadsheets with navy headers, gold accent rows, and semantic conditional formatting. Use when the user requests a spreadsheet, XLSX output, or when the ghost menu routes to XLSX.
Use when the user asks to "create a budget", "estimate costs", "define contingency reserves", "build cost breakdown structure", or "establish a cost baseline for EVM". Activates when a stakeholder needs to produce a cost baseline, aggregate bottom-up estimates, calculate contingency and management reserves, generate a time-phased budget with S-curve, or define cost accounts for earned value tracking.
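The bottom-up estimates and contingency reserves described above typically rest on PERT three-point arithmetic. A sketch, where the effort figures are hypothetical and the one-sigma reserve is an assumed policy (PMOs vary):

```python
# Sketch: PERT three-point estimate plus an assumed one-sigma contingency reserve.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, sd = pert_estimate(4, 6, 14)   # effort in person-weeks (hypothetical task)
contingency = sd                          # one-sigma reserve; the policy is an assumption
print(expected, round(sd, 2))
```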
Use when the user asks to "track budget", "monitor costs", "review budget variance", "check contingency burn", or "forecast remaining project costs". Activates when a stakeholder needs to analyze cost variances against baseline, monitor contingency reserve consumption, update budget forecasts, generate burn rate analysis, or produce corrective action recommendations for cost overruns.
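The variance analysis and forecasting described above use the standard earned-value formulas. A minimal sketch with hypothetical monetary values:

```python
# Sketch of standard earned-value management (EVM) formulas for budget tracking.
def evm(ev: float, ac: float, pv: float, bac: float) -> dict:
    cpi = ev / ac                 # cost performance index
    spi = ev / pv                 # schedule performance index
    return {
        "CV": ev - ac,            # cost variance (negative = over budget)
        "SV": ev - pv,            # schedule variance (negative = behind plan)
        "CPI": cpi,
        "SPI": spi,
        "EAC": bac / cpi,         # estimate at completion, CPI-based forecast
    }

m = evm(ev=500, ac=550, pv=520, bac=1000)
print(m["CV"], round(m["EAC"]))   # CV of -50 inflates the completion forecast to 1100
```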
Use when the user asks to "plan capacity", "forecast resource demand", "analyze resource availability", "match supply to demand", or "model resource scenarios". Activates when a stakeholder needs to analyze resource supply vs demand, identify capacity gaps, detect over-allocations, build time-phased capacity models, or plan proactive hiring and cross-training decisions before bottlenecks impact delivery.
Use when the user asks to "design ceremonies", "plan meeting cadence", "create facilitation guides", "define ceremony templates", or "optimize meeting calendar". Activates when a stakeholder needs to design a complete ceremony calendar, define time-boxes and agendas per ceremony, create facilitation guides, identify ceremony anti-patterns, or measure ceremony effectiveness across the project lifecycle.
Use when the user asks to "facilitate a ceremony", "run a retrospective", "lead sprint planning", "moderate a meeting", or "design facilitation techniques". Activates when a stakeholder needs facilitation guides for project ceremonies, engagement techniques for team workshops, conflict navigation protocols for heated discussions, anti-pattern recognition during ceremony execution, or ceremony effectiveness measurement.
Use when the user asks to "set up change control", "evaluate change requests", "manage scope changes", "establish CCB governance", or "process a change request". Activates when a stakeholder needs to establish a change control process, create change request templates, define CCB composition and decision criteria, evaluate change impact on scope/schedule/cost, or track change request trends across the project.
Organizational change readiness assessment producing readiness scorecard, resistance map, and intervention plan. Use when the user asks to "assess change readiness", "evaluate organizational readiness", "change impact analysis", "resistance mapping", "ADKAR assessment", "readiness scorecard", or mentions "Phase 5b", "adoption risk", "organizational capacity for change".
Interactive initialization CLI that configures the client environment, pre-populates discovery/, runs the G0 security scan, and prepares the context for discovery.
Use when the user asks to "audit PM tools visually", "inspect Jira configuration", "review Azure DevOps setup", "check Monday.com boards", or "evaluate tool configuration". Activates when a stakeholder needs to perform a visual audit of PM tool configurations, capture screenshot evidence of misconfigurations, compare tool setup against methodology best practices, identify workflow anti-patterns in PM tools, or produce a remediation roadmap for tool optimization.
Use when the user asks to "close the project", "generate closure report", "document final metrics", "perform administrative closure", or "obtain formal acceptance". Activates when a stakeholder needs to produce a project closure report, compare final actuals vs baseline, compile lessons learned, obtain formal sponsor acceptance, or execute administrative closure including resource release and documentation archiving.
Cloud migration planning — 7R assessment, workload classification, wave planning, cutover. Use when the user asks to "plan cloud migration", "assess workloads for migration", "design landing zone", "create migration waves", "plan cutover strategy", or mentions 7R, rehost, replatform, refactor, lift-and-shift, or migration factory.
Cloud-native design — containers, service mesh, serverless, multi-cloud, FinOps. Use when the user asks to "design cloud-native architecture", "containerize the application", "evaluate service mesh", "plan serverless migration", "implement multi-cloud strategy", "optimize cloud costs", or mentions Kubernetes, Istio, Docker, Helm, Terraform, FinOps, or 12-factor.
Cloud-as-a-Service discovery — cloud readiness assessment, DevOps maturity (DORA), cloud operations model, FinOps assessment, cloud security posture, and cloud services roadmap. Distinct from cloud-migration (which covers migration strategy); this covers Cloud as an ongoing service offering. Use when the user asks to "assess cloud operations", "evaluate DevOps maturity", "DORA assessment", "FinOps evaluation", "cloud security posture", "SRE maturity", "cloud operations model", "cloud service roadmap", or mentions cloud-as-a-service, platform engineering, toil reduction, FinOps, cloud cost optimization, or cloud operations.
Business model and value capture strategy — identifies optimal commercial structures for technology engagements beyond T&M. Use when the user asks to "define business model", "structure the deal", "identify value capture", "design pricing strategy", "explore commercial models", or mentions earned value, joint venture, revenue share, outcome-based, licensing model, or commercial structure.
Use when the user asks to "create a communication plan", "define communication matrix", "plan reporting cadence", "design stakeholder communications", or "establish escalation protocols". Activates when a stakeholder needs to design a communication matrix, define channel strategy, create reporting templates, establish escalation communication paths, or measure communication effectiveness across the project.
Competitive technical landscape analysis, technology differentiation assessment, build-vs-buy analysis, and market positioning evaluation. Use when the user asks to "analyze competition", "compare technology options", "build vs buy analysis", or mentions competitive matrix, differentiation map, or market positioning.
Regulatory and standards compliance assessment — GDPR, SOX, PCI-DSS, HIPAA, ISO 27001, NIST CSF. Use when the user asks to "evaluate compliance", "audit regulatory gaps", "assess GDPR readiness", "review PCI-DSS compliance", or mentions regulatory frameworks, data protection, compliance matrix.
Use when the user asks to "track compliance", "audit regulatory requirements", "verify compliance status", "prepare for regulatory audit", or "map compliance requirements". Activates when a stakeholder needs to catalog applicable regulations, map requirements to project activities, design evidence collection processes, track compliance gaps, or prepare documentation packages for external audits and certifications.
Use when the user asks to "resolve stakeholder conflict", "manage team conflict", "mediate disagreements", "navigate political disputes", or "de-escalate team tensions". Activates when a stakeholder needs to classify conflict types, apply resolution techniques, facilitate interest-based negotiation, build coalitions for alignment, or design structural prevention measures to avoid recurring conflicts.
Use when the user asks to "optimize context", "reduce token usage", "prune context window", "configure progressive loading", or "manage session state". Activates when a stakeholder needs to optimize context window usage, configure progressive MOAT loading levels, design intelligent pruning strategies, manage session state persistence, or implement token-efficient skill routing across the agent framework.
Use when the user asks to "plan contingencies", "create fallback plans", "define contingency reserves", "design trigger-response protocols", or "calculate schedule reserves". Activates when a stakeholder needs to develop fallback strategies for high-priority risks, calculate schedule and cost reserves from quantitative analysis, define trigger protocols for rapid contingency activation, or track reserve consumption over time.
Use when the user asks to "improve processes", "run a retrospective analysis", "implement kaizen", "optimize PDCA cycles", or "track improvement implementation". Activates when a stakeholder needs to identify improvement opportunities from project data, apply root cause analysis techniques, prioritize improvements by effort-impact ratio, implement PDCA cycles, or embed improvements into standard processes.
Persuasive writing for executive audiences — value propositions, calls to action, cost-of-inaction narratives, and compelling summaries. Use when generating executive summaries, pitch narratives, scenario value propositions, recommendation justifications, or any prose that must drive a decision.
Cost driver identification — effort inductors, scope drivers, magnitude estimation, team composition modeling, risk-adjusted timeline ranges, service engagement sizing, consulting effort, automation ROI, and staffing model. Use when the user asks to "estimate effort", "identify cost drivers", "size the project", "plan team composition", "identify effort inductors", or mentions WBS, sizing, contingency, burn rate, PERT, Monte Carlo, or "Phase 4" cost work. NEVER produces final prices — produces drivers, ranges, and magnitude indicators with costing disclaimers.
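The risk-adjusted timeline ranges and Monte Carlo work mentioned above can be sketched with the standard library alone. The task triples below are hypothetical (optimistic, most-likely, pessimistic person-days), and reporting a P50/P80 range rather than a point figure matches the skill's no-final-prices stance:

```python
import random

# Illustrative Monte Carlo sizing: sample task durations from triangular
# distributions; the task triples (low, mode, high) are hypothetical inputs.
def simulate_total_effort(tasks, runs=10_000, seed=42):
    rng = random.Random(seed)
    totals = [
        sum(rng.triangular(low, high, mode) for low, mode, high in tasks)
        for _ in range(runs)
    ]
    totals.sort()
    # Report a P50/P80 range rather than a single-point (false-precision) figure.
    return totals[runs // 2], totals[int(runs * 0.8)]

p50, p80 = simulate_total_effort([(3, 5, 9), (8, 10, 15), (2, 4, 10)])
print(p50 < p80)
```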
Use when the user asks to "calculate cost of delay", "run WSJF analysis", "prioritize by economic value", "quantify delay impact", or "sequence work by value". Activates when a stakeholder needs to quantify the economic cost of delaying features, apply Weighted Shortest Job First prioritization, transform subjective prioritization into data-driven economic sequencing, or perform sensitivity analysis on priority rankings.
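The WSJF prioritization above reduces to cost of delay divided by job size. A sketch using SAFe-style relative scores (the features and their scores are hypothetical):

```python
# SAFe-style WSJF sketch: cost-of-delay components over job size, relative points.
def wsjf(business_value: int, time_criticality: int, risk_reduction: int, job_size: int) -> float:
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

backlog = {
    "checkout-redesign": wsjf(8, 5, 3, 8),   # hypothetical feature scores
    "audit-logging":     wsjf(3, 8, 8, 5),
}
# Higher WSJF first: audit-logging (3.8) outranks checkout-redesign (2.0).
print(sorted(backlog, key=backlog.get, reverse=True))
```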
Use when the user asks to "convert skills to Cursor", "export to Codex", "convert to Gemini format", "port skills to another AI platform", or "create multi-platform skills". Activates when a stakeholder needs to convert MOAT skills from Claude Code format to Cursor rules, GitHub Codex AGENTS.md, Google Gemini system instructions, or other AI coding assistant formats while preserving skill logic and evidence protocols.
Use when the user asks to "configure dashboards", "set up data feeds", "design monitoring tools", "automate dashboard updates", or "integrate PM data sources". Activates when a stakeholder needs to configure PM dashboard tooling, set up automated data feeds from PM tools, design visualization components, configure alert thresholds, or establish dashboard refresh cadence and access control.
Data pipeline architecture — ingestion, orchestration, quality, lineage, SLAs. Use when the user asks to "design data pipelines", "architect ingestion", "set up orchestration", "plan data lake", "design lakehouse", or mentions Airflow, Dagster, CDC, data lineage, or pipeline SLAs.
Data governance framework — catalog, ownership, classification, retention, privacy compliance, data mesh. Use when the user asks to "build a data catalog", "define data ownership", "classify sensitive data", "design retention policies", "ensure privacy compliance", "implement data mesh governance", or mentions GDPR, CCPA, LGPD, data stewardship, PII, data lineage, or federated governance.
Data mesh readiness assessment and strategy using Zhamak Dehghani's 4 principles. Use when the user asks to "assess data mesh readiness", "design data mesh strategy", "domain data ownership", "data as a product", "self-serve data platform", "federated data governance", "data mesh migration", or mentions "data decentralization", "data domain ownership", "data product thinking".
Data quality framework — profiling, validation, anomaly detection, data contracts, SLA monitoring. Use when the user asks to "design data quality framework", "set up data contracts", "plan data validation", "detect data anomalies", "define data SLAs", or mentions data profiling, quarantine patterns, or remediation workflows.
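The data-contract and quarantine patterns named above can be sketched as declarative rules applied per row. The field names and rules here are hypothetical, not part of the framework:

```python
# Minimal data-contract sketch: validate rows against declared rules and
# quarantine violations. Field names and rules are hypothetical examples.
CONTRACT = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"USD", "EUR", "GBP"},
}

def validate(rows):
    good, quarantine = [], []
    for row in rows:
        failures = [f for f, rule in CONTRACT.items() if not rule(row.get(f))]
        (quarantine if failures else good).append((row, failures))
    return good, quarantine

good, bad = validate([
    {"order_id": 1, "amount": 9.99, "currency": "USD"},
    {"order_id": -2, "amount": 5.0, "currency": "XXX"},
])
print(len(good), len(bad))
```

In a real pipeline the quarantined rows feed a remediation workflow rather than being dropped silently.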
ML/AI system design — model lifecycle, feature stores, experiment tracking, model serving, MLOps pipelines. Use when the user asks to "design an ML system", "architect model serving", "set up experiment tracking", "design feature store", "plan MLOps pipeline", or mentions model registry, A/B testing, drift detection, or retraining triggers.
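Drift detection, one of the triggers above, is commonly signaled with the Population Stability Index over binned feature distributions. A sketch with hypothetical bin counts (the 0.1/0.25 thresholds are a rule of thumb, not a standard):

```python
import math

# Sketch of the Population Stability Index (PSI), a common drift signal:
# compare the binned distribution of a feature at training time vs production.
def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)   # eps guards empty bins
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [100, 300, 400, 200]   # bin counts at training time (hypothetical)
current  = [ 80, 250, 420, 250]   # bin counts in production (hypothetical)
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
print(round(psi(baseline, current), 3))
```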
Transforms metrics and findings into meaningful narratives — insight extraction, metrics-to-meaning conversion, comparison framing, and magnitude communication. Use when presenting scoring matrices, coverage metrics, performance data, cost estimates, or any quantitative finding that needs interpretation and context.
Visual data narrative design — chart selection, Mermaid diagram storytelling, visual hierarchy, dashboard narratives, and annotation strategy. Use when selecting chart types, designing diagram narratives, building visual sequences for presentations, or annotating data visualizations for maximum comprehension.
Database design — schema patterns, indexing, partitioning, replication, migration, performance tuning. Use when the user asks to "design the database schema", "plan indexing strategy", "set up replication", "partition large tables", "migrate database schema", "tune query performance", or mentions normalization, sharding, B-tree indexes, zero-downtime migration, or connection pooling.
Use when the user asks to "define Definition of Done", "set acceptance criteria", "establish DoD/DoR standards", "define quality standards", or "create completion checklists". Activates when a stakeholder needs to establish Definition of Done criteria at story/feature/release levels, create Definition of Ready checklists, design acceptance criteria templates, define exception handling processes, or plan DoD evolution protocols.
System and library dependency mapping, vulnerability scanning, upgrade risk assessment, and license compliance analysis. Use when the user asks to "map dependencies", "analyze dependency risk", "check license compliance", "assess upgrade risk", or mentions dependency graph, vulnerability scanning, or supply chain security.
Use when the user asks to "map dependencies", "visualize cross-project dependencies", "identify dependency risks", "detect circular dependencies", or "create dependency network diagrams". Activates when a stakeholder needs to catalog inter-project dependencies, visualize dependency networks, identify critical dependency chains, detect circular dependencies, or establish cross-project coordination protocols for dependency management.
MetodologIA branded design system — full-fidelity output templates for HTML, DOCX, XLSX, PPTX, and MD formats. Produces self-contained, accessible, production-ready deliverables in any format using the canonical MetodologIA Neo-Swiss Design System v6 tokens, components, and page templates. Use when generating branded outputs, converting between formats, creating HTML deliverables, building DOCX/PPTX/XLSX from markdown, or establishing brand compliance for any output.
Configurable design system for HTML deliverables with tokens, page structure, and component library. Use when the user asks to "apply design system", "generate styled HTML", "set up brand tokens", "configure brand colors", or mentions "design system", "design tokens", "component library", "brand config", "page template".
Developer experience (DX) platform assessment, inner loop optimization, toolchain evaluation, and onboarding friction analysis. Use when the user asks to "assess developer experience", "optimize inner loop", "evaluate toolchain", or mentions DX scorecard, developer productivity, or cognitive load reduction.
Use when the user asks to "align DevOps with PM", "bridge CI/CD with milestones", "integrate deployment pipelines with project tracking", "map DORA metrics to PM KPIs", or "design release-milestone binding". Activates when a stakeholder needs to map CI/CD pipelines to milestone tracking, align release cadences with sprint ceremonies, correlate DORA metrics with project KPIs, or design an integrated DevOps-PM operating model.
DevSecOps pipeline architecture — CI/CD design, shift-left security, supply chain integrity, release management, and compliance automation. Use when the user asks to "design the CI/CD pipeline", "integrate security into delivery", "set up SBOM and artifact signing", "automate compliance", "measure DORA metrics", or mentions SAST, SCA, DAST, secrets scanning, IaC scanning, canary deployment, or policy-as-code.
Program-level digital transformation discovery — digital maturity assessment, service portfolio mapping, program architecture, change readiness, multi-service integration, program governance, and transformation roadmap. Use when the user asks to "assess digital maturity", "plan digital transformation", "design transformation program", "evaluate change readiness", "map service portfolio", "program governance", "transformation roadmap", or mentions digital transformation, maturity assessment, multi-workstream, program architecture, change management, or transformation program.
DR/BCP planning — RPO/RTO definition, failover design, backup strategies, tabletop exercises. Use when the user asks to "plan disaster recovery", "define RPO/RTO", "design failover", "create BCP", or mentions business continuity, backup strategy, recovery runbook, tabletop exercise.
Discovery-to-execution handover — operational transition package, commercial activation, governance transfer, and Phase 1 kickoff plan. Use when the user asks to "create handover", "transition to operations", "prepare delivery handoff", "activate commercial proposal", "hand off discovery", "prepare operations package", "close discovery engagement", or mentions handover, transition, delivery kickoff, proposal preparation, or discovery close-out.
This skill should be used when the user asks to "run a discovery", "orchestrate the pipeline", "start a consulting engagement", "coordinate the dream team", "plan a discovery session", "manage discovery inputs", or mentions discovery orchestration, phase sequencing, quality gates, data contracts, expert committee, dream team, or consulting pipeline. Always use this skill as the entry point for any discovery engagement — it coordinates all other skills.
Use when the user asks to "run a project discovery retrospective", "review discovery outcomes", "assess discovery effectiveness", "calibrate pipeline parameters", or "measure discovery quality". Activates when a stakeholder needs to conduct a quantitative post-discovery review, measure pipeline execution quality, assess deliverable completeness, evaluate estimation accuracy, or update APEX pipeline parameters based on retrospective findings.
Doc-as-code strategy design, documentation taxonomy, content governance, and knowledge base architecture. Use when the user asks to "design documentation strategy", "build knowledge base", "create doc-as-code pipeline", or mentions documentation governance, content taxonomy, or technical writing standards.
Use when the user asks to "consult a methodology expert", "get methodology advice", "switch methodology perspective", "resolve a methodology debate", or "get framework-specific guidance". Activates when a stakeholder needs adaptive methodology guidance, framework-specific practice recommendations, methodology debate resolution, anti-pattern diagnosis and remediation, or contextual advice that shifts persona based on declared project methodology.
Context-adaptive industry expert that dynamically adopts the right SME lens based on client sector. Use when the user asks to "add industry context", "act as domain expert", "give me the banking/retail/health perspective", or mentions "SME", "subject matter expert", "industry lens", "sector analysis", "regulatory context".
Earned Value Management analysis — CPI, SPI, EAC forecasting, trend analysis, S-curve visualization. Use when the user asks to "run EVM analysis", "calculate CPI/SPI", "forecast EAC", "track earned value", "measure project performance", or mentions earned value management, CPI, SPI, EAC, ETC, TCPI, BAC, cost performance, schedule performance, variance analysis.
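The EVM indices this skill computes follow the standard PMBOK formulas. A minimal sketch with hypothetical input values (the BAC/EV/PV/AC figures are illustrative, not from any real project):

```python
# Earned Value Management core metrics (standard PMBOK formulas).
# Input values are hypothetical, for illustration only.
BAC = 100_000.0  # Budget at Completion
EV = 40_000.0    # Earned Value (budgeted cost of work performed)
PV = 50_000.0    # Planned Value (budgeted cost of work scheduled)
AC = 45_000.0    # Actual Cost (actual cost of work performed)

CPI = EV / AC                   # Cost Performance Index (<1 = over budget)
SPI = EV / PV                   # Schedule Performance Index (<1 = behind)
EAC = BAC / CPI                 # Estimate at Completion (CPI-based forecast)
ETC = EAC - AC                  # Estimate to Complete
TCPI = (BAC - EV) / (BAC - AC)  # To-Complete Performance Index

print(f"CPI={CPI:.2f} SPI={SPI:.2f} EAC={EAC:,.0f} ETC={ETC:,.0f} TCPI={TCPI:.2f}")
```

With these numbers the project is both over budget (CPI ≈ 0.89) and behind schedule (SPI = 0.80), so the CPI-based forecast inflates the final cost to 112,500.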
Use when the user asks to "create engagement strategy", "plan stakeholder engagement", "design influence approach", "manage stakeholder resistance", or "build coalition support". Activates when a stakeholder needs to design targeted engagement strategies, move stakeholders from current to desired engagement levels, build champion coalitions, analyze and respond to resistance, or track engagement effectiveness over time.
Enterprise architecture alignment — capability mapping, domain decomposition, governance, technology radar, and strategic initiative roadmap. Use when the user asks to "map business capabilities", "build a technology radar", "define architecture governance", "prioritize strategic initiatives", "design team topologies", or mentions DDD domains, ARB, DORA metrics, maturity models, or target operating model.
Event-driven architecture — event catalog, schema registry, eventual consistency, saga, CQRS, event sourcing. Use when the user asks to "design event-driven system", "build event catalog", "implement CQRS", "design saga patterns", "set up schema registry", "implement event sourcing", or mentions Kafka, RabbitMQ, Pulsar, event bus, dead-letter queue, consumer groups, or event replay.
Execution tracking with 1-day sprints per developer, burndown charts (Atlassian-style), velocity tracking using the MetodologIA productivity model (1 FTE = 1 shippable feature/day from Sprint 2). Sprint 1 = onboarding. Produces burndown dashboards, velocity reports, and completion projections. Use when dimensioning execution effort, tracking delivery velocity, creating burndown projections, or when "burndown", "velocity", "sprints diarios", "1 feature por día", or "tracking de ejecución" is mentioned.
Use when the user asks to "prepare executive summary", "brief the sponsor", "create sponsor update", "write C-level presentation", or "produce steering committee report". Activates when a stakeholder needs to produce decision-focused executive briefings, distill complex project data into 5-minute reads, present RAG status with strategic alignment, frame decisions with options and recommendations, or prepare steering committee materials.
Use when the user asks to "create executive dashboard", "build C-level view", "design KPI dashboard", "produce management dashboard", or "configure portfolio health view". Activates when a stakeholder needs to design a C-level dashboard showing project/portfolio health, select and configure KPI visualizations, create drill-down capability for areas of concern, or establish dashboard refresh cadence and governance.
C-level executive pitch with financial modeling and persuasion architecture. Use when the user asks to "create a pitch", "build a business case", "justify the investment", "present to executives", "ROI analysis", "executive summary", or mentions "C-level presentation", "budget approval", "NPV", "IRR", "payback period", "business case", "Phase 5b".
Use when the user asks to "build a business case", "calculate NPV", "analyze ROI", "run cost-benefit analysis", or "produce financial justification for a project". Activates when a stakeholder needs to produce a financial business case with NPV/IRR/payback analysis, build discounted cash flow models, perform sensitivity analysis on key assumptions, model best/most-likely/worst-case scenarios, or present go/no-go financial recommendations.
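The discounted cash flow math behind this skill's NPV and payback outputs can be sketched in a few lines. The cash flows and 10% hurdle rate below are assumptions chosen for illustration:

```python
# Discounted cash flow NPV and simple payback for a hypothetical project.
# cash_flows[0] is the upfront investment (year 0); all values are illustrative.
cash_flows = [-100_000.0, 30_000.0, 40_000.0, 50_000.0, 60_000.0]
discount_rate = 0.10  # 10% hurdle rate (assumption)

npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Simple (undiscounted) payback period: first year cumulative cash turns positive.
cumulative, payback_year = 0.0, None
for t, cf in enumerate(cash_flows):
    cumulative += cf
    if cumulative >= 0 and payback_year is None:
        payback_year = t

print(f"NPV={npv:,.0f}, payback in year {payback_year}")
```

A positive NPV at the hurdle rate (here ≈ 38,877) supports a "go" recommendation; sensitivity analysis would rerun this with the rate and cash flows perturbed.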
Cloud financial operations assessment and strategy using FinOps Foundation framework. Use when the user asks to "assess cloud costs", "optimize cloud spending", "FinOps assessment", "cloud cost analysis", "rightsizing", "reservation strategy", "showback/chargeback model", "cloud unit economics", "cost allocation", or mentions "cloud financial management", "cost optimization", "FinOps maturity".
DDD domain taxonomy + 8-12 end-to-end business flows with trama specifications, process mapping, service flow documentation, and operational flow tracing. Use when the user asks to "map flows", "document business processes", "trace integrations", "identify failure points", "domain mapping", "DDD analysis", or mentions "Phase 2", "flow mapping", "integration matrix", "dependency graph", "swimlane diagrams", "business process documentation".
Comprehensive functional specification with use cases, business rules, complexity/risk matrix, service specification, deliverable specification, and engagement spec. Use when the user asks to "write functional specs", "document use cases", "define business rules", "create requirements", "specification document", or mentions "Phase 5a", "functional specification", "MVP scope", "acceptance criteria", "casos de uso", "reglas de negocio".
Functional analysis toolkit with 6 tools for requirements engineering. Use when the user asks to "run event storming", "create a story map", "extract business rules", "write acceptance criteria", "build traceability matrix", "detect anti-patterns", or mentions "Given/When/Then", "functional toolbelt", "requirements quality".
Use when the user asks to "request funding", "justify budget", "prepare investment proposal", "build capital request", "draft funding justification", or mentions funding request, budget justification, capital request, investment proposal. Triggers on: prepares an investment case, drafts a budget approval package, justifies project funding, builds a capital expenditure request, creates a funding drawdown schedule.
Generative AI architecture — RAG patterns, LLM orchestration, multi-model tiering, agent workflow design, vector database architecture, knowledge connectors, and GenAI quality assurance. This skill should be used when the user asks to "design RAG architecture", "architect LLM system", "select vector database", "design AI agents", "implement knowledge retrieval", "plan GenAI quality", or mentions RAG, embeddings, vector search, LLM orchestration, agent framework, context-aware generation, hallucination reduction, or multi-model routing.
Use when the user asks to "define governance", "create governance model", "set up escalation paths", "design authority matrix", "establish decision framework", or mentions project governance, steering committee, decision framework, authority levels, escalation matrix. Triggers on: builds a governance charter, designs escalation paths, defines decision-making authority, creates steering committee structure, maps authority levels.
Use when the user asks to "assess hybrid methodology readiness", "evaluate hybrid approach", "check hybrid methodology maturity", "measure integration capability", "diagnose water-scrum-fall", or mentions hybrid assessment, hybrid readiness, mixed methodology evaluation, iterative-sequential integration maturity. Triggers on: evaluates hybrid methodology maturity, detects hybrid anti-patterns, scores integration capability, assesses dual-governance readiness, produces hybrid adoption roadmap.
Use when the user asks to "design a hybrid approach", "combine agile and waterfall", "create hybrid methodology", "integrate iterative and sequential delivery", "build adaptive lifecycle", or mentions hybrid PM, water-scrum-fall, bimodal, agile-traditional blend, adaptive lifecycle. Triggers on: designs a hybrid methodology, maps components to delivery approaches, creates interface agreements between agile and waterfall, unifies governance across methodologies, blends iterative and predictive planning.
Use when the user asks to "test a hypothesis", "validate assumptions through delivery", "run experiment-driven project", "design build-measure-learn cycles", "validate project assumptions", or mentions hypothesis-driven delivery, HDD, validated learning, experiment design, build-measure-learn. Triggers on: converts assumptions into testable hypotheses, designs minimum viable experiments, facilitates pivot-or-persevere decisions, documents validated learning, ranks hypotheses by risk and impact.
Hypothesis-Driven Development (HDD) framework for structuring modernization proposals as testable hypotheses with Lean Startup cycles (Build-Measure-Learn). Transforms features into hypotheses with metrics, experiments, and kill/pivot/persevere thresholds. Use when formulating scenarios as hypotheses, designing validation experiments, applying Lean Startup to discovery, or when "HDD", "hypothesis", "hipótesis", "lean startup", "build-measure-learn", "experiment", "kill/pivot/persevere", or "validación de hipótesis" is mentioned.
Incident response framework — severity classification, escalation paths, postmortem templates. Use when the user asks to "design incident process", "define severity levels", "create escalation paths", "build postmortem template", or mentions incident response, on-call, war room, blameless postmortem.
Infrastructure and platform architecture — compute, network, storage, HA/DR, IAM, cloud landing zones, and cost optimization. Use when the user asks to "design cloud infrastructure", "plan network topology", "define HA/DR strategy", "set up cloud landing zones", "optimize cloud costs", or mentions VPC, Kubernetes, serverless, multi-AZ, IAM, reserved instances, or chaos testing.
Use when the user asks to "analyze project inputs", "process documents", "extract requirements", "review project brief", "parse RFP content", or mentions input processing, document analysis, requirement extraction, project brief analysis. Triggers on: analyzes project input documents, extracts structured requirements from briefs, detects contradictions in source documents, normalizes project inputs for planning, produces input completeness scorecard.
System integration patterns — point-to-point, ESB, iPaaS, event mesh, API contract management, data mapping. Use when the user asks to "design integrations", "map system connections", "define API contracts", "plan event-driven integration", or mentions ESB, iPaaS, MuleSoft, API gateway, event mesh, data mapping.
Use when the user asks to "plan integration", "map cross-project dependencies", "define interface agreements", "coordinate between projects", "manage cross-team dependencies", or mentions integration management, cross-project coordination, interface contracts. Triggers on: maps integration points between components, defines interface data contracts, creates dependency matrices, designs cross-project coordination protocols, produces integration verification checklists.
Use when the user asks to "track issues", "manage project issues", "resolve blockers", "create issue log", "remove impediments", or mentions issue tracking, issue resolution, blocker management, impediment removal, issue escalation. Triggers on: creates issue tracking workflow, assigns issue resolution owners, enforces resolution SLAs, captures root cause analysis, produces issue trend analysis.
Use when the user asks to "configure Jira", "set up Azure DevOps", "design PM tool workflows", "create board configuration", "map tool to methodology", or mentions Jira configuration, Azure DevOps setup, PM tool setup, workflow design, board configuration. Triggers on: designs PM tool project structure, creates workflow state machines, configures board columns and swimlanes, maps methodology ceremonies to tool features, produces tool user guides.
Use when the user asks to "assess Kanban maturity", "evaluate Kanban practices", "check flow efficiency", "measure WIP discipline", "diagnose Kanban health", or mentions Kanban assessment, Kanban maturity, flow metrics evaluation, WIP limit assessment, Kanban readiness. Triggers on: scores Kanban maturity against KMM levels, evaluates flow health metrics, assesses WIP limit enforcement, detects Kanban anti-patterns, produces evolutionary improvement roadmap.
Use when the user asks to "design a Kanban board", "set WIP limits", "improve flow", "measure lead time", "optimize throughput", or mentions Kanban, flow metrics, cumulative flow diagram, pull system, WIP limits, cycle time. Triggers on: designs Kanban board layout, calculates initial WIP limits, defines pull policies, establishes flow measurement framework, produces Kanban system design document.
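The relationship this skill uses to derive initial WIP limits from flow metrics is Little's Law: average lead time = average WIP / average throughput. A sketch with hypothetical board numbers:

```python
# Little's Law ties together the three core Kanban flow metrics.
# Board numbers are hypothetical, for illustration only.
avg_wip = 12.0     # work items in progress on the board
throughput = 3.0   # items finished per day
lead_time = avg_wip / throughput  # expected days from start to done

# Inverting the law: if the team wants a 2-day lead time at the same
# throughput, this is the WIP limit that target implies.
target_lead_time = 2.0
implied_wip_limit = target_lead_time * throughput

print(f"lead time = {lead_time:.1f} days; WIP limit for 2-day target: {implied_wip_limit:.0f}")
```

This is why lowering WIP limits, not adding people, is usually the first lever for reducing lead time.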
Use when the user asks to "prepare kickoff", "create kickoff deck", "plan team alignment", "design project launch meeting", "build team charter", or mentions kickoff package, project kickoff, team alignment meeting, ground rules, team charter. Triggers on: creates kickoff presentation deck, designs team alignment agenda, facilitates ground rules agreement, produces communication quick-reference, compiles team charter from kickoff outcomes.
Use when the user asks to "implement LeSS", "set up Nexus", "scale Scrum for multiple teams", "coordinate multi-team delivery", "unify product backlog across teams", or mentions LeSS, Nexus, multi-team Scrum, cross-team coordination, integrated increment. Triggers on: designs multi-team Scrum scaling, configures shared product backlog, establishes cross-team coordination events, creates integration strategy for 2-8 teams, produces scaling metrics dashboard.
Use when the user asks to "capture lessons learned", "document project lessons", "build lessons register", "create knowledge base from project experience", "extract reusable insights", or mentions lessons learned, knowledge capture, lessons register, project learning, organizational memory. Triggers on: captures lessons from retrospectives, categorizes lessons by domain, creates searchable lessons register, distributes knowledge to future projects, rates lesson impact.
Management and consulting discovery — PMO maturity assessment, methodology fitness evaluation, team capability analysis, governance model assessment, delivery performance baseline, Factor WOW assessment, and management transformation roadmap. Use when the user asks to "assess PMO maturity", "evaluate project management practices", "management discovery", "methodology assessment", "governance evaluation", "delivery performance analysis", "Factor WOW assessment", "management transformation", "agile maturity", "SAFe assessment", "PMO setup", or mentions "Disciplined Agile", "delivery excellence", "management consulting", "project governance", "ceremony health".
Use when the user asks to "assess PM maturity", "evaluate project management capability", "run OPM3 assessment", "check P3M3 level", "benchmark organizational PM capability", or mentions PM maturity, organizational PM maturity, OPM3, P3M3, project management maturity model, PM capability assessment. Triggers on: scores PM maturity against established frameworks, produces capability heat maps, identifies improvement priorities, creates strategic maturity roadmap, benchmarks against industry standards.
Mentoring and training discovery — capability assessment, learning path design, knowledge transfer planning, training delivery model, measurement framework, and training roadmap. Use when the user asks to "assess training needs", "design learning paths", "plan knowledge transfer", "evaluate mentoring program", "training gap analysis", "capability assessment", "upskilling plan", or mentions "training discovery", "mentoring readiness", "talent development", "MetodologIA University".
This skill should be used when the user asks to "create diagrams", "generate Mermaid", "visualize architecture", "diagram flows", "draw a sequence diagram", "create a C4 diagram", "add visual diagrams", or mentions diagramming, visualization, flowcharts, sequence diagrams, Mermaid syntax, architecture diagrams, or visual documentation. Use this skill to embed precise, syntactically valid Mermaid diagrams in any discovery deliverable.
Use when the user asks to "assess methodology fit", "select PM methodology", "evaluate agile vs waterfall", "determine best approach", "score methodology options", or mentions methodology selection, framework comparison, agile readiness, approach evaluation. Triggers on: evaluates project characteristics against methodology criteria, produces weighted scoring matrix, recommends best-fit methodology with confidence level, identifies organizational readiness gaps, generates tailoring guidance.
Use when the user asks to "create a methodology playbook", "define project ceremonies", "design cadences and rituals", "build a Definition of Done", "operationalize methodology", or mentions methodology playbook, ceremony design, cadence definition, methodology selection, DoD, project rituals. Triggers on: codifies methodology into actionable playbook, designs ceremonies with agendas and durations, creates Definition of Done per deliverable type, maps roles to ceremonies, produces ceremony calendar.
Detailed migration execution guide — strangler fig, parallel run, big bang, rollback procedures, data migration. Use when the user asks to "plan migration", "design cutover", "build migration playbook", "define rollback strategy", or mentions strangler fig, parallel run, data migration, legacy modernization.
Mini Apps and Low-Code discovery — citizen developer readiness, platform assessment (Power Platform, OutSystems, Mendix, Retool), use case identification and prioritization, governance model, integration architecture, and low-code adoption roadmap. Use when the user asks to "evaluate low-code platforms", "assess citizen developer readiness", "mini apps strategy", "Power Platform assessment", "low-code governance", "no-code evaluation", "automation apps discovery", or mentions "citizen development", "low-code adoption", "mini apps".
Mobile app architecture — native vs cross-platform, offline-first, state management, release management. Use when the user asks to "design mobile architecture", "choose between native and cross-platform", "implement offline-first", "plan mobile CI/CD", "optimize app performance", or mentions Flutter, React Native, KMP, MVVM, SwiftUI, Jetpack Compose, or app store deployment.
AS-IS assessment for mobile apps — performance, compliance, dependency health, UX metrics. Use when the user asks to "assess the mobile app", "audit app health", "review app dependencies", "check app store compliance", "measure app performance", or mentions crash rate, ANR, app size, cold start time, or mobile tech debt.
Unified mobile platform assessment — merges former mobile-architecture and mobile-assessment into one skill. Covers cross-platform vs native strategy, store compliance, app vitals, architecture patterns, offline-first design, performance optimization, dependency health, and remediation roadmaps. Use when the user asks to "assess mobile architecture", "evaluate mobile platform", "audit app health", "choose between native and cross-platform", "check store compliance", "optimize mobile performance", "review app dependencies", or mentions Flutter, React Native, KMP, MVVM, crash rate, ANR, app size, cold start time, or mobile tech debt.
Use when the user asks to "run Monte Carlo", "simulate schedule risk", "probabilistic cost analysis", "confidence intervals", "forecast completion probability", or mentions Monte Carlo simulation, probabilistic analysis, schedule confidence, cost confidence. Triggers on: executes probabilistic schedule simulation, generates cost confidence curves, calculates contingency reserves from P-values, identifies sensitivity drivers via tornado diagram, produces S-curves with confidence levels.
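The simulation core is straightforward: sample each task duration from a distribution, sum, repeat, and read confidence levels off the sorted results. A minimal sketch assuming sequential tasks with triangular (low, mode, high) estimates, all values hypothetical:

```python
import random

# Monte Carlo schedule simulation sketch. Tasks run sequentially here for
# simplicity; the (low, mode, high) day estimates are hypothetical.
tasks = [(3, 5, 10), (8, 12, 20), (2, 4, 9)]
random.seed(42)  # reproducible run

totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(10_000)
)

def percentile(sorted_vals, p):
    """Read the p-th percentile off an already-sorted list of outcomes."""
    return sorted_vals[int(p / 100 * (len(sorted_vals) - 1))]

p50, p80, p95 = (percentile(totals, p) for p in (50, 80, 95))
print(f"P50={p50:.1f}d  P80={p80:.1f}d  P95={p95:.1f}d")
```

The gap between P50 and P95 is the contingency reserve; a real implementation would add task correlation and parallelism via a schedule network rather than a simple sum.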
Deep feasibility validation across 7 dimensions by a Council of Seven Sages (Think Tank). Postdoctoral-level research rigor applied to scenario validation. Validates technical claims, quantitative assumptions, systemic risks, technology maturity, infrastructure limits, integration feasibility, and economic viability. Use when validating approved scenarios before roadmap commitment, when stakeholders need confidence in technical achievability, or when "Phase 3b" / "feasibility" / "think tank" / "7 sabios" is mentioned.
Observability architecture — logging, tracing, metrics, alerting, SLO/SLI, incident response. Use when the user asks to "design observability", "set up monitoring", "implement tracing", "configure alerting", "define SLOs", "design incident response", or mentions OpenTelemetry, Prometheus, Grafana, ELK, correlation IDs, burn rate, runbooks.
Use when the user asks to "create onboarding plan", "plan knowledge transfer", "design team onboarding", "reduce ramp-up time", "capture institutional knowledge", or mentions onboarding playbook, knowledge transfer, new team member ramp-up, team integration. Triggers on: creates role-specific onboarding paths, designs knowledge transfer sessions, establishes buddy system, defines ramp-up milestones, captures institutional knowledge for preservation.
Use when the user asks to "manage positive risks", "exploit opportunities", "enhance project benefits", "capture upside potential", "optimize project outcomes", or mentions positive risk, opportunity exploitation, opportunity enhancement, upside risk management. Triggers on: identifies upside potential in project execution, applies exploit/share/enhance/accept strategies, quantifies opportunity value, integrates opportunity actions into project plan, tracks opportunity realization.
Use when the user asks to "plan change management", "implement ADKAR", "manage organizational change", "plan adoption", "design resistance management", or mentions OCM, organizational change management, ADKAR, change readiness, adoption planning. Triggers on: designs ADKAR-based change interventions, assesses change readiness, creates communication campaigns, builds training plans, manages resistance through structured interventions, measures adoption KPIs.
Use when the user asks to "produce deliverables", "convert formats", "generate multi-format output", "apply naming conventions", "manage deliverable pipeline", or mentions output engineering, format conversion, deliverable production, markdown to HTML, multi-format generation. Triggers on: converts markdown to branded HTML, applies evidence tagging to deliverables, enforces naming conventions, manages version tagging (WIP/Aprobado), produces deliverables in multiple formats simultaneously.
Performance assessment — load testing, capacity planning, bottleneck analysis, caching, CDN, SLAs. Use when the user asks to "analyze performance", "design load tests", "plan capacity", "optimize caching", "configure CDN", "define SLAs", "find bottlenecks", or mentions latency, throughput, p95, saturation, cache hit ratio, edge compute.
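Percentile metrics like p95 are computed from raw latency samples, not averages, which is why one slow outlier dominates the tail. A sketch using the nearest-rank method on synthetic sample data:

```python
# p95 latency from raw load-test samples (nearest-rank method).
# Sample values are synthetic, for illustration only.
samples_ms = sorted([120, 95, 110, 300, 130, 105, 98, 450, 115, 125,
                     102, 140, 108, 99, 112, 135, 122, 117, 104, 128])

def p95(sorted_samples):
    # Nearest-rank: smallest value that covers 95% of observations.
    rank = max(1, round(0.95 * len(sorted_samples)))
    return sorted_samples[rank - 1]

print(f"p95 = {p95(samples_ms)} ms")
```

Here the mean is around 135 ms but p95 is 300 ms, which is the figure an SLA should be written against.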
Discovery pipeline governance — phase gate management, resource orchestration, dependency control, and proposal QA validation across the entire discovery pipeline. Replaces former project-program-management (not generic PM, specific to discovery pipeline). Use when the user asks to "track the discovery", "govern the pipeline", "validate the proposal", "run governance check", "check phase dependencies", "coordinate resources", or mentions pipeline governance, phase gates, proposal readiness, milestone tracking, or cross-phase dependency management. Works as the structural glue that holds the entire discovery pipeline together — from Phase 0 through Handover.
Use when the user asks to "assess PMO effectiveness", "evaluate PMO value", "review PMO structure", "measure PMO impact", "audit PMO services", or mentions PMO assessment, PMO evaluation, PMO capability review, PMO performance assessment, PMO value assessment. Triggers on: evaluates PMO operating model effectiveness, measures stakeholder value perception, assesses PMO service catalog maturity, quantifies PMO impact on project success rates, produces PMO transformation roadmap.
Use when the user asks to "check PMO health", "run PMO health check", "diagnose PMO performance", "audit PMO operations", "measure PMO KPIs", or mentions PMO health, PMO diagnostics, PMO pulse check, PMO operational review, PMO internal audit. Triggers on: conducts 8-dimension PMO health check, compiles RAG health dashboard, identifies corrective actions for underperforming dimensions, tracks quarter-over-quarter trends, produces PMO operational improvement plan.
Use when the user asks to "assess PMO maturity", "evaluate PM maturity", "run OPM3 assessment", "P3M3 assessment", "benchmark PMO capability", or mentions PMO maturity model, organizational PM maturity, capability maturity, OPM3, P3M3. Triggers on: scores PMO maturity against OPM3 or P3M3 frameworks, produces maturity radar charts, identifies improvement priorities by strategic impact, designs multi-year maturity roadmap, estimates improvement investment in FTE-months.
Use when the user asks to "set up a PMO", "design PMO", "create PMO charter", "implement project management office", "define PMO operating model", or mentions PMO design, project management office, PMO operating model, PMO governance, PMO implementation. Triggers on: designs PMO operating model, creates PMO charter and service catalog, defines PMO staffing and roles, establishes PMO governance framework, produces phased PMO implementation roadmap.
Use when the user asks to "run a PoC", "prototype a solution", "test a tool", "evaluate methodology feasibility", "compare vendor options", or mentions proof of concept, PoC, prototype, tool evaluation, methodology pilot, controlled experiment. Triggers on: designs controlled PoC experiments, defines measurable success criteria, creates evaluation frameworks for tool comparison, facilitates evidence-based go/no-go decisions, documents scale-up risks.
Use when the user asks to "assess portfolio management maturity", "evaluate portfolio governance", "review portfolio practices", "benchmark portfolio capability", "score portfolio management", or mentions portfolio assessment, portfolio maturity, portfolio management capability, portfolio governance evaluation. Triggers on: assesses portfolio management maturity across 6 dimensions, evaluates strategic alignment effectiveness, reviews prioritization model quality, quantifies portfolio governance gaps, produces portfolio improvement roadmap.
Use when the user asks to "create portfolio dashboard", "report portfolio status", "generate portfolio heatmap", "build executive portfolio view", "aggregate project metrics", or mentions portfolio reporting, portfolio view, portfolio metrics, multi-project dashboard. Triggers on: aggregates project health into portfolio heatmap, produces resource utilization views, creates budget rollup summaries, visualizes risk concentration across portfolio, generates governance action items for steering committee.
Use when the user asks to "prioritize projects", "score portfolio", "rank investments", "build scoring model", "optimize portfolio mix", or mentions portfolio prioritization, scoring models, strategic alignment scoring, portfolio ranking, investment prioritization. Triggers on: builds weighted scoring models for project prioritization, calculates efficient frontier for portfolio optimization, runs sensitivity analysis on rankings, facilitates data-driven investment decisions, produces ranked portfolio with transparent scoring.
Use when the user asks to "assess portfolio risk", "aggregate project risks", "analyze portfolio risk exposure", "detect risk concentration", "model systemic risk", or mentions portfolio risk, aggregated risk, risk concentration, systemic risk, portfolio risk management. Triggers on: aggregates risk exposure across project portfolio, identifies correlated risks across projects, detects vendor/technology/resource concentration, models portfolio-level risk scenarios, produces portfolio risk heatmap for governance.
Use when the user asks to "forecast project completion", "predict cost overrun", "model risk probability", "run Monte Carlo on schedule", "generate confidence intervals", or mentions predictive analytics, ML forecasting, schedule prediction, cost forecasting, risk materialization prediction. Triggers on: produces probabilistic schedule forecasts, calculates cost-at-completion with confidence ranges, models risk materialization probability, identifies early warning indicators, generates P50/P80/P95 confidence intervals.
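The P50/P80/P95 confidence intervals mentioned above come from Monte Carlo simulation; a minimal sketch, assuming hypothetical task durations and triangular distributions:

```python
import random

def simulate_schedule(tasks, runs=10_000, seed=42):
    """Monte Carlo schedule forecast over independent tasks.

    Each task is (optimistic, most_likely, pessimistic) in days,
    sampled from a triangular distribution.
    """
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in tasks)
        for _ in range(runs)
    )
    def pct(p):
        return totals[int(p / 100 * runs)]
    return {"P50": pct(50), "P80": pct(80), "P95": pct(95)}

# Hypothetical three-task workstream
forecast = simulate_schedule([(10, 12, 20), (5, 8, 15), (8, 10, 14)])
```

P50 is the median outcome; committing at P80 or P95 trades schedule buffer for confidence.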
Use when the user asks to "plan procurement", "make-or-buy analysis", "generate RFP", "evaluate vendors", "define vendor criteria", "select contract type", or mentions procurement, sourcing, contract types, vendor selection, outsourcing decisions. Triggers on: produces make-or-buy decision matrices, drafts RFP templates with evaluation scorecards, recommends contract types per procurement item, creates procurement timelines, designs vendor evaluation criteria.
Product roadmap prioritization, backlog strategy, value stream mapping, product-market fit validation. Use when the user asks to "define product roadmap", "prioritize backlog", "map value streams", "validate product-market fit", or mentions product vision, RICE scoring, opportunity trees, dual-track agile.
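The RICE scoring named above follows the standard (Reach × Impact × Confidence) / Effort formula; the backlog items, reach figures, and effort estimates below are hypothetical:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    Reach: users per quarter; Impact: 0.25-3 scale;
    Confidence: 0-1; Effort: person-months.
    """
    return reach * impact * confidence / effort

# Hypothetical backlog items
backlog = {
    "SSO login":       rice_score(4000, 2.0, 0.8, 4),
    "Dark mode":       rice_score(9000, 0.5, 1.0, 2),
    "Export to Excel": rice_score(1500, 1.0, 0.5, 1),
}
ranked = sorted(backlog, key=backlog.get, reverse=True)
```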
Use when the user asks to "manage a program", "coordinate multiple projects", "track program benefits", "align program governance", "consolidate program risks", or mentions program management, multi-project coordination, program benefits, program governance, cross-project dependencies, benefits realization tracking.
Use when the user asks to "create a project charter", "define project objectives", "build a business case", "document success criteria", "formalize project authorization", or mentions charter, project initiation, sponsor approval, project justification, SMART objectives, project kickoff document.
Use when the user asks to "check project health", "run health assessment", "evaluate project status", "generate RAG scorecard", "diagnose project problems", or mentions health check, project diagnostics, RAG status, project vital signs, project wellness, leading indicator assessment.
Use when the user asks to "run a project pipeline", "orchestrate PM workflow", "start a project engagement", "coordinate the PM team", "plan a project lifecycle", "manage project inputs", "sequence project phases", or mentions project orchestration, phase sequencing, quality gates, data contracts, expert committee, PMO pipeline, consulting engagement. Always use this skill as the entry point for any PMO-APEX engagement.
PMO governance backbone — portfolio tracking, phase gate management, resource orchestration, dependency control, and proposal QA validation across the entire discovery pipeline. Use when the user asks to "track the discovery", "manage the portfolio", "validate the proposal", "run governance check", "check phase dependencies", "coordinate resources", or mentions PMO, program management, portfolio governance, phase gates, proposal readiness, milestone tracking, or cross-phase dependency management. Works as the structural glue that holds the entire discovery pipeline together — from Phase 0 through Handover.
Use when the user asks to "evaluate project feasibility", "decide go/no-go", "assess project viability", "screen project proposals", "prioritize project investments", or mentions project selection, feasibility gate, go/no-go decision, project screening, investment gate, weighted scoring model.
QA-as-a-Service discovery — quality maturity assessment (TMMi), test coverage analysis, tool landscape evaluation, PITT methodology alignment, team composition modeling, test factory design, and QA transformation roadmap. Use when the user asks to "assess QA maturity", "evaluate testing practices", "QA service discovery", "test factory design", "TMMi assessment", "QA transformation", "testing maturity evaluation", "PITT methodology", "QA team composition", "test automation assessment", "quality engineering assessment", or mentions "independent testing", "QA-as-a-Service", "test industrialization", "ISTQB".
Use when the user asks to "audit quality", "verify compliance", "review quality processes", "inspect deliverable conformance", "check regulatory adherence", or mentions quality audit, compliance verification, process audit, quality review, non-conformance assessment, corrective action planning.
Strategic quality engineering framework covering test strategy, automation architecture, quality gates, metrics, and shift-left practices. Use when the user asks to "design test strategy", "plan quality gates", "set up test automation", "assess quality maturity", "define quality metrics", or mentions "test pyramid", "shift-left", "CI/CD quality", "automation architecture", "quality engineering".
Use when the user asks to "create a quality plan", "define QA processes", "establish quality metrics", "design quality control activities", "set acceptance criteria", or mentions quality management, QA plan, quality assurance, quality control, quality standards, continuous quality improvement.
Use when the user asks to "create a RACI matrix", "define responsibilities", "assign decision rights", "clarify roles", "map accountability", or mentions RACI, responsibility assignment, accountability matrix, decision rights, RASCI, role ambiguity resolution, authority mapping.
Release management approach design, deployment pattern selection (blue-green, canary, rolling), and rollback procedure definition. Use when the user asks to "design release strategy", "define deployment patterns", "plan rollback procedures", or mentions trunk-based development, GitFlow, feature flags, or CI/CD pipeline strategy.
Use when the user asks to "render to PNG", "convert to PDF", "export Mermaid diagrams", "generate printable deliverables", "create branded exports", or mentions rendering engine, Mermaid-to-PNG, markdown-to-PDF, format rendering, export engine, visual format conversion.
Use when the user asks to "optimize resources", "level resources", "run what-if scenarios", "resolve over-allocations", "balance resource demand", or mentions resource leveling, resource smoothing, optimization, what-if analysis, resource allocation optimization, capacity balancing.
Use when the user asks to "plan resources", "allocate team", "create RACI", "define team structure", "capacity planning", "staff the project", or mentions resource allocation, team roles, staffing, organizational chart, responsibility matrix, resource histogram, capacity management.
Use when the user asks to "run a retrospective", "facilitate a retro", "conduct Start-Stop-Continue", "run a 4Ls retro", "facilitate a Sailboat retro", "analyze sprint improvement data", or mentions retrospective engine, structured retrospective, sprint retro, team reflection, improvement commitment tracking.
Use when the user asks to "define risk appetite", "set risk tolerance", "establish risk thresholds", "calibrate organizational risk levels", "create risk acceptance criteria", or mentions organizational risk tolerance, risk appetite statement, risk capacity, risk attitude, risk threshold matrix.
Proactive risk controller and financial vigilance — operates as an anxious CPA/PM hybrid that anticipates worst-case scenarios at every discovery step, stress-tests assumptions, tracks risk exposure, and feeds better insights back into each phase. Use when the user asks to "assess risks", "stress-test the plan", "validate assumptions", "run worst-case analysis", "check what could go wrong", "audit the discovery", or mentions risk register, contingency planning, assumption validation, exposure analysis, risk appetite, worst-case scenarios, financial controls, or "what keeps you up at night". The paranoid voice that makes the discovery reliable and the proposal trustworthy.
Use when the user asks to "monitor risks", "track risk triggers", "update risk dashboard", "review risk status", "assess risk response effectiveness", or mentions risk monitoring, risk tracking, trigger tracking, risk dashboard, risk escalation, emerging risk detection.
Use when the user asks to "quantify risks", "run Monte Carlo", "calculate EMV", "perform sensitivity analysis", "estimate contingency reserves", or mentions risk quantification, expected monetary value, decision tree, tornado diagram, probabilistic analysis, confidence intervals.
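The EMV calculation referenced above is a probability-weighted sum over outcomes; a minimal sketch with hypothetical figures:

```python
def emv(branches):
    """Expected monetary value: sum of probability * outcome over branches."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * outcome for p, outcome in branches)

# Hypothetical decision: mitigate a risk vs. accept it
accept_risk = emv([(0.3, -200_000), (0.7, 0)])  # 30% chance of a 200k loss
mitigate    = emv([(1.0, -45_000)])             # flat mitigation cost
best = "mitigate" if mitigate > accept_risk else "accept"
```

The same expected-value logic extends to decision trees (EMV at each chance node) and to sizing contingency reserves from the aggregate exposure.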
Risk register creation and identification — probability/impact assessment, RBS categorization, risk ownership. Use when the user asks to "create a risk register", "identify risks", "categorize risks", "build risk list", "assess project risks", or mentions risk identification, risk categorization, RBS, risk breakdown structure, risk inventory, probability-impact matrix, risk scoring.
Use when the user asks to "plan risk responses", "create mitigation strategies", "define risk treatments", "design contingency plans", "assign risk owners", or mentions risk mitigation, risk transfer, risk avoidance, risk acceptance, response strategies, trigger-response mapping.
Execution roadmap generator with sprint breakdown, prerequisites, gates, team/budget, and risk register. Use when the user asks to "create a roadmap", "plan a PoC", "build sprint plan", "execution timeline", or mentions "proof of concept", "MVP plan", "milestones", "sprint breakdown", "iteracion", "go/no-go".
RPA and process automation discovery — process landscape assessment, automation opportunity scoring, bot design architecture, platform evaluation, process mining, ROI projection, and automation roadmap. Use when the user asks to "evaluate RPA readiness", "assess automation opportunities", "process automation discovery", "bot architecture design", "RPA platform comparison", "automation roadmap", "process mining analysis", "identify automation candidates", "RPA ROI analysis", or mentions "robotic process automation", "attended/unattended bots", "automation CoE", "process digitization".
Use when the user asks to "assess SAFe maturity", "evaluate SAFe implementation", "check SAFe readiness", "audit ART health", "measure business agility", or mentions SAFe assessment, SAFe maturity, SAFe adoption evaluation, ART readiness, SAFe implementation review, SAFe competency radar.
Use when the user asks to "implement SAFe", "plan a PI", "set up an ART", "design value streams", "configure portfolio Kanban", or mentions SAFe, PI Planning, Agile Release Train, portfolio Kanban, value stream mapping, program increment, scaled agile implementation.
Evaluates 3+ modernization scenarios using Tree of Thought with 6-dimension weighted scoring. Use when the user asks to "compare scenarios", "evaluate options", "run scenario analysis", "Tree of Thought", "which approach should we take", "compare architectures", or mentions "Phase 3", "strategic analysis", "trade-off analysis", "SWOT comparison".
Use when the user asks to "create a schedule", "build a Gantt chart", "define critical path", "plan milestones", "establish timeline", "estimate durations with PERT", or mentions scheduling, dependencies, float, lead/lag, fast-tracking, crashing, schedule baseline, 3-point estimation.
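The PERT three-point estimation mentioned above uses the standard beta-distribution approximation; the task durations below are illustrative:

```python
def pert(optimistic: float, most_likely: float, pessimistic: float):
    """PERT three-point estimate: expected duration and standard deviation.

    Mean = (O + 4M + P) / 6, sigma = (P - O) / 6.
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Hypothetical task: 6 days optimistic, 10 likely, 20 pessimistic
mean, sd = pert(6, 10, 20)
# Rough ~95% upper bound under the usual normal approximation
upper_95 = mean + 2 * sd
```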
Use when the user asks to "create a WBS", "decompose scope", "define work breakdown structure", "document scope statement", "set project boundaries", "identify deliverables", or mentions scope definition, deliverable decomposition, work packages, scope baseline, exclusions, 100% rule.
Use when the user asks to "implement Scrum", "plan sprints", "define ceremonies", "set up Scrum artifacts", "design sprint cadence", or mentions Scrum, sprint planning, daily standup, sprint review, retrospective, product backlog, sprint backlog, Definition of Done, velocity tracking.
Use when the user asks to "scan for secrets", "detect credentials", "sanitize sensitive data", "check for exposed passwords", "run security gate G0", or mentions secret detection, credential scanning, security gate G0, sensitive data masking, API key exposure, token detection.
Industry/sector intelligence analysis — context-adaptive expert that provides sector-specific insights, regulatory context, benchmarks, and risk overlays. Replaces former dynamic-sme. Use when the user asks to "add industry context", "analyze sector", "give me the banking/retail/health perspective", or mentions "sector intelligence", "industry analysis", "industry lens", "sector analysis", "regulatory context".
Security architecture design — threat modeling, zero trust, identity, encryption, compliance. Use when the user asks to "design security architecture", "model threats", "implement zero trust", "design IAM", "plan encryption strategy", "map compliance requirements", or mentions STRIDE, OWASP, OAuth, RBAC, SOC2, ISO27001, PCI-DSS.
Use when the user asks to "analyze skills gaps", "assess team capabilities", "plan training", "evaluate competency readiness", "identify capability shortfalls", or mentions skills inventory, capability assessment, competency gap, training needs analysis, skill proficiency mapping.
SLO/SLA/SLI definition — error budget policies, reliability targets, customer-facing commitments. Use when the user asks to "define SLAs", "design SLOs", "set reliability targets", "create error budget policy", or mentions SLI, service level, uptime, nines, error budgets.
BPMN 2.0 process modeling and analysis skill for AS-IS/TO-BE business process documentation, bottleneck identification, automation opportunity assessment, process maturity scoring, and process improvement design. Use whenever the user mentions process mapping, BPMN, business process, process flow, swimlane, AS-IS process, TO-BE process, process improvement, operational workflow, delivery monitoring, process maturity, or needs to model how work flows through an organization. Especially relevant for SAP fit-to-standard workshops, IT services company operations, and service variant analysis. Also trigger for RACI assignment, automation ROI, or compliance audit trail. Trigger: BPMN, process mapping, AS-IS TO-BE, process flow, swimlane, process maturity, automation ROI, RACI, fit-to-standard, process improvement, operational workflow.
Regional finance and accounting standards skill covering Colombia (NIIF/DIAN/CTC), Ecuador (SRI/USD dollarization), Mexico (SAT/CFDI), United States (GAAP/ASC 606), Spain (AEAT/SII), and pan-Americas considerations. Use whenever the user mentions financial regulations, tax compliance, electronic invoicing, transfer pricing, CTC calculation, intercompany billing, multi-currency management, localization requirements, withholding taxes, or labor cost structures for IT services companies operating across these regions. Essential for SAP localization configuration and fit-to-standard financial workshops. Also trigger when discussing cost-vs-sale segregation, Activity Type cost rates, margin visibility, arm's length pricing, or any cross-border billing. Trigger: CTC calculation, transfer pricing, intercompany billing, tax compliance, e-invoicing, SAP localization, withholding taxes, Activity Type rates, margin visibility.
SAP S/4HANA implementation skill covering module selection (CO, SD, PS, FI, HCM), SAP Activate methodology, fit-to-standard workshops, multi-country localization, intercompany configuration, and professional services industry patterns. Use whenever the user mentions SAP, S/4HANA, SAP implementation, fit-to-standard, SAP modules, SAP localization, SAP migration, ERP implementation, or needs guidance on SAP configuration for IT services companies. Also trigger for SAP-specific gap analysis, SAP scope definition, SAP best practices, CATS integration, Strangler Fig migration, Activity Type configuration, or revenue recognition patterns for T&M, fixed price, retainer, or managed services contracts. Trigger: SAP implementation, S/4HANA configuration, fit-to-standard, SAP modules, SAP localization, CATS integration, Strangler Fig, Activity Types, revenue recognition.
Software architecture design — modules, layers, boundaries, design patterns, ADRs, quality attributes, and technical debt strategy. Use when the user asks to "design the internal structure", "define module boundaries", "select architecture patterns", "document architecture decisions", "evaluate code architecture", or mentions CQRS, Hexagonal, Event Sourcing, Clean Architecture, ADRs, or technical debt.
Software and technology viability validator — deep forensic analysis of whether proposed software solutions, AI/ML components, and technology choices are viable substance or speculative smoke. Covers service viability, platform viability, methodology viability, tool viability, and vendor assessment for any service type. Use when the user asks to "validate technology viability", "detect vaporware", "verify AI claims", "assess software maturity", "check if this tech actually works", or mentions technology due diligence, software validation, AI feasibility, vendor evaluation, or tech-stack viability. This is the devoted software-specific validator — separate and more critical than the multidimensional feasibility analysis.
Complete transformation roadmap with phased execution, investment horizon, team ramp-up, risk-adjusted timeline, and estimation pivot points. Use when the user asks to "create a roadmap", "plan the transformation", "build an investment case", "team sizing", "risk-adjusted timeline", or mentions "Phase 4", "solution roadmap", "transformation plan", "phased execution", "PoC validation criteria", "kill criteria", "go/no-go gates".
End-to-end solution design — system integration, channel orchestration, identity management, observability, and cross-cutting concerns. Use when the user asks to "design the full solution", "integrate multiple systems", "plan API gateway strategy", "define identity and security architecture", "set up observability", or mentions C4 containers, BFF, Zero Trust, SLI/SLO, circuit breaker, or migration planning.
Use when the user asks to "implement Spotify model", "design squads and tribes", "organize chapters and guilds", "create autonomous team structure", "apply Spotify engineering culture", or mentions Spotify, squads, tribes, chapters, guilds, autonomous teams, matrix organization, squad health check model.
Staff augmentation discovery — talent gap analysis, skills matrix profiling, team composition modeling, onboarding and ramp-up design, retention framework, and staffing roadmap. Use when the user asks to "assess staffing needs", "analyze talent gaps", "design team composition", "plan staff augmentation", "evaluate team skills", "create staffing roadmap", "onboarding plan", "ramp-up strategy", "retention framework", or mentions talent gap, skills matrix, team topology, augmentation, nearshore, offshore, or staffing plan.
Use when the user asks to "plan staff augmentation", "source external resources", "plan contractor onboarding", "design nearshore team integration", "manage vendor staffing", or mentions staff augmentation, contractor sourcing, augmentation needs, external staffing, nearshore/offshore, resource augmentation strategy.
Stakeholder analysis — influence/interest matrix, communication plan, RACI, change readiness. Use when the user asks to "map stakeholders", "build influence matrix", "create communication plan", "assign RACI", "assess change readiness", "identify champions", or mentions stakeholder analysis, power/interest grid, engagement strategy, or adoption curve.
Use when the user asks to "identify stakeholders", "create stakeholder register", "map stakeholder power/interest", "analyze stakeholders", "design engagement strategies", or mentions stakeholder identification, power-interest matrix, influence mapping, stakeholder analysis, engagement level assessment.
Use when the user asks to "generate status report", "write weekly update", "create sprint report", "produce executive summary", "compile progress report", or mentions status report, weekly report, sprint summary, project update, progress report, RAG status update.
Use when the user asks to "run a steering committee", "prepare steering review", "conduct Go/No-Go gate", "orchestrate advisory vote", "prepare gate review package", or mentions steering committee, steering review, Go/No-Go decision, advisory vote, project gate review, steering minutes, 7-advisor evaluation.
Narrative arc design and transformation storytelling (metodologia-storytelling) for discovery deliverables. Use when structuring the overall narrative across deliverables, building scenario narratives, crafting transformation stories (current pain → decision → future state), or designing risk narratives and success reference stories.
Use when the user asks to "check strategic alignment", "map projects to strategy", "track OKRs", "identify strategic orphans", "verify portfolio-strategy fit", or mentions strategic alignment, strategy-to-project traceability, OKR tracking, balanced scorecard alignment, portfolio investment alignment.
Green IT evaluation, carbon footprint estimation, energy efficiency analysis, and sustainable architecture pattern recommendations. Use when the user asks to "assess sustainability", "estimate carbon footprint", "evaluate green IT", or mentions energy efficiency, sustainable architecture, or environmental impact of technology.
Use when the user asks to "track team performance", "measure velocity", "assess team health", "monitor team morale", "analyze productivity trends", or mentions team performance, velocity tracking, team health, morale, burndown, team metrics, sprint predictability.
Use when the user asks to "design team structure", "apply Team Topologies", "optimize team boundaries", "reduce cognitive load", "map team interaction modes", or mentions Team Topologies, Conway's Law, stream-aligned teams, platform teams, enabling teams, cognitive load, team interaction patterns.
Conway's Law analysis, team interaction modes, cognitive load assessment, organizational design. Use when the user asks to "design team structure", "assess cognitive load", "map team interactions", "apply Conway's Law", or mentions stream-aligned teams, platform teams, enabling teams, team-first thinking.
Technical debt quantification, debt quadrant classification (reckless/prudent × deliberate/inadvertent), remediation prioritization, and paydown roadmap generation. Use when the user asks to "assess technical debt", "quantify debt", "classify tech debt", "prioritize remediation", or mentions debt inventory, impact scoring, or paydown planning.
Technical fact-checking and multidimensional feasibility analysis — validates claims, assumptions, and technical decisions from scenario analysis against evidence. Use when the user asks to "validate feasibility", "fact-check the scenario", "verify technical claims", "run feasibility analysis", "stress-test the approach", or mentions technical due diligence, feasibility study, risk validation, or "Phase 3b" verification work.
Technical documentation precision — progressive disclosure, terminology consistency, evidence attribution, and reproducible analysis. Use when writing AS-IS analyses, functional specs, architecture documents, handover guides, or any deliverable requiring technical rigor and documentation standards.
Structured technology monitoring across analyst firms (Gartner, Forrester, IDC), academic sources (Stanford HAI, IEEE, ACM), editorial platforms (O'Reilly Radar, ThoughtWorks Tech Radar), and individual thought leaders (Martin Fowler, Paulo Caroli, Gregor Hohpe, Jez Humble). Produces vigilance reports with signals classified by urgency and impact. Use when evaluating technology trends, preparing sector-specific tech intelligence, validating technology choices against current landscape, or when "vigilancia tecnológica", "tech watch", "Gartner", "Forrester", "tech radar", "Stanford HAI", "IEEE", or "tendencias tecnológicas" is mentioned.
Test strategy design — pyramid, automation, E2E, contract testing, shift-left, test data management, QA-as-a-service strategy, test factory design, PITT methodology, QA CoE design. Use when the user asks to "design test strategy", "build test automation", "implement contract testing", "manage test data", "define quality gates", or mentions test pyramid, Pact, Playwright, Cypress, coverage targets, flaky tests, chaos engineering.
End-user advocate that evaluates deliverable clarity, cognitive load, accessibility, adoption risks, and biases. Use when the user asks to "review for clarity", "check readability", "evaluate from user perspective", "assess adoption risk", or mentions "user representative", "voice of the user", "representante del usuario", "clarity review", "cognitive load check".
UX/UI design discovery — design maturity assessment, design system inventory, user research capability evaluation, usability baseline, information architecture assessment, design process governance, and design transformation roadmap. Use when the user asks to "evaluate design maturity", "assess UX capability", "audit design system", "usability assessment", "information architecture review", "design ops evaluation", "UX transformation plan", or mentions "design discovery", "UX readiness", "design governance".
UX writing and document accessibility standards for technical deliverables. Use when the user asks to "improve readability", "fix information hierarchy", "reduce cognitive load", "write microcopy", "check readability score", or mentions "UX writing", "scanability", "Flesch-Kincaid", "escritura UX", "legibilidad", "cognitive load".
Vendor evaluation and selection framework — RFP/RFI design, scoring matrices, TCO analysis, contract risk. Use when the user asks to "evaluate vendors", "design RFP", "compare platforms", "assess TCO", or mentions vendor selection, build-vs-buy, technology evaluation, procurement strategy.
Use when the user asks to "compare vendor costs", "analyze TCO", "evaluate vendor proposals", "calculate total cost of ownership", "normalize vendor pricing", or mentions vendor comparison, total cost of ownership, vendor TCO, proposal evaluation, vendor scoring matrix, hidden cost analysis.
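Normalizing vendor proposals into a comparable TCO typically means discounting run costs to present value; a minimal sketch, with hypothetical vendors, prices, and discount rate:

```python
def tco(upfront: float, annual_costs: list, discount_rate: float = 0.08) -> float:
    """Net-present-value TCO: upfront cost plus discounted annual run costs."""
    return upfront + sum(
        cost / (1 + discount_rate) ** year
        for year, cost in enumerate(annual_costs, start=1)
    )

# Hypothetical 3-year comparison, hidden costs folded into annual figures
vendor_a = tco(120_000, [30_000, 30_000, 30_000])  # perpetual license + support
vendor_b = tco(10_000, [65_000, 65_000, 65_000])   # SaaS subscription
cheaper = "A" if vendor_a < vendor_b else "B"
```

List price alone would favor vendor A's lower run rate; discounted TCO over the actual horizon is what the scoring matrix should compare.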
Use when the user asks to "manage vendors", "track vendor performance", "monitor SLAs", "evaluate supplier compliance", "create vendor scorecards", or mentions vendor management, supplier performance, SLA monitoring, contract compliance, vendor governance, vendor scorecard.
Use when the user asks to "assess waterfall maturity", "evaluate traditional PM practices", "check PMBOK adherence", "review predictive methodology readiness", "audit phase-gate compliance", or mentions waterfall assessment, traditional PM maturity, PMBOK compliance, PRINCE2 maturity, predictive PM evaluation, earned value adoption.
Use when the user asks to "implement waterfall", "plan PMBOK phases", "set up PRINCE2", "define stage gates", "design predictive lifecycle", "configure change control", or mentions waterfall, traditional PM, predictive lifecycle, stage-gate, PMBOK, PRINCE2, earned value management.
Workshop design methodology — event storming, impact mapping, user story mapping, design sprints. Replaces former workshop-facilitator (facilitation is the agent's job, design is the skill). Use when the user asks to "design a workshop", "plan event storming", "design impact mapping session", "design a sprint", "create user story map", "design discovery session", or mentions workshop design, design sprint, event storming, story mapping, or collaborative design.
Workshop facilitation — event storming, impact mapping, user story mapping, design sprints. Use when the user asks to "plan a workshop", "run event storming", "facilitate impact mapping", "design a sprint", "create user story map", "facilitate discovery session", or mentions workshop facilitation, design sprint, event storming, story mapping, or collaborative design.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
The most comprehensive Claude Code plugin — 36 agents, 142 skills, 68 legacy command shims, and production-ready hooks for TDD, security scanning, code review, and continuous learning
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research