Use when the user asks to "use AI for project management", "augment PM with AI", "implement predictive scheduling", "parse status with NLP", or "design ML risk models". Activates when a stakeholder needs to identify AI augmentation opportunities for PM, build predictive scheduling models, automate status report parsing with NLP, design intelligent resource allocation, or create a human-AI collaboration model for project governance.
TL;DR: Identifies and designs AI augmentation opportunities across PM practices: predictive scheduling using historical velocity/EVM data, risk materialization prediction via ML pattern matching, NLP-based status report parsing for automated health scoring, and intelligent resource allocation recommendations. Produces a human-AI collaboration model where AI handles pattern recognition and data synthesis while humans retain judgment on stakeholder decisions.
Guiding Principle
AI does not replace the PM; it amplifies their capabilities where data outpaces intuition. A PM with AI predicts schedule slips three sprints before they become visible; without AI, they are detected only once they are already crises. But AI never negotiates with a stakeholder, never manages a team conflict, never makes an ethical decision. The line between amplification and blind delegation is the line between success and disaster.
Assumptions & Limits
Assumes ≥5 historical projects or ≥10 sprints of data for meaningful AI predictions [ASSUMPTION]
Assumes PM tools have API access for data extraction [ASSUMPTION]
Breaks when historical data is sparse, inconsistent, or non-existent — AI predictions require volume
Does not implement ML models — designs specifications for engineering teams to build
Scope limited to PM-domain AI; does not cover product AI or engineering AI use cases
AI recommendations always require human validation before action [PLAN]
Usage
# Identify AI augmentation opportunities for current PM practices
/pm:ai-pm-assistant $PROJECT --type=opportunity-scan
# Design predictive scheduling model
/pm:ai-pm-assistant $PROJECT --type=predictive-schedule --data-source="jira"
# Design NLP status parsing for automated health scoring
/pm:ai-pm-assistant $PROJECT --type=nlp-parsing --input="status-reports"
Recovery analysis: historical failure pattern matching and recovery plan probability assessment
Before Designing
Read the current PM process inventory to understand data collection practices and tool ecosystem
Glob `skills/ai-pm-assistant/references/*.md` for AI-PM integration patterns and case studies
Read historical project data exports to assess data quality and volume for ML feasibility
Grep for existing automation or reporting scripts that could serve as AI integration points
Input Requirements
Current PM processes and data collection practices
Available historical project data (minimum 5 projects or 10 sprints)
PM tool APIs and data export capabilities
Team AI literacy level and organizational AI policies
Specific PM pain points where data-driven insights would help
Protocol
Pain point inventory — Catalog PM activities where data outpaces human processing capacity
Data readiness assessment — Evaluate historical data quality, volume, and accessibility
AI capability mapping — Match PM pain points to AI techniques (prediction, classification, NLP, optimization)
Use case prioritization — Rank AI use cases by PM impact and implementation feasibility
Human-AI boundary design — Define where AI recommends vs. where humans decide (RACI for AI)
Predictive model design — Design schedule/cost/risk prediction models with confidence intervals
NLP parsing specification — Design NLP rules for status report health scoring and sentiment analysis
Integration architecture — Define how AI outputs feed into PM dashboards and workflows
Validation protocol — Design A/B testing to measure AI recommendation accuracy vs. human-only
Adoption roadmap — Phase AI adoption from single-team pilot to portfolio-wide deployment
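The predictive model design step above can be sketched as a minimal velocity-based schedule forecast with a confidence band. The sprint velocities, the ≥10-sprint threshold, and the 1.96 multiplier for a roughly 95% interval are illustrative assumptions, not a prescribed implementation.

```python
import statistics

def forecast_completion(velocities: list[float], remaining_points: float) -> dict:
    """Forecast sprints-to-complete from historical velocity,
    with a rough ~95% confidence band on the velocity estimate."""
    if len(velocities) < 10:
        raise ValueError("Need >= 10 sprints of historical velocity data")
    mean_v = statistics.mean(velocities)
    sd_v = statistics.stdev(velocities)
    # Standard error of the mean velocity
    se = sd_v / len(velocities) ** 0.5
    low_v, high_v = mean_v - 1.96 * se, mean_v + 1.96 * se
    return {
        "expected_sprints": remaining_points / mean_v,
        # Pessimistic bound divides by the low edge of the velocity interval
        "pessimistic_sprints": remaining_points / low_v,
        "optimistic_sprints": remaining_points / high_v,
    }

velocities = [21, 25, 19, 24, 22, 26, 20, 23, 25, 21]
print(forecast_completion(velocities, remaining_points=120))
```

Reporting the pessimistic bound alongside the expectation is what lets the PM flag a likely slip sprints before it is visible in the burn-down.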
Edge Cases
Insufficient historical data (<5 projects): Do not design predictive models. Instead, design data collection framework first and schedule AI feasibility reassessment after ≥10 sprints of clean data. [METRIC]
Organization has AI policy restrictions: Map all AI use cases against organizational AI policy. Flag prohibited uses. Design compliant alternatives (rules-based instead of ML where required). [PLAN]
Team distrusts AI recommendations: Design transparency layer showing AI reasoning. Start with low-stakes use cases (meeting scheduling, metric calculation) before high-stakes (risk prediction). [STAKEHOLDER]
PM tools have no APIs: Design manual data pipeline with scheduled exports. Document API requirements for tool selection criteria in next procurement cycle. [ASSUMPTION]
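The insufficient-data edge case can be enforced as a simple gate before any model design work begins. The thresholds mirror the ≥5 projects / ≥10 sprints assumption stated earlier; the function name and return labels are illustrative.

```python
def ai_feasibility_gate(project_count: int, sprint_count: int) -> str:
    """Route to predictive-model design only when the historical-data
    assumption (>=5 projects or >=10 sprints) holds; otherwise fall
    back to designing a data collection framework first."""
    if project_count >= 5 or sprint_count >= 10:
        return "design-predictive-models"
    return "design-data-collection-framework"

print(ai_feasibility_gate(project_count=2, sprint_count=4))
```

Running the gate first keeps the engagement honest: with sparse data the deliverable is a data collection framework and a scheduled feasibility reassessment, not a model that cannot be trained.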
Example: Good vs Bad
Good AI-PM Design:
| Attribute | Value |
| --- | --- |
| Use cases identified | 8, ranked by ROI |
| Data readiness | Assessed per use case with gap analysis |
| Human-AI boundaries | RACI matrix for AI vs. human decisions |
| Top 3 use cases | Fully specified with input/output/confidence |
| Validation protocol | A/B test design with success criteria |
| Adoption roadmap | 3 phases over 6 months, pilot-first |
Bad AI-PM Design:
A document that lists "use AI for everything" without data readiness assessment, no human-AI boundary definition, and claims AI will "predict project failure with 99% accuracy." Fails because it overpromises AI capability without validating data availability, creates unrealistic expectations, and omits the critical human judgment layer.
Validation Gate
≥5 AI use cases identified with quantified PM impact (hours saved or accuracy gained)
Every use case has data readiness assessment (volume, quality, accessibility)
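The A/B validation protocol reduces to a head-to-head forecast-error comparison between the AI arm and the human-only arm. The sample outcomes below and the choice of mean absolute error as the metric are illustrative assumptions.

```python
def mae(predicted: list[float], actual: list[float]) -> float:
    """Mean absolute error between forecasts and realized outcomes."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical completed-sprint outcomes vs. each arm's forecasts
actual = [10.0, 12.0, 9.0, 14.0]
ai_forecast = [11.0, 12.5, 9.5, 13.0]
human_forecast = [8.0, 14.0, 11.0, 16.0]

ai_err, human_err = mae(ai_forecast, actual), mae(human_forecast, actual)
verdict = "adopt-ai-recommendations" if ai_err < human_err else "keep-human-only"
print(ai_err, human_err, verdict)
```

Only when the AI arm's error is measurably lower should its recommendations graduate from the pilot, and even then humans retain the final decision per the boundary design above.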