Discovery Retrospective
TL;DR: Conducts a quantitative post-discovery retrospective analyzing pipeline execution quality, deliverable completeness, stakeholder satisfaction, estimation accuracy, and methodology fit. Produces measurable insights that improve future discovery cycles and calibrate APEX pipeline parameters.
Guiding Principle
The discovery retrospective is not an exercise in catharsis; it is a calibration instrument. Every discovery cycle must produce data that improves the next one: how long we took, how accurate we were, which assumptions held. Without metrics, improvement is an illusion.
Assumptions & Limits
- Assumes discovery pipeline has been executed with deliverables produced [PLAN]
- Assumes session changelog and decision log are available for timeline analysis [SUPUESTO]
- Breaks when no session data was captured — retrospective becomes opinion-based, not data-driven
- Estimation accuracy measurement requires post-discovery actuals, which may not yet exist
- Does not replace sprint retrospectives — this is pipeline-level, not ceremony-level
- Calibration updates require organizational authority to modify pipeline parameters [STAKEHOLDER]
Usage
# Run full discovery retrospective
/pm:discovery-retrospective $PROJECT --type=full
# Assess deliverable quality only
/pm:discovery-retrospective $PROJECT --type=quality-audit
# Update pipeline calibration parameters
/pm:discovery-retrospective $PROJECT --type=calibrate --update="true"
Parameters:
| Parameter | Required | Description |
|---|---|---|
| $PROJECT | Yes | Project identifier |
| --type | Yes | full, quality-audit, calibrate, timeline-analysis |
| --update | No | Apply calibration updates (true/false) |
Service Type Routing
{TIPO_PROYECTO} variants (a routing sketch follows this list):
- Agile: Focus on sprint-0/inception quality, backlog readiness, DoR compliance
- Waterfall: Focus on requirements completeness, estimation accuracy, phase-gate quality
- SAFe: Focus on PI planning readiness, ART alignment, architectural runway assessment
- Kanban: Focus on flow establishment quality, WIP calibration, service class definition
- Hybrid: Assess methodology selection decision quality and integration effectiveness
- PMO: Aggregate discovery metrics across portfolio for organizational learning
- Recovery: Assess what the original discovery missed that led to project distress
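As a rough illustration of how this routing could be encoded, here is a minimal sketch; the dictionary keys and the helper name `focus_areas` are hypothetical, not part of any defined APEX API:

```python
# Hypothetical routing table mapping {TIPO_PROYECTO} to the retrospective
# focus areas listed above. Keys and helper name are illustrative only.
RETRO_FOCUS: dict[str, list[str]] = {
    "agile": ["sprint-0/inception quality", "backlog readiness", "DoR compliance"],
    "waterfall": ["requirements completeness", "estimation accuracy", "phase-gate quality"],
    "safe": ["PI planning readiness", "ART alignment", "architectural runway"],
    "kanban": ["flow establishment quality", "WIP calibration", "service class definition"],
    "hybrid": ["methodology selection decision quality", "integration effectiveness"],
    "pmo": ["portfolio-level discovery metrics aggregation"],
    "recovery": ["gaps in the original discovery that led to distress"],
}

def focus_areas(tipo_proyecto: str) -> list[str]:
    """Return the retrospective focus areas for a given project type."""
    key = tipo_proyecto.lower()
    if key not in RETRO_FOCUS:
        raise ValueError(f"Unknown project type: {tipo_proyecto!r}")
    return RETRO_FOCUS[key]
```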
Before Retrospecting
- Read all discovery deliverables produced during the pipeline for completeness assessment
- Read the session changelog to reconstruct timeline and decision history
- Glob skills/discovery-retrospective/references/*.md for retrospective scoring rubrics
- Grep for [SUPUESTO] tags across deliverables to assess assumption validation rates (a tag-scanning sketch follows this list)
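A minimal sketch of that Grep step, assuming deliverables live as Markdown files under a project directory (the tag set is taken from this document; the directory layout and function name are illustrative):

```python
import re
from pathlib import Path

# Count evidence tags across deliverables to estimate how many assumptions
# ([SUPUESTO]) and inferences ([INFERENCIA]) the discovery produced.
TAG_RE = re.compile(r"\[(PLAN|SCHEDULE|METRIC|INFERENCIA|SUPUESTO|STAKEHOLDER)\]")

def tag_counts(deliverables_dir: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for md_file in Path(deliverables_dir).glob("**/*.md"):
        for tag in TAG_RE.findall(md_file.read_text(encoding="utf-8")):
            counts[tag] = counts.get(tag, 0) + 1
    return counts
```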
Input Requirements
- All discovery deliverables produced during the pipeline
- Session changelog and decision log
- Time spent per phase and deliverable
- Stakeholder feedback (formal and informal)
- Original discovery scope vs. actual coverage
Protocol
- Deliverable inventory — Catalog all outputs produced during discovery
- Completeness scoring — Rate each deliverable against its quality checklist (a scoring sketch follows this list)
- Timeline analysis — Compare planned vs. actual time per phase
- Estimation accuracy — Measure how discovery estimates tracked against actuals (if available)
- Stakeholder satisfaction — Collect and quantify stakeholder feedback
- Assumption validation — Review which [SUPUESTO] tags were validated vs. invalidated
- Pipeline friction — Identify bottlenecks, rework cycles, and blocked phases
- Methodology fit — Assess whether selected methodology proved appropriate
- Improvement actions — Define specific, measurable improvements for next discovery
- Calibration update — Update APEX pipeline parameters based on findings
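The sketch below illustrates steps 2, 3, and 6 under assumed field names and scales; none of these values are fixed APEX parameters:

```python
from dataclasses import dataclass

@dataclass
class DeliverableScore:
    name: str
    completeness: float  # 0.0-1.0 against the quality checklist

def timeline_variance(planned_days: float, actual_days: float) -> float:
    """Signed variance as a fraction of plan; positive means overrun."""
    return (actual_days - planned_days) / planned_days

def assumption_validation_rate(validated: int, invalidated: int, still_open: int) -> float:
    """Share of [SUPUESTO] tags that were resolved either way."""
    resolved = validated + invalidated
    total = resolved + still_open
    return resolved / total if total else 0.0
```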
Edge Cases
- No session data captured during discovery: Use deliverable timestamps and git history as proxy. Tag all timeline findings as [INFERENCIA]. Recommend session tracking for future cycles. [SUPUESTO]
- Stakeholder satisfaction below threshold: Deep-dive into dissatisfaction drivers. Separate content quality issues from communication/expectation issues. Design targeted improvements. [STAKEHOLDER]
- Critical deliverables missing or incomplete: Flag as Critical finding. Root-cause analysis: was it scope creep, capacity issue, or skill gap? Design preventive measure. [PLAN]
- Estimation variance >50%: Analyze whether variance stems from scope change, effort underestimation, or complexity underestimation. Calibrate estimation parameters accordingly; see the variance check below. [METRIC]
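For that variance edge case, a simple check like the following could flag when recalibration is needed; the 0.5 threshold mirrors the >50% trigger above, while the guard and naming are illustrative:

```python
VARIANCE_THRESHOLD = 0.50  # the >50% trigger defined in this skill

def needs_calibration(planned: float, actual: float) -> bool:
    """True when absolute estimation variance exceeds the 50% trigger."""
    if planned <= 0:
        raise ValueError("planned effort must be positive")
    return abs(actual - planned) / planned > VARIANCE_THRESHOLD
```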
Example: Good vs Bad
Good Discovery Retrospective:
| Attribute | Value |
|---|---|
| Deliverables scored | 100% against quality checklists |
| Timeline analysis | Planned vs actual per phase with variance |
| Assumption validation | X validated, Y invalidated, Z still open |
| Stakeholder satisfaction | Quantified score with driver analysis |
| Improvement actions | ≥5 specific, measurable, assigned |
| Calibration updates | Pipeline parameters adjusted with rationale |
Bad Discovery Retrospective:
A meeting where everyone says "it went well" without metrics, scoring, or timeline analysis. No assumption validation. No calibration updates. Fails because feel-good retrospectives produce no data for improvement — the next discovery cycle will repeat the same patterns.
Validation Gate
Escalation Triggers
- Discovery quality score below 60% across multiple dimensions
- Stakeholder satisfaction below acceptable threshold
- Critical deliverables missing or fundamentally incomplete
- Estimation variance exceeding 50% on key dimensions (see the gate sketch below)
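A sketch of how these triggers could be evaluated mechanically. The 60% quality floor and 50% variance ceiling come from this section; the satisfaction floor of 3.5 on a 1-5 scale is a placeholder, since the document leaves that threshold to the organization:

```python
def escalation_triggers(quality_score: float, satisfaction: float,
                        critical_missing: bool, variance: float) -> list[str]:
    """Return the list of triggered escalation conditions."""
    triggers = []
    if quality_score < 0.60:
        triggers.append("discovery quality score below 60%")
    if satisfaction < 3.5:  # placeholder threshold on a 1-5 scale
        triggers.append("stakeholder satisfaction below threshold")
    if critical_missing:
        triggers.append("critical deliverables missing or incomplete")
    if variance > 0.50:
        triggers.append("estimation variance exceeding 50%")
    return triggers
```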
Additional Resources
| Resource | When to read | Location |
|---|---|---|
| Body of Knowledge | Before retrospecting to understand assessment rubrics | references/body-of-knowledge.md |
| State of the Art | When evaluating quantitative retro approaches | references/state-of-the-art.md |
| Knowledge Graph | To link retro to pipeline orchestration | references/knowledge-graph.mmd |
| Use Case Prompts | When facilitating retrospective sessions | prompts/use-case-prompts.md |
| Metaprompts | To generate scoring rubric templates | prompts/metaprompts.md |
| Sample Output | To calibrate expected retrospective report format | examples/sample-output.md |
Output Configuration
- Language: Spanish (Latin American, business register)
- Evidence: [PLAN], [SCHEDULE], [METRIC], [INFERENCIA], [SUPUESTO], [STAKEHOLDER]
- Branding: #2563EB royal blue, #F59E0B amber (NEVER green), #0F172A dark (constants sketched below)
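Expressed as constants a report generator might consume (the values are verbatim from this section; the structure and key names are assumptions):

```python
OUTPUT_CONFIG = {
    "language": "es-419",  # Latin American Spanish, business register
    "evidence_tags": ["PLAN", "SCHEDULE", "METRIC", "INFERENCIA", "SUPUESTO", "STAKEHOLDER"],
    "brand_colors": {
        "primary": "#2563EB",  # royal blue
        "accent": "#F59E0B",   # amber; never green
        "dark": "#0F172A",
    },
}
```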
Sub-Agents
Engagement Reflector Agent
Core Responsibility
Reflects on the overall engagement: what worked, what did not, and what surprised us. This agent operates autonomously within the discovery retrospective domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework. All four sub-agents in this skill share the seven-step process and output format described below.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication (output shapes sketched below).
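Hypothetical data shapes for these three outputs; the field names are illustrative, not a schema this skill defines:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str
    evidence_tag: str  # e.g. "[METRIC]"
    severity: str      # e.g. "Critical", "Major", "Minor"

@dataclass
class Recommendation:
    action: str
    owner: str
    deadline: str
    success_criteria: str

@dataclass
class RetroReport:
    findings: list[Finding] = field(default_factory=list)
    recommendations: list[Recommendation] = field(default_factory=list)
    executive_summary: list[str] = field(default_factory=list)  # 3-5 bullets
```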
Lessons Synthesizer Agent
Core Responsibility
Synthesizes lessons into actionable recommendations for future engagements and organizational learning.
Process and Output Format
Follows the shared seven-step process and standard output format described under the Engagement Reflector Agent.
Methodology Effectiveness Assessor Agent
Core Responsibility
Assesses effectiveness of the methodology used: ceremony value, artifact usefulness, process adherence.
Process and Output Format
Follows the shared seven-step process and standard output format described under the Engagement Reflector Agent.
Team Dynamics Evaluator Agent
Core Responsibility
Evaluates team dynamics during the engagement: collaboration, communication, and decision-making effectiveness.
Process and Output Format
Follows the shared seven-step process and standard output format described under the Engagement Reflector Agent.