Use when the user asks to "run a project discovery retrospective", "review discovery outcomes", "assess discovery effectiveness", "calibrate pipeline parameters", or "measure discovery quality". [EXPLICIT] Activates when a stakeholder needs to conduct a quantitative post-discovery review, measure pipeline execution quality, assess deliverable completeness, evaluate estimation accuracy, or update APEX pipeline parameters based on retrospective findings. [EXPLICIT]
From jm-adk (`npx claudepluginhub javimontano/jm-adk-alfa`).

This skill is limited to using the following tools:
- agents/guardian.md
- agents/lead.md
- agents/specialist.md
- agents/support.md
- evals/evals.json
- knowledge/body-of-knowledge.md
- knowledge/knowledge-graph.md
- prompts/meta.md
- prompts/primary.md
- prompts/variations/deep.md
- prompts/variations/quick.md
- references/body-of-knowledge.md
- references/knowledge-graph.mmd
- references/state-of-the-art.md
- templates/output.docx.md
- templates/output.html

TL;DR: Conducts a quantitative post-discovery retrospective analyzing pipeline execution quality, deliverable completeness, stakeholder satisfaction, estimation accuracy, and methodology fit. Produces measurable insights that improve future discovery cycles and calibrate APEX pipeline parameters.
The discovery retrospective is not a catharsis exercise; it is a calibration instrument. Each discovery cycle must produce data that improves the next one: how long we took, how much we got right, which assumptions were validated. Without metrics, improvement is an illusion. [EXPLICIT]
# Run full discovery retrospective
/pm:discovery-retrospective $PROJECT --type=full
# Assess deliverable quality only
/pm:discovery-retrospective $PROJECT --type=quality-audit
# Update pipeline calibration parameters
/pm:discovery-retrospective $PROJECT --type=calibrate --update="true"
Parameters:
| Parameter | Required | Description |
|---|---|---|
| $PROJECT | Yes | Project identifier |
| --type | Yes | full, quality-audit, calibrate, timeline-analysis |
| --update | No | Apply calibration updates (true/false) |
{TIPO_PROYECTO} variants:

- skills/discovery-retrospective/references/*.md for retrospective scoring rubrics
- [SUPUESTO] tags across deliverables to assess assumption validation rates

Good Discovery Retrospective:
| Attribute | Value |
|---|---|
| Deliverables scored | 100% against quality checklists |
| Timeline analysis | Planned vs actual per phase with variance |
| Assumption validation | X validated, Y invalidated, Z still open |
| Stakeholder satisfaction | Quantified score with driver analysis |
| Improvement actions | ≥5 specific, measurable, assigned |
| Calibration updates | Pipeline parameters adjusted with rationale |
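The "Assumption validation" and "Timeline analysis" rows above lend themselves to mechanical tallying. A minimal sketch, assuming deliverables are markdown files whose [SUPUESTO] tags carry a hypothetical `(status: ...)` suffix and with phase durations supplied by hand; the tag grammar and function names here are illustrative assumptions, not part of the skill:

```python
import re
from pathlib import Path

# Hypothetical tag grammar (not defined by the skill itself): a line such as
#   [SUPUESTO] users will accept SSO (status: validated)
# where the "(status: ...)" suffix is optional; a tag without it counts as open.
TAG = re.compile(
    r"\[SUPUESTO\].*?(?:\(status:\s*(validated|invalidated)\))?$",
    re.MULTILINE,
)

def assumption_counts(deliverables_dir: str) -> dict:
    """Tally [SUPUESTO] tags across all markdown deliverables."""
    counts = {"validated": 0, "invalidated": 0, "open": 0}
    for path in Path(deliverables_dir).glob("**/*.md"):
        for match in TAG.finditer(path.read_text(encoding="utf-8")):
            counts[match.group(1) or "open"] += 1
    return counts

def timeline_variance(planned_days: dict, actual_days: dict) -> dict:
    """Per-phase variance in days; positive means the phase ran over plan."""
    return {phase: actual_days[phase] - planned_days[phase] for phase in planned_days}
```

An untagged status counts as open here, matching the "Z still open" bucket in the table above.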
Bad Discovery Retrospective: A meeting where everyone says "it went well" without metrics, scoring, or timeline analysis. No assumption validation. No calibration updates. Fails because feel-good retrospectives produce no data for improvement — the next discovery cycle will repeat the same patterns. [EXPLICIT]
| Resource | When to read | Location |
|---|---|---|
| Body of Knowledge | Before retrospecting to understand assessment rubrics | references/body-of-knowledge.md |
| State of the Art | When evaluating quantitative retro approaches | references/state-of-the-art.md |
| Knowledge Graph | To link retro to pipeline orchestration | references/knowledge-graph.mmd |
| Use Case Prompts | When facilitating retrospective sessions | prompts/use-case-prompts.md |
| Metaprompts | To generate scoring rubric templates | prompts/metaprompts.md |
| Sample Output | To calibrate expected retrospective report format | examples/sample-output.md |