Continuous Improvement & Kaizen
TL;DR: Implements systematic continuous improvement through retrospective analysis, PDCA cycles, kaizen events, and process optimization. Identifies improvement opportunities from project data, quality audits, team feedback, and lessons learned. Prioritizes improvements by effort-impact ratio and tracks implementation.
Guiding Principle
Continuous improvement is not an event; it is a discipline. Every sprint, every phase, every deliverable is an opportunity to improve the process that produced it. What matters is not how many improvements are identified but how many are implemented and verified. An unverified improvement is an intention, not an improvement.
Assumptions & Limits
- Assumes retrospective or feedback data exists to identify improvement opportunities [SUPUESTO]
- Assumes team has capacity to implement improvements alongside delivery work [PLAN]
- Breaks when improvement backlog grows indefinitely without implementation — signals deeper issue
- Root cause analysis requires honest team participation; blame culture produces sanitized inputs
- Does not implement organizational process changes — those require PMO-level authority [STAKEHOLDER]
- Improvement verification requires before/after metrics; without metrics, improvement is assumed
Usage
```
# Identify improvement opportunities from retrospective data
/pm:continuous-improvement $PROJECT --type=identify --source="retrospectives"

# Run PDCA cycle for a specific improvement
/pm:continuous-improvement $PROJECT --type=pdca --improvement="reduce-rework"

# Track improvement implementation across the project
/pm:continuous-improvement $PROJECT --type=track --status="all"
```
Parameters:
| Parameter | Required | Description |
|---|---|---|
| $PROJECT | Yes | Project identifier |
| --type | Yes | identify, pdca, track, verify, share |
| --source | No | Data source (retrospectives, audits, metrics, feedback) |
| --improvement | No | Specific improvement to manage |
| --status | No | Filter by status (open, in-progress, verified, standardized) |
Service Type Routing
{TIPO_PROYECTO}: Agile uses sprint retrospectives; Kanban uses service delivery reviews; SAFe uses Inspect & Adapt; Waterfall uses phase lessons learned; All types use PDCA.
Before Improving
- Read retrospective outputs and team feedback to harvest improvement candidates
- Read quality audit findings for process improvement opportunities
- Glob skills/continuous-improvement/references/*.md for improvement frameworks
- Grep for process metrics and trend data to identify data-driven improvement areas
Input Requirements
- Retrospective outputs and team feedback
- Quality audit findings
- Process metrics and trend data
- Lessons learned register
- Customer/stakeholder satisfaction data
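These inputs can be normalized into a single opportunity record before analysis. A minimal Python sketch; the field names, source labels, and validation rule are illustrative assumptions, not part of this skill's contract:

```python
# Minimal sketch of a normalized improvement-opportunity record harvested
# from the input sources listed above. Field names are assumptions.
from dataclasses import dataclass

SOURCES = {"retrospectives", "audits", "metrics", "feedback", "lessons-learned"}

@dataclass
class Opportunity:
    title: str
    source: str                     # one of SOURCES
    evidence: str                   # pointer to the originating artifact
    root_cause: str | None = None   # filled in later by 5 Whys / fishbone

    def __post_init__(self) -> None:
        if self.source not in SOURCES:
            raise ValueError(f"unknown source: {self.source}")

opp = Opportunity(
    title="reduce-rework",
    source="retrospectives",
    evidence="sprint-14 retro, item 3",
)
```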
Process (Protocol)
- Opportunity identification — Collect improvement ideas from all sources
- Root cause analysis — Use 5 Whys, fishbone, or Pareto to find root causes
- Prioritization — Rank by effort-impact matrix, quick wins first (see the sketch following this list)
- Action design — Define specific improvement actions with owners and deadlines
- PDCA cycle — Plan the improvement, Do (implement it), Check (measure the results), Act (standardize or re-plan)
- Metric tracking — Define how improvement will be measured
- Implementation — Execute improvement actions
- Verification — Confirm improvement achieved desired result
- Standardization — Embed successful improvements into standard processes
- Knowledge sharing — Share improvements across teams and projects
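To make the prioritization step concrete, here is a minimal Python sketch of an effort-impact matrix that surfaces quick wins first. The Improvement fields, the 1-5 scales, and the quadrant thresholds are illustrative assumptions:

```python
# Minimal sketch of effort-impact prioritization: classify each candidate
# into a quadrant, then sort quick wins to the top of the backlog.
from dataclasses import dataclass

@dataclass
class Improvement:
    name: str
    impact: int  # 1 (low) .. 5 (high), estimated by the team
    effort: int  # 1 (low) .. 5 (high), estimated by the team

    @property
    def quadrant(self) -> str:
        if self.impact >= 3 and self.effort <= 2:
            return "quick win"        # implement first
        if self.impact >= 3:
            return "major project"    # plan as dedicated work
        if self.effort <= 2:
            return "fill-in"          # do when capacity allows
        return "reconsider"           # high effort, low impact

def prioritize(backlog: list[Improvement]) -> list[Improvement]:
    # Quick wins first, then by impact density (impact per unit of effort).
    order = {"quick win": 0, "major project": 1, "fill-in": 2, "reconsider": 3}
    return sorted(backlog, key=lambda i: (order[i.quadrant], -i.impact / i.effort))

backlog = [
    Improvement("reduce-rework", impact=5, effort=2),
    Improvement("automate-deploys", impact=4, effort=5),
    Improvement("rename-channels", impact=1, effort=1),
]
for item in prioritize(backlog):
    print(f"{item.name}: {item.quadrant}")
```

Sorting by quadrant first and impact density second keeps cheap, high-leverage items at the top of the backlog without discarding larger bets.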
Edge Cases
- Same issue recurring in 3+ retrospectives: Escalate from team-level to organizational-level problem. Root cause is likely structural, not procedural. Require management intervention. [STAKEHOLDER]
- Improvement backlog growing faster than implementation: Pause identification. Focus sprint on implementing top 3 improvements. Reset backlog capacity before adding new items. [PLAN]
- Team disengaged from improvement process: Diagnose cause (fatigue, futility, blame culture). Demonstrate quick win implementation to rebuild trust. Reduce improvement scope to 1 item per cycle. [METRIC]
- No before-state metrics available for verification: Establish metrics NOW for the next improvement cycle. For the current cycle, use a qualitative team assessment tagged [INFERENCIA] (see the sketch after this list). [METRIC]
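For the verification edge case above, a minimal sketch of a before/after check, assuming a simple relative-change threshold; when no baseline exists it falls back to the tagged qualitative assessment. The function name and the 10% threshold are assumptions:

```python
# Minimal sketch of improvement verification: compare a before-state metric
# against the after-state, or fall back to a qualitative assessment when no
# baseline exists. Thresholds are assumptions [SUPUESTO].
from typing import Optional

def verify_improvement(
    metric: str,
    before: Optional[float],
    after: float,
    lower_is_better: bool = True,
    min_relative_change: float = 0.10,  # require at least a 10% shift
) -> str:
    if before is None:
        # No baseline: cannot verify quantitatively; record for next cycle.
        return f"{metric}: qualitative team assessment only [INFERENCIA]"
    change = (before - after) / before if lower_is_better else (after - before) / before
    if change >= min_relative_change:
        return f"{metric}: verified, {change:.0%} improvement [METRIC]"
    return f"{metric}: NOT verified ({change:+.0%}); do not standardize [METRIC]"

print(verify_improvement("rework-hours-per-sprint", before=18.0, after=11.0))
print(verify_improvement("defect-escape-rate", before=None, after=0.04))
```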
Example: Good vs Bad
Good Continuous Improvement:
| Attribute | Value |
|---|---|
| Opportunities identified | 12 from 4 data sources |
| Root cause analysis | 5 Whys or fishbone for each |
| Prioritization | Effort-impact matrix with 4 quick wins |
| PDCA cycles | Complete for top 3 improvements |
| Verification | Before/after metrics compared |
| Standardization | 2 improvements embedded in process |
Bad Continuous Improvement:
A retrospective action item list with "improve testing" and "communicate better" — no root cause analysis, no prioritization, no PDCA cycle, no metrics, no verification. Fails because vague improvement intentions without structured implementation and measurement never produce actual improvement.
Validation Gate
Escalation Triggers
- Same issue recurring in 3+ retrospectives without resolution (a detection sketch follows this list)
- Improvement backlog growing faster than implementation
- Team disengaged from improvement process
- Process metrics degrading despite improvement efforts
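Two of these triggers lend themselves to automated checks. A minimal sketch, assuming retrospective issues are normalized to labels and backlog intake/completion counts are available per cycle; the data shapes and threshold are illustrative assumptions:

```python
# Minimal sketch of two escalation checks: issue recurrence across
# retrospectives and backlog intake outpacing implementation.
from collections import Counter

RECURRENCE_THRESHOLD = 3  # same issue in 3+ retrospectives

def recurring_issues(retro_issues: list[list[str]]) -> list[str]:
    # retro_issues: one list of normalized issue labels per retrospective.
    counts = Counter(issue for retro in retro_issues for issue in set(retro))
    return [issue for issue, n in counts.items() if n >= RECURRENCE_THRESHOLD]

def backlog_outpacing(added_per_cycle: list[int], done_per_cycle: list[int]) -> bool:
    # Escalate when intake has exceeded implementation over recent cycles.
    return sum(added_per_cycle) > sum(done_per_cycle)

retros = [["flaky-tests", "unclear-requirements"],
          ["flaky-tests"],
          ["flaky-tests", "slow-reviews"]]
print(recurring_issues(retros))                 # ['flaky-tests']: escalate [STAKEHOLDER]
print(backlog_outpacing([5, 4, 6], [1, 2, 1]))  # True: pause identification [PLAN]
```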
Additional Resources
| Resource | When to read | Location |
|---|---|---|
| Body of Knowledge | Before starting to understand PDCA and kaizen frameworks | references/body-of-knowledge.md |
| State of the Art | When exploring advanced improvement methodologies | references/state-of-the-art.md |
| Knowledge Graph | To link improvement to retrospectives and quality | references/knowledge-graph.mmd |
| Use Case Prompts | When facilitating improvement workshops | prompts/use-case-prompts.md |
| Metaprompts | To generate root cause analysis templates | prompts/metaprompts.md |
| Sample Output | To calibrate expected improvement report format | examples/sample-output.md |
Output Configuration
- Language: Spanish (Latin American, business register)
- Evidence: [PLAN], [SCHEDULE], [METRIC], [INFERENCIA], [SUPUESTO], [STAKEHOLDER]
- Branding: #2563EB royal blue, #F59E0B amber (NEVER green), #0F172A dark
Sub-Agents
All four sub-agents operate autonomously within the continuous improvement domain, applying systematic analysis and producing structured outputs that integrate with the broader project management framework. They share the process and output format listed after the roster.

| Sub-Agent | Core Responsibility |
|---|---|
| Improvement Backlog Curator | Curates the improvement backlog from retrospectives, audits, and metrics; prioritizes by impact and effort. |
| Kaizen Event Designer | Designs focused kaizen events for targeted process improvements with measurable outcomes. |
| Metrics Improvement Tracker | Tracks improvement metrics over time: process efficiency, defect rates, cycle times, and satisfaction scores. |
| PDCA Cycle Manager | Manages Plan-Do-Check-Act cycles for process improvement with hypothesis-driven experimentation. |

Shared Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis. Validate data quality and completeness before proceeding.
- Analyze Context. Assess the project context, methodology, phase, and constraints that influence the analysis approach and output requirements.
- Apply Framework. Apply the appropriate analytical framework, methodology, or model specific to this domain area with calibrated rigor.
- Generate Findings. Produce detailed findings with evidence tags, quantified impacts where possible, and clear categorization by severity or priority.
- Validate Results. Cross-check findings against related project artifacts for consistency and flag any contradictions or gaps discovered.
- Formulate Recommendations. Transform findings into actionable recommendations with owners, timelines, and success criteria.
- Deliver Output. Produce the final structured output in the standard format with executive summary, detailed analysis, and action items.

Shared Output Format
- Analysis Report — Structured findings with evidence tags, severity ratings, and cross-references.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
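To make the PDCA Cycle Manager's responsibility concrete, here is a minimal sketch of the per-improvement record it could track: a hypothesis-driven cycle with an explicit verification gate before standardization. The field names, states, and advance logic are assumptions for illustration, not the agent's actual interface:

```python
# Minimal sketch of a hypothesis-driven PDCA record. A failed Check returns
# the cycle to Plan; only a verified improvement reaches Act (standardize).
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    PLAN = "plan"
    DO = "do"
    CHECK = "check"
    ACT = "act"

@dataclass
class PdcaCycle:
    improvement: str
    hypothesis: str          # "If we do X, metric Y will change by Z"
    metric: str
    phase: Phase = Phase.PLAN
    verified: bool = False
    notes: list[str] = field(default_factory=list)

    def advance(self, verified: bool | None = None) -> None:
        order = [Phase.PLAN, Phase.DO, Phase.CHECK, Phase.ACT]
        if self.phase == Phase.CHECK:
            if verified is None:
                raise ValueError("CHECK requires a verification result")
            self.verified = verified
            if not verified:
                # Failed check: re-plan with a revised hypothesis;
                # never standardize an unverified change.
                self.phase = Phase.PLAN
                self.notes.append("hypothesis not confirmed; re-planning")
                return
        self.phase = order[order.index(self.phase) + 1]

cycle = PdcaCycle("reduce-rework", "If we add design reviews, rework drops 20%", "rework-hours")
cycle.advance()                 # PLAN -> DO
cycle.advance()                 # DO -> CHECK
cycle.advance(verified=True)    # CHECK -> ACT (standardize)
print(cycle.phase, cycle.verified)
```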