From rkit

Structured engineering retrospective after PDCA Report completion. Triggers: retro, retrospective, 회고, 振り返り, 回顾, retrospectiva, rétrospective, Retrospektive, retrospettiva

Install with:

```shell
npx claudepluginhub solitasroh/rkit --plugin rkit
```

This skill is limited to using the following tools:
- Creates isolated Git worktrees for feature branches, with prioritized directory selection, gitignore safety checks, automatic project setup for Node/Python/Rust/Go, and baseline verification.
- Executes implementation plans in the current session by dispatching a fresh subagent per independent task, with two-stage reviews: spec compliance, then code quality.
- Dispatches parallel agents to independently tackle two or more tasks (e.g., separate test failures or subsystems) that share no state or dependencies.
Before running a retrospective, verify:
```shell
# Verify the report exists
ls docs/04-report/features/{feature}.report.md
```
If the report does not exist, inform the user:
"No PDCA Report found for '{feature}'. Run /pdca report {feature} first."
Extract from the Report document:
| Metric | Source |
|---|---|
| Match Rate | Report "Match Rate" or "일치율" |
| Iteration Count | Number of /pdca iterate cycles |
| Duration | Plan date to Report date |
| Domain | MCU / MPU / WPF |
| Platform | stm32, imx6, wpf, etc. |
Also collect from .rkit/state/:

- pdca-status.json -- PDCA phase transition history
- benchmark-history.json -- Resource usage trend during the feature

Evaluate and document successes across these dimensions:
| Dimension | Guiding Questions |
|---|---|
| AI Assistance | Did domain skills provide accurate guidance? |
| Domain Skill Usage | Which skills were invoked? Were they sufficient? |
| Automation Level | How much was automated vs manual intervention? |
| PDCA Adherence | Did the team follow Plan -> Design -> Do -> Check -> Act? |
| First-time Quality | Was Match Rate > 90% on first Check? |
| Build/Resource Budget | Did benchmarks stay within thresholds throughout? |
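Two of the checks above can be derived mechanically from the state files. This sketch assumes pdca-status.json holds a `"transitions"` array of phase records; the real rkit schema is not shown in this doc, so adapt the key names:

```python
import json
from pathlib import Path

def count_iterations(status_path: Path) -> int:
    """Count /pdca iterate cycles in the phase-transition history.

    Assumed layout: {"transitions": [{"phase": "plan"}, {"phase": "iterate"}, ...]}
    """
    history = json.loads(status_path.read_text())
    return sum(1 for t in history.get("transitions", []) if t.get("phase") == "iterate")

def first_time_quality(match_rate: float) -> bool:
    """First-time quality gate from the table: Match Rate > 90% on first Check."""
    return match_rate > 90
```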
Identify friction points:
| Dimension | Guiding Questions |
|---|---|
| Iteration Count | If > 2 iterations, what caused rework? |
| Context Loss | Were there points where AI lacked project context? |
| Missing Skills | Was there a domain need not covered by existing skills? |
| Tool Gaps | Were there manual steps that should be automated? |
| Documentation Drift | Did Design docs stay in sync with implementation? |
| Review Bottlenecks | Were there delays in code review or verification? |
Generate concrete, actionable improvements:
### Action Items
- [ ] **{Category}**: {Specific action} — Owner: {person/team}, Due: {date}
Categories mirror the `category` field in the learnings schema below: skill, config (`.rkit/` configuration), workflow, tooling, knowledge.

Save structured learnings to .rkit/state/learnings.json:
```json
{
  "learnings": [
    {
      "id": "L-{NNN}",
      "date": "{timestamp}",
      "feature": "{feature}",
      "domain": "{mcu|mpu|wpf}",
      "category": "{skill|config|workflow|tooling|knowledge}",
      "summary": "{One-line summary}",
      "detail": "{Detailed lesson}",
      "action": "{What to do differently next time}",
      "applied": false
    }
  ]
}
```
If the file already exists, append to the learnings array. Assign the next sequential ID.
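The append-with-next-ID step could be sketched like this (the helper name is illustrative, not part of rkit):

```python
import json
from pathlib import Path

def append_learning(path: Path, entry: dict) -> str:
    """Append one learning to learnings.json, assigning the next sequential L-NNN id.

    Creates the file with an empty "learnings" array when it does not exist.
    """
    data = json.loads(path.read_text()) if path.is_file() else {"learnings": []}
    used = [
        int(item["id"].split("-")[1])
        for item in data["learnings"]
        if str(item.get("id", "")).startswith("L-")
    ]
    new_id = f"L-{max(used, default=0) + 1:03d}"
    data["learnings"].append({**entry, "id": new_id, "applied": False})
    path.write_text(json.dumps(data, indent=2, ensure_ascii=False))
    return new_id
```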
Generate docs/04-report/{feature}.retro.md:
```markdown
# Retrospective: {feature}
**Date**: {date}
**Domain**: {domain}
**Platform**: {platform}
**Duration**: {start_date} to {end_date} ({N} days)

## PDCA Metrics
| Metric | Value |
|--------|-------|
| Match Rate | {N}% |
| Iterations | {N} |
| Duration | {N}d |
| Skills Used | {list} |

## What Went Well
- {item 1}
- {item 2}
- ...

## What Could Improve
- {item 1}
- {item 2}
- ...

## Action Items
- [ ] {action 1}
- [ ] {action 2}
- ...

## Lessons Learned
| ID | Category | Summary |
|-------|----------|---------|
| L-001 | {cat} | {summary} |

---
Generated by rkit retro skill
```
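Filling that skeleton from the collected metrics could be sketched as follows (abridged template, illustrative function name):

```python
from datetime import date

# Abridged version of the retrospective skeleton above.
RETRO_TEMPLATE = """# Retrospective: {feature}
**Date**: {today}
**Domain**: {domain}
**Platform**: {platform}

## PDCA Metrics
| Metric | Value |
|--------|-------|
| Match Rate | {match_rate}% |
| Iterations | {iterations} |

---
Generated by rkit retro skill
"""

def render_retro(feature: str, domain: str, platform: str,
                 match_rate: int, iterations: int) -> str:
    """Return the document body for docs/04-report/{feature}.retro.md."""
    return RETRO_TEMPLATE.format(
        feature=feature, today=date.today().isoformat(),
        domain=domain, platform=platform,
        match_rate=match_rate, iterations=iterations,
    )
```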
After /pdca report {feature} completes, suggest:
"Report generated. Run retro {feature} to capture lessons learned."
When starting a new feature with /pdca plan, check .rkit/state/learnings.json for entries with `applied: false` and surface them; set `applied: true` once a learning has been acted on.

When multiple retrospectives exist, analyze trends across them.

| Domain | Retro Focus Areas |
|---|---|
| MCU | Flash/RAM budget adherence, MISRA compliance, HardFault count |
| MPU | DTS validation pass rate, build time, rootfs size control |
| WPF | Binding error count, MVVM compliance, publish size |
Each domain adds domain-specific metrics to the retrospective automatically based on project detection.
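The domain table can be encoded as a simple lookup. The mapping below mirrors the table; rkit's actual project-detection logic is not shown in this doc:

```python
# Mapping taken from the domain table above.
DOMAIN_METRICS: dict[str, list[str]] = {
    "mcu": ["Flash/RAM budget adherence", "MISRA compliance", "HardFault count"],
    "mpu": ["DTS validation pass rate", "build time", "rootfs size control"],
    "wpf": ["Binding error count", "MVVM compliance", "publish size"],
}

def retro_focus_areas(domain: str) -> list[str]:
    """Return the extra retro metrics for a detected domain (empty if unknown)."""
    return DOMAIN_METRICS.get(domain.lower(), [])
```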