Use when capturing learnings and updating docs after review, when user says "reflect", "retrospective", "update docs", or after /harness:review completes
Full documentation reconciliation, conversation mining, and retrospective. Run after /harness:review, before /harness:complete.
```bash
/harness:reflect          # Diff from active plan creation or last 5 commits
/harness:reflect HEAD~3   # Diff the last 3 commits
```
IMMEDIATELY execute this workflow:
Determine the diff scope:
- If a `{ref}` argument was given: `git diff {ref}...HEAD`
- Otherwise, check `docs/exec-plans/active/`. If an active plan exists, diff from its creation date.
- Otherwise, default to `git diff HEAD~5...HEAD`

Build a change profile from the diff:
```bash
git diff {ref}...HEAD --stat
git diff {ref}...HEAD --name-only
```
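Building the change profile can be sketched as a directory tally over the `--name-only` output. A minimal sketch (the file list below is illustrative and stands in for real `git diff` output):

```shell
# Sketch: tally which directories the diff touches, most-touched first.
# The printf list stands in for `git diff {ref}...HEAD --name-only` output.
printf '%s\n' src/tui/app.rs src/tui/view.rs docs/TUI.md \
  | xargs -n1 dirname \
  | sort | uniq -c | sort -rn
```

The counts give a quick signal of which subsystems (and therefore which docs) the change concentrates in.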
Extract:
Read the Documentation Map from CLAUDE.md (just the table — not the whole file).
Match the change profile against Documentation Map categories:
- `docs/ARCHITECTURE.md`
- `docs/DESIGN.md`
- `docs/{DOMAIN}.md`

Read ONLY the matched Tier 2 files. Scan their "Deep Docs" tables. If any Tier 3 files are relevant to the diff, read those too (on-demand).
If docs/adrs/ exists, list ADR filenames and read title lines. Load any ADRs whose topic overlaps with the change profile.
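The ADR title scan can be sketched as follows (the directory and ADR file are illustrative stand-ins for `docs/adrs/`):

```shell
# Sketch: list each ADR filename with its title line (first markdown heading).
d=$(mktemp -d)                                 # stand-in for docs/adrs/
printf '# ADR-001: Use SQLite for the local cache\n\nContext...\n' \
  > "$d/0001-sqlite.md"
for f in "$d"/*.md; do
  printf '%s: %s\n' "$f" "$(grep -m1 '^# ' "$f")"
done
```

Reading only the title lines keeps this step cheap; full ADR bodies are loaded only on topic overlap.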
For each loaded doc, check if the diff contradicts or obsoletes anything:
Deleted-code audit: Extract deleted files/directories from the diff (--diff-filter=D). For each:
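A minimal sketch of this audit loop, with a stand-in deleted path and doc tree (real input comes from `git diff --name-only --diff-filter=D`):

```shell
# Sketch: for each deleted path, find docs that still reference it.
d=$(mktemp -d)                                 # stand-in for the repo root
mkdir -p "$d/docs"
echo 'The event loop lives in src/tui/app.rs.' > "$d/docs/TUI.md"
for deleted in src/tui/app.rs; do              # stand-in for --diff-filter=D output
  grep -rl "$deleted" "$d/docs" | while read -r doc; do
    echo "stale: $doc mentions $deleted"
  done
done
```

Each hit is a candidate stale reference to fix in the next step.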
For each stale finding, fix it in-place:
- If a doc is now entirely obsolete (e.g., `docs/TUI.md`), delete the file or replace its contents with a tombstone pointing to what replaced it

If an active plan exists in `docs/exec-plans/active/`:
Scan the current conversation history for:
Categorize each finding as:
Codify findings:
- If `docs/adrs/` exists, create a new ADR with status Proposed. If not, inform the user and suggest `/adr:init`.
- For findings that amend an existing ADR, apply `/adr:update` inline.

14.5. Write learnings from conversation mining:
- Append each learning to `docs/LEARNINGS.md`. If `docs/LEARNINGS.md` doesn't exist, create it with the scaffold:
```markdown
# Learnings

Persistent learnings captured across sessions. Append-only, merge-friendly.

Status: `active` | `superseded`
Categories: `architecture` | `testing` | `patterns` | `workflow` | `debugging` | `performance` | `review-escape`

---
```
Build each ID from the current date (`YYYYMMDD`) and a short kebab-case slug from the learning topic. Format: `L-YYYYMMDD-slug`. See `references/learnings-format.md` § "ID Format" for details.
```markdown
### L-{YYYYMMDD}-{slug}: {one-line insight}

- status: active
- category: {matching domain: architecture|testing|patterns|workflow|debugging|performance|review-escape}
- source: /harness:reflect {YYYY-MM-DD}
- branch: {current git branch}

{Actionable recommendation for future sessions.}

---
```
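The ID construction (current date plus kebab-case slug) can be sketched as follows; the topic string is illustrative:

```shell
# Sketch: derive L-YYYYMMDD-slug from today's date and a learning topic.
topic='Retry logic needs jitter'                # illustrative topic
slug=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/-$//')                               # squeeze non-alphanumerics to '-'
echo "L-$(date +%Y%m%d)-$slug"
```

The slug stays stable for a given topic, so re-running reflect on the same insight produces a duplicate ID that is easy to spot and dedupe.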
This phase feeds the self-learning review system. When bugs escape /harness:review and are caught by external reviewers (copilot, PR reviewers, QA, production incidents), those escapes become new adversarial questions that improve future reviews.
14.6. Detect review escapes — scan the conversation history and available context for bugs that were NOT caught by /harness:review but were found by:
Indicators of an escape:
- `/harness:review` passed, but issues were later found
- The `/harness:review` report shows all agents PASS, but issues were later found

14.7. Categorize each escape into the adversarial question bank taxonomy:
- `concurrency` — race conditions, thundering herds, lock contention
- `distributed` — double-execution, coordination failures, split-brain
- `failure-modes` — missing retries, cascade failures, no circuit breaker
- `resource-exhaustion` — memory leaks, unbounded growth, connection exhaustion
- `data-integrity` — partial writes, missing transactions, duplicate processing
- `security` — injection, auth bypass, data exposure
- `logic` — incorrect behavior, wrong edge case handling (code-level bugs the review agents should have caught)

14.8. Formulate "what breaks?" questions for each escape:
14.9. Update docs/REVIEW_GUIDANCE.md if it exists:
a. Append each escape to the Escape Log table:
```markdown
| {YYYY-MM-DD} | {one-line bug description} | {copilot|reviewer|QA|production|self} | {category} | {question text} |
```
b. Add the new question to the appropriate Adversarial Question Bank category:
- If the category exists, append the question under it
- If the category doesn't exist, create a new ### {Category} section
- If a similar question already exists, refine it to be more specific rather than adding a duplicate
c. If the escape was a logic category bug (something the existing review agents should have caught), also write a learning to docs/LEARNINGS.md with category review-escape noting which agent should have caught it and why it might have been missed (confidence threshold too high? wrong framing?)
14.10. If docs/REVIEW_GUIDANCE.md does NOT exist, skip this phase. In the final Report, set Review Escape Mining to: "Skipped — docs/REVIEW_GUIDANCE.md not found. Run /harness:init or /harness:review to enable adversarial review learning."
14.11. Report escape mining results (include in the final Report output):
14.12. Resolve the harness runtime directory:
```bash
HARNESS_DIR=$(bash ${CLAUDE_PLUGIN_ROOT}/scripts/harness-resolve-dir.sh --repo-root .)
```
14.13. If HARNESS_DIR is not empty:
- Invoke /harness:evolve as a command and follow its full process.
<MANDATORY>
You MUST invoke `/harness:evolve` (do not inline its behavior). Do NOT classify learnings or generate proposals inline — the evolve command has the persistence script integration, evolver agent dispatch, and auto-apply safety checks that prevent both under-evolution and over-evolution.
</MANDATORY>
14.14. If HARNESS_DIR is empty: skip silently. Repos without .harness/ work exactly as before — backward compatible.
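The branch in 14.13/14.14 is a plain empty-string guard. A sketch with a stand-in value (the real value comes from `harness-resolve-dir.sh`):

```shell
# Sketch: run evolve only when a .harness/ runtime directory resolved.
HARNESS_DIR=""   # stand-in; really the output of harness-resolve-dir.sh
if [ -n "$HARNESS_DIR" ]; then
  echo "invoke /harness:evolve against $HARNESS_DIR"
else
  echo "skip: no .harness/ runtime"
fi
```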
14.15. Include evolve results in the Report output (append after the Review Escape Mining section):
### Evolution
- {evolve report summary, or "Skipped — no .harness/ runtime"}
Ask user for their perspective:
The plan had {N} tasks. {M} completed, {D} deviated, {K} surprises logged.
What worked well? What didn't? Any learnings to capture?
Fill in the plan's Outcomes & Retrospective section with:
If `docs/design-docs/index.md` exists, check whether the diff supersedes any earlier design docs:
- Add a `Superseded by` column entry or marker to the older entry

17.5. Update frontmatter status on design docs touched by this work:
If the plan's `Design Doc:` header references a design doc:
- Flip `status` from `current` to `implemented`
- Add `implemented-by: {plan path}` to the frontmatter

Then scan `docs/design-docs/` for topic overlaps:
- Set the older doc's `status` to `superseded` and add `supersedes: {path to newer doc}`
- Docs with `status: implemented` or `status: superseded` are correctly archived — do NOT flag them as stale
- Docs with `status: current` that reference deleted code are genuinely stale

Update `docs/PLANS.md`:
- Mark the plan's entry as finished (archival itself happens in `/harness:complete`)

Update `docs/DESIGN.md` if any new patterns or key decisions were established.
Update docs/ARCHITECTURE.md if any new modules were created or boundaries changed.
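The frontmatter status flip in 17.5 can be sketched with a temp file (the path and frontmatter below are illustrative, not a real design doc):

```shell
# Sketch: flip a design doc's frontmatter status from current to implemented.
doc=$(mktemp)                                  # stand-in for docs/design-docs/foo.md
printf -- '---\nstatus: current\n---\n# Demo design doc\n' > "$doc"
sed -i 's/^status: current$/status: implemented/' "$doc"   # GNU sed -i; BSD sed needs -i ''
grep '^status:' "$doc"
```

Anchoring on the full line (`^...$`) avoids accidentally rewriting a `status:` mention in the doc body.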
Output:
```markdown
## Reflect Complete

**Scope:** {diff description, e.g., "3 commits, 12 files changed"}
**Docs loaded:** {list of Tier 2/3 files checked}

### Updates Made
- {list of doc files modified with one-line description of what changed}
- {or "No updates needed — docs are current"}

### Deleted-Code Cleanup
- {list of stale references removed, or "No deleted modules detected"}

### Plan Updates
- {surprises/drift added, or "No active plan" or "No drift detected"}

### Conversation Learnings
- Doc updates: {N}
- ADR candidates: {N}
- No-action: {N}

### Learnings Written
- {N} new learnings added to docs/LEARNINGS.md
- {list of learning IDs and one-line summaries}

### Review Escape Mining
- Escapes detected: {N}
- Questions added to REVIEW_GUIDANCE.md: {N}
- Categories affected: {list, or "None — no review escapes detected"}

### Evolution
- {evolve report summary, or "Skipped — no .harness/ runtime"}

### Retrospective
- Captured in plan's Outcomes & Retrospective section

## Next Step
Run `/harness:complete` to archive the plan and create the PR.
```
Update run-state (if .harness/ runtime exists):
```bash
HARNESS_DIR=$(bash ${CLAUDE_PLUGIN_ROOT}/scripts/harness-resolve-dir.sh --repo-root .)
[ -n "$HARNESS_DIR" ] && bash ${CLAUDE_PLUGIN_ROOT}/scripts/harness-update-state.sh \
  --harness-dir "$HARNESS_DIR" \
  --phase "reflect"
```