Analyzes completed Smith workflows (build, bugfix, audit) to extract lessons on successes and failures, updating Ledger with patterns, antipatterns, tool preferences, edge cases, and project quirks.
Install: npx claudepluginhub attckdigital/smith

This skill uses the workspace's default tool permissions.
Analyzes completed Smith workflows and extracts lessons into the project's Ledger.
Arguments: $ARGUMENTS
Throughout this action, log significant events to the vault session log. Read the session log path from .smith/vault/.current-session. If the file is missing or the vault is not initialized, skip all logging silently.
Append entries using this format:

```markdown
### [HH:MM:SS] /smith-reflect <event>

**User Request:**
> <verbatim user message that triggered this action>

**Synthesized Input:** <brief summary>
**Outcome:** <what happened>
**Artifacts:** <files created/modified>
**Systems affected:** <system IDs>
```
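As a non-authoritative sketch, the silent-skip logging rule above could look like this in Python (the `log_event` helper, its parameters, and the `vault_root` argument are illustrative, not part of the skill):

```python
from datetime import datetime
from pathlib import Path


def log_event(vault_root: str, event: str, user_request: str, outcome: str) -> bool:
    """Append a /smith-reflect event to the current session log.

    Returns False when the vault pointer file is missing, i.e. the
    vault is not initialized, so all logging is skipped silently.
    """
    pointer = Path(vault_root) / ".current-session"
    if not pointer.exists():
        return False  # vault not initialized: skip all logging silently
    session_log = Path(pointer.read_text().strip())
    if not session_log.exists():
        return False
    stamp = datetime.now().strftime("%H:%M:%S")
    # Remaining fields (Synthesized Input, Artifacts, ...) follow the same pattern.
    entry = (
        f"\n### [{stamp}] /smith-reflect {event}\n"
        f"**User Request:**\n> {user_request}\n"
        f"**Outcome:** {outcome}\n"
    )
    with session_log.open("a") as fh:
        fh.write(entry)
    return True
```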
Log at these points:
If .smith/vault/ledger/ is missing, create the full directory scaffold before proceeding.
Create these files with initial content:
patterns.md:

```markdown
# Ledger: Patterns
Approaches that worked well, extracted from completed workflows.
<!-- Entries are sorted: high confidence first, then medium, then low. Within same confidence, most recent first. -->
```

antipatterns.md:

```markdown
# Ledger: Antipatterns
Approaches that failed or caused problems, extracted from completed workflows.
<!-- Entries are sorted: high confidence first, then medium, then low. Within same confidence, most recent first. -->
```

tool-preferences.md:

```markdown
# Ledger: Tool Preferences
Which tools were effective in which context, extracted from completed workflows.
<!-- Entries are sorted: high confidence first, then medium, then low. Within same confidence, most recent first. -->
```

edge-cases.md:

```markdown
# Ledger: Edge Cases
Rare or unexpected scenarios encountered during workflows.
<!-- Entries are sorted: high confidence first, then medium, then low. Within same confidence, most recent first. -->
```

project-quirks.md:

```markdown
# Ledger: Project Quirks
Project-specific surprises and behaviors that affect workflow execution.
<!-- Entries are sorted: high confidence first, then medium, then low. Within same confidence, most recent first. -->
```
meta.yaml:

```yaml
last_updated: <today's date YYYY-MM-DD>
total_reflections: 0
entries:
  patterns: 0
  antipatterns: 0
  tool_preferences: 0
  edge_cases: 0
  project_quirks: 0
confidence_distribution:
  high: 0
  medium: 0
  low: 0
```
This lazy-creation guard ensures reflection works even if /smith init was never run.
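A minimal Python sketch of this lazy-creation guard, under the directory layout described above (the `ensure_ledger` name is illustrative):

```python
from datetime import date
from pathlib import Path

# Initial content for each Ledger file, as specified above.
LEDGER_FILES = {
    "patterns.md": ("Patterns", "Approaches that worked well, extracted from completed workflows."),
    "antipatterns.md": ("Antipatterns", "Approaches that failed or caused problems, extracted from completed workflows."),
    "tool-preferences.md": ("Tool Preferences", "Which tools were effective in which context, extracted from completed workflows."),
    "edge-cases.md": ("Edge Cases", "Rare or unexpected scenarios encountered during workflows."),
    "project-quirks.md": ("Project Quirks", "Project-specific surprises and behaviors that affect workflow execution."),
}

SORT_NOTE = ("<!-- Entries are sorted: high confidence first, then medium, then low. "
             "Within same confidence, most recent first. -->")


def ensure_ledger(root: str) -> bool:
    """Create the Ledger scaffold if missing. Returns True if it was created."""
    ledger = Path(root) / ".smith" / "vault" / "ledger"
    if ledger.is_dir():
        return False  # scaffold already present: nothing to do
    ledger.mkdir(parents=True)
    for name, (title, blurb) in LEDGER_FILES.items():
        (ledger / name).write_text(f"# Ledger: {title}\n{blurb}\n{SORT_NOTE}\n")
    (ledger / "meta.yaml").write_text(
        f"last_updated: {date.today().isoformat()}\n"
        "total_reflections: 0\n"
        "entries:\n"
        "  patterns: 0\n  antipatterns: 0\n  tool_preferences: 0\n"
        "  edge_cases: 0\n  project_quirks: 0\n"
        "confidence_distribution:\n  high: 0\n  medium: 0\n  low: 0\n"
    )
    return True
```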
Read .smith/config.json and check the ledger namespace:
- If ledger.enabled is not explicitly false, proceed (defaults to enabled)
- If ledger.auto_reflect is false, skip silently and log "Auto-reflect disabled, skipping"
- Use ledger.reflection_model to determine which model to use for semantic comparison (default: haiku)

Config schema (all fields optional):

```json
{
  "ledger": {
    "enabled": true,
    "auto_reflect": true,
    "reflection_model": "haiku",
    "pruning": {
      "low_max_age_days": 30,
      "medium_max_age_days": 90
    }
  }
}
```
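A hedged Python sketch of reading this config with the documented defaults (the `load_ledger_config` helper is illustrative):

```python
import json
from pathlib import Path

# Defaults from the config schema; every field is optional in config.json.
DEFAULTS = {
    "enabled": True,
    "auto_reflect": True,
    "reflection_model": "haiku",
    "pruning": {"low_max_age_days": 30, "medium_max_age_days": 90},
}


def load_ledger_config(config_path: str) -> dict:
    """Merge the ledger namespace of .smith/config.json over the defaults."""
    merged = json.loads(json.dumps(DEFAULTS))  # cheap deep copy
    path = Path(config_path)
    if path.exists():
        ledger = json.loads(path.read_text()).get("ledger", {})
        pruning = {**merged["pruning"], **ledger.pop("pruning", {})}
        merged.update(ledger)
        merged["pruning"] = pruning
    return merged
```

A missing config file simply yields the defaults, which matches the "defaults to enabled" behavior above.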
Determine which sessions to analyze based on arguments:
- /smith-reflect (no args) -- analyze the most recent completed workflow session
- /smith-reflect --last N -- analyze the last N sessions
- /smith-reflect --session <id> -- analyze a specific session by filename (e.g., 2026-04-08_151158)
- /smith-reflect --failure -- analyze only sessions that contain failed workflows

Finding sessions: List .smith/vault/sessions/ sorted by date descending. Each session file is named with a timestamp prefix (e.g., 2026-04-08_151158.md).
Detecting completed workflows: Grep session logs for invocation markers:
- /smith-build -- full build workflows
- /smith-bugfix -- bugfix workflows
- /smith-audit -- audit workflows
- /smith-implement -- implementation workflows

Only analyze sessions that contain at least one of these workflow markers. Sessions that are purely Q&A, planning, or specification work are skipped (no execution to learn from).
Detecting failures: Grep for ERROR, FAILED, failed, error, exception, traceback in session logs, OR look for workflows that have an invocation marker but no corresponding completion marker (e.g., /smith-build started but no "Build complete" or "PR merged" entry).
If no qualifying sessions are found, output "No completed workflow sessions found to analyze." and exit.
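The selection and detection steps above can be sketched in Python. This covers only the grep-style failure check; the "marker without a completion marker" heuristic is omitted for brevity (the `select_sessions` name is illustrative):

```python
import re
from pathlib import Path

WORKFLOW_MARKERS = ("/smith-build", "/smith-bugfix", "/smith-audit", "/smith-implement")
FAILURE_RE = re.compile(r"ERROR|FAILED|failed|error|exception|traceback")


def select_sessions(sessions_dir: str, last: int = 1, failures_only: bool = False) -> list:
    """Return up to `last` most recent session files containing a workflow marker."""
    # Timestamp-prefixed filenames sort lexicographically by date.
    files = sorted(Path(sessions_dir).glob("*.md"), reverse=True)
    selected = []
    for f in files:
        text = f.read_text()
        if not any(m in text for m in WORKFLOW_MARKERS):
            continue  # pure Q&A / planning session: nothing to learn from
        if failures_only and not FAILURE_RE.search(text):
            continue
        selected.append(f)
        if len(selected) == last:
            break
    return selected
```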
For each selected session:
Read the full session log from .smith/vault/sessions/<session-file>
Extract the execution trace by scanning for these signals:
Success indicators:
Failure/retry indicators:
Tool usage patterns:
Identify 0-5 candidate lessons per session. Not every session produces a lesson -- if everything went smoothly with no surprises, that is fine. Do not manufacture lessons.
A good candidate lesson meets at least one criterion:
Map each candidate lesson to the appropriate Ledger file:
| File | What goes here | Example |
|---|---|---|
| patterns.md | Approaches that worked well | "Running pnpm build before Docker rebuild catches TS errors earlier" |
| antipatterns.md | Approaches that failed or caused problems | "Editing migration files without checking existing data caused constraint violations" |
| tool-preferences.md | Effective tool usage in context | "Use Grep with output_mode: content and -C 3 for understanding error context, not just finding files" |
| edge-cases.md | Rare/unexpected scenarios | "Qdrant returns 400 if payload filter references a field that was never indexed" |
| project-quirks.md | Project-specific surprises | "Neo4j Cypher queries fail silently on missing properties -- always use COALESCE" |
Each lesson also gets a category tag:
- implementation -- code writing, architecture decisions
- testing -- test strategy, test failures, coverage gaps
- debugging -- error investigation, root cause analysis
- specification -- spec accuracy, plan-to-implementation drift
- audit -- review findings, compliance checks

For each candidate lesson:
Read the target Ledger file (e.g., patterns.md)
Parse all existing entries (split on --- separator between entries)
Compare the candidate against ALL existing entries using semantic judgment:
Decision:
If a duplicate is found -- Reinforce the existing entry:
- Increment its Source reflections count by 1
- Promote its Confidence if a threshold is crossed (low -> medium -> high)

If no duplicate is found -- Mark the candidate as a new entry (it will be written in Phase 5)
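A sketch of the split-on-`---` parsing and the reinforcement bookkeeping, assuming the `**Field:** value` entry format used throughout this document (helper names are illustrative; 2 = medium and 6 = high per the confidence thresholds stated later):

```python
import re


def parse_entries(ledger_text: str) -> list:
    """Split a Ledger file into entries on the --- separator, keyed by field name."""
    entries = []
    for block in ledger_text.split("\n---\n"):
        # Lines look like: **Title:** Some value
        fields = dict(re.findall(r"\*\*(.+?):\*\* (.+)", block))
        if "Title" in fields:  # skip the file header before the first separator
            entries.append(fields)
    return entries


def reinforce(entry: dict, today: str) -> dict:
    """Bump the reinforcement count; promote confidence at 2 (medium) and 6 (high)."""
    count = int(entry.get("Source reflections", "1")) + 1
    entry["Source reflections"] = str(count)
    if count >= 6:
        entry["Confidence"] = "high"
    elif count >= 2:
        entry["Confidence"] = "medium"
    entry["Date"] = today  # date of most recent reinforcement
    return entry
```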
For each lesson (new or reinforced), write to the appropriate Ledger file.
Append to the appropriate file, separated from previous entries by ---:

```markdown
---
**Title:** <concise descriptive name -- 5-10 words>
**Date:** <YYYY-MM-DD>
**Category:** <implementation | testing | debugging | specification | audit>
**Confidence:** low
**Source reflections:** 1
**Context:** <1-2 sentences describing when this pattern/antipattern applies -- what kind of task, what conditions, what service/layer>
**Pattern:** <2-4 sentences describing the actual approach that worked (for patterns) or the thing that failed and why (for antipatterns). Be specific enough to act on. Include the "instead, do X" for antipatterns.>
**Evidence:**
- <YYYY-MM-DD> -- session <session-filename>: <brief outcome description, 1 line>
**Related:** <links to related entries in other Ledger files (e.g., "See antipatterns.md: 'Title'"), or "None">
```
Update the existing entry in-place:
- Increment the Source reflections count
- Promote Confidence if a threshold is crossed (2 = medium, 6 = high)
- Set Date to today (the date of the most recent reinforcement)

After all writes are complete, re-sort entries within each modified file: high confidence first, then medium, then low; within the same confidence, most recent Date first.

Run automatic pruning on ALL Ledger files (not just the ones modified in this reflection):
Read pruning thresholds from .smith/config.json:
- ledger.pruning.low_max_age_days (default: 30)
- ledger.pruning.medium_max_age_days (default: 90)

For each entry in each Ledger file:
| Confidence | Age threshold | Action |
|---|---|---|
| low | Older than low_max_age_days | REMOVE the entry entirely |
| medium | Older than medium_max_age_days | DEMOTE to low, set Date to today (resets the clock) |
| high | N/A | NEVER touch, regardless of age |
"Age" is calculated from the entry's Date field (which is updated on reinforcement), not the original creation date.
Track pruning actions for the summary:
Update .smith/vault/ledger/meta.yaml with current counts:
```yaml
last_updated: <today's date YYYY-MM-DD>
total_reflections: <previous total + number of sessions analyzed in this run>
entries:
  patterns: <count entries in patterns.md>
  antipatterns: <count entries in antipatterns.md>
  tool_preferences: <count entries in tool-preferences.md>
  edge_cases: <count entries in edge-cases.md>
  project_quirks: <count entries in project-quirks.md>
confidence_distribution:
  high: <count across all files>
  medium: <count across all files>
  low: <count across all files>
```
Count entries by counting **Title:** lines in each file.
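A sketch of that counting rule, plus the confidence tally across files (helper names are illustrative):

```python
def count_entries(text: str) -> int:
    """Count entries in one Ledger file by counting **Title:** lines."""
    return sum(1 for line in text.splitlines() if line.startswith("**Title:**"))


def confidence_distribution(texts: list) -> dict:
    """Tally **Confidence:** levels across all Ledger file contents."""
    dist = {"high": 0, "medium": 0, "low": 0}
    for text in texts:
        for line in text.splitlines():
            if line.startswith("**Confidence:**"):
                level = line.split("**Confidence:**", 1)[1].strip()
                if level in dist:
                    dist[level] += 1
    return dist
```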
After updating meta.yaml, also update .smith/vault/ledger/.meta.json to track reconciliation trigger signals:
If .smith/vault/ledger/.meta.json does not exist, create it with defaults:

```json
{
  "schema_version": 1,
  "last_reconcile": null,
  "estimated_tokens": 0,
  "total_patterns": 0,
  "total_reinforcements": 0,
  "context_budget_violations": 0,
  "reinforcements_since_reconcile": 0,
  "lock": null
}
```
Then update these fields:
- estimated_tokens: compute word count × 1.3 across all Ledger .md files (patterns.md, antipatterns.md, tool-preferences.md, edge-cases.md, project-quirks.md)
- total_patterns: count of **Title:** lines across all Ledger .md files
- total_reinforcements: increment by the number of reinforcements performed in this reflection run
- reinforcements_since_reconcile: increment by the number of reinforcements performed in this reflection run
- Do not modify lock, last_reconcile, or context_budget_violations -- those fields are owned by other skills

Output a reflection summary to the user:
```
Reflection complete.
- Sessions analyzed: N
- New patterns: N (files: patterns.md, tool-preferences.md, ...)
- Reinforced: N existing entries
- Pruned: N stale entries
- Demoted: N entries (medium -> low)
- Ledger health: X total entries (H high, M medium, L low)
```
Log the full summary to the vault session log.
If zero lessons were extracted across all analyzed sessions, output:
```
Reflection complete.
- Sessions analyzed: N
- No new lessons extracted -- workflows executed cleanly.
- Pruned: N stale entries
- Ledger health: X total entries (H high, M medium, L low)
```
Other smith commands (/smith-build, /smith-bugfix, /smith-audit) may call /smith-reflect automatically at the end of their execution. When invoked automatically:
- Check ledger.auto_reflect in config -- if false, skip silently
- Use reflection_model (default: haiku) to keep cost and latency low