Reviews conversation history after substantial work sessions, extracts learnings (corrections, preferences, decisions), and routes them to auto-memory or Obsidian vault for consolidation.
npx claudepluginhub robinslange/learning-loop --plugin learning-loop
This skill uses the workspace's default tool permissions.
Structured checkpoint that extracts what was learned in this session and persists it to the right stores. This is how the learning loop closes: without this step, knowledge captured during the session rots.
This skill emits provenance events for pipeline observability. Run each Bash command silently.
At session start:
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"reflect","skill":"reflect","action":"session-start"}'
At session end:
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"reflect","skill":"reflect","action":"session-end","vault_notes":N,"auto_memories":N}'
Per-note tracking is handled automatically by the PostToolUse hook.
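Before emitting session-end, the two N placeholders must be replaced with the session's real counts. A minimal shell sketch of building the payload (the counts shown are illustrative):

```shell
# Build the session-end payload with the session's actual counts.
vault_notes=3
auto_memories=2
payload=$(printf '{"agent":"reflect","skill":"reflect","action":"session-end","vault_notes":%d,"auto_memories":%d}' \
  "$vault_notes" "$auto_memories")
echo "$payload"
# then: node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" "$payload"
```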
Work through these steps in order. Be concise throughout: the vault voice is Hemingway, not Tolstoy.
Silently review the conversation and identify anything worth keeping.
If the session was purely routine (config change, typo fix, quick lookup), say so and skip to Step 5. Not every session produces learnings.
Identify what was learned. Categories:
| Category | Example | Destination | Confidence |
|---|---|---|---|
| Correction received | "Don't mock the DB in these tests" | Auto-memory (feedback) | strong |
| Preference revealed | "I prefer X approach over Y" | Auto-memory (user/feedback) | strong |
| Decision made | "We chose Postgres over SQLite because..." | Obsidian vault | - |
| Problem solved | "The build failed because X, fixed by Y" | Obsidian vault | - |
| Pattern discovered | "This pagination pattern works across projects" | Obsidian vault | - |
| Domain insight | "Resto Druid HoT uptime benchmarks are..." | Obsidian vault | - |
| Project context | "Auth rewrite is driven by compliance, not tech debt" | Auto-memory (project) | medium |
| Cross-project connection | "Same caching problem exists in Kinso and Solenoid" | Obsidian vault + links | - |
| Implicit pattern | User always runs tests before committing (observed 3+ times, never stated) | Auto-memory (feedback) | weak |
List each learning as a single line.
Run a single retrieval call for all learnings identified in Step 2. Pass each learning summary as a query:
node PLUGIN/scripts/vault-search.mjs reflect-scan "learning 1 summary" "learning 2 summary" ... --top 5
Parse the JSON result. For each query:
- top_match_similarity > 0.90: likely duplicate. Read the existing note and update it instead of creating a new one.
- top_match_similarity 0.70-0.90: related note exists. Consider linking rather than duplicating.
- top_match_similarity < 0.70: no existing coverage. Create a new note.

Review confusable_pairs in the result. If any pairs are found, flag them for the user as potential MERGE or SHARPEN candidates in the Step 5 report.
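The three bands reduce to a simple routing rule. A sketch, using the threshold values above:

```shell
# Map a top_match_similarity score to the action the similarity bands prescribe.
route_by_similarity() {
  awk -v s="$1" 'BEGIN {
    if (s > 0.90)       print "update-existing"
    else if (s >= 0.70) print "link-related"
    else                print "create-new"
  }'
}
route_by_similarity 0.95   # update-existing
route_by_similarity 0.81   # link-related
route_by_similarity 0.42   # create-new
```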
If the episodic memory MCP tool is available (mcp__plugin_episodic-memory_episodic-memory__search), run one search for the session's primary topic/domain. Extract any relevant prior decisions or unresolved questions. If unavailable, skip silently.
Using the reflect-scan results from Step 2.5:
- If top_match_similarity > 0.90, read the matched note. If the existing note already captures the insight, skip creating a new one.

For auto-memory items, set confidence in frontmatter based on signal strength:

- strong: user explicitly stated the preference or correction ("I always want...", "Don't ever...", "No, do it this way")
- medium: user corrected your output (changed X to Y, rejected an approach) or provided project context
- weak: pattern inferred from repeated behavior (observed 3+ times but never explicitly stated by user)

When unsure, default to medium throughout the system.

For Obsidian vault items:
- Write each new note to {{VAULT}}/0-inbox/ using the Write tool.
- Link it into the matching 4-projects/ index if one exists.
- Track every new note's absolute path in ${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect-new-notes.txt; use the same env-keyed expansion in every block that touches it so parallel /reflect invocations don't race.

# Initialize at the start of Step 4 (truncates any stale file from a prior reflect in this session):
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect"
: > "${LL_TMP_PREFIX}-new-notes.txt"
# After each vault Write:
echo "<absolute-path-to-just-written-note>" >> "${LL_TMP_PREFIX}-new-notes.txt"
Subagent Write/Edit tool calls bypass PostToolUse hooks. Notes written earlier in this session by note-writer, discovery-researcher, literature-capturer, or any other subagent may have missed post-write-autolink.js and post-write-edge-infer.js entirely, ending up without suggested backlinks or typed edges.
Replay the hook chain on any vault notes missing structural backlinks. Idempotent: safe to run on already-hooked notes.
# Resolve vault path from config. The ll-search shim (~/.local/bin/ll-search,
# installed by /init or the SessionStart hook) handles binary location and ORT
# env vars itself.
PLUGIN_DATA="${CLAUDE_PLUGIN_DATA:-$(node "${CLAUDE_PLUGIN_ROOT}/scripts/resolve-paths.mjs" PLUGIN_DATA)}"
LL_VAULT="$(node -e "const c=JSON.parse(require('fs').readFileSync(process.argv[1]+'/config.json','utf-8'));console.log(c.vault_path.replace(/^~/,require('os').homedir()))" "$PLUGIN_DATA")"
# Ensure new notes are indexed before the sweep + any downstream similarity queries.
# Incremental by default; only embeds notes that are new or mtime-changed.
ll-search index "$LL_VAULT" "$LL_VAULT/.vault-search/vault-index.db" 2>&1 | tail -1
SWEEP_CANDIDATES="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-sweep-candidates.txt"
# Detect unlinked candidates (exclude 4-projects: free-form indexes)
LL_VAULT="$LL_VAULT" python3 - <<'PY' > "$SWEEP_CANDIDATES"
import os, re
root = os.environ["LL_VAULT"]
for d in ["0-inbox", "1-fleeting", "2-literature", "3-permanent", "5-maps"]:
    for dirpath, _, files in os.walk(os.path.join(root, d)):
        for f in files:
            if not f.endswith(".md"): continue
            p = os.path.join(dirpath, f)
            try:
                body = open(p).read()
                body = re.sub(r"^---\n.*?\n---\n", "", body, count=1, flags=re.DOTALL)
                if not re.search(r"\[\[[^\]]+\]\]", body):
                    print(p)
            except: pass
PY
if [ -s "$SWEEP_CANDIDATES" ]; then
  node "${CLAUDE_PLUGIN_ROOT}/scripts/sweep-hook-replay.mjs" --stdin < "$SWEEP_CANDIDATES"
fi
rm -f "$SWEEP_CANDIDATES"
Expected output is a JSON summary {processed, ok, failed, failures}. Report failures in Step 5 if any. Typical cost: <1s per file, usually 0–5 candidates per session.
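Pulling the failed count out of that one-line summary needs no extra tooling. A sketch (the summary literal is illustrative):

```shell
# Extract "failed" from the sweep's JSON summary line.
summary='{"processed":4,"ok":4,"failed":0,"failures":[]}'
failed=$(printf '%s' "$summary" | sed -n 's/.*"failed":\([0-9][0-9]*\).*/\1/p')
if [ "$failed" -gt 0 ]; then
  echo "report failures in Step 5"
else
  echo "sweep clean"
fi
```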
After writing new vault captures, scan each new note's body for intention patterns:
If an intention pattern is found, extract to frontmatter:
intentions:
  - "<extracted project/topic>: <the full intention sentence>"
status: intentioned
This ensures new notes with intentions appear in the next session's intention summary. Claude can drill into specific contexts on-demand.
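The pattern scan itself can be a simple grep over each note body. A sketch (the patterns shown are illustrative, not the skill's actual list):

```shell
# Print lines in a note body that look like stated intentions.
scan_intentions() {
  grep -iE 'i (want|plan|intend) to|next i will' || true
}
printf 'A claim about caching.\nI plan to benchmark the cache layer next.\n' | scan_intentions
```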
When a new vault note touches a claim already in the vault, the existing claim should be refined to incorporate the new evidence. This step finds those pairs, asks the refinement-proposer agent to draft edits, validates them, presents the batch for confirmation, and applies via Write. Contradictions route to inline counter-argument linking instead of editing the upstream body.
Skip this entire step if the reflect new-notes file (${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect-new-notes.txt) does not exist or is empty (the session wrote no vault notes).
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect"
node "${CLAUDE_PLUGIN_ROOT}/scripts/refinement-candidates.mjs" --stdin --pairs-out "${LL_TMP_PREFIX}-refinement-pairs.json" < "${LL_TMP_PREFIX}-new-notes.txt" > /dev/null
If the resulting refinement-pairs.json is [], report Refinement: 0 candidates in band in Step 5 and skip the rest of 4.6.
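The emptiness check can be sketched as follows (paths as defined above; the written literal simulates an empty candidate set):

```shell
# Skip the rest of 4.6 when the candidate set is empty.
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect"
pairs="${LL_TMP_PREFIX}-refinement-pairs.json"
printf '[]' > "$pairs"    # illustrative: an empty candidate set
if [ ! -s "$pairs" ] || [ "$(cat "$pairs")" = "[]" ]; then
  echo "Refinement: 0 candidates in band"
fi
```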
Spawn the refinement-proposer agent with subagent_type: "learning-loop:refinement-proposer" and the prompt below. The pairs_file placeholder must be substituted with the resolved literal path (${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect-refinement-pairs.json after expansion):
Read the agent definition at PLUGIN/agents/refinement-proposer.md and follow it exactly.
pairs_file: <resolved-pairs-path>
vault_path: {{VAULT}}/
Return the JSON response only, no commentary, no markdown fences.
Capture the agent's stdout response. Write it to ${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect-refinement-agent-output.json (resolve before writing).
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect"
node "${CLAUDE_PLUGIN_ROOT}/scripts/refinement-validate.mjs" "${LL_TMP_PREFIX}-refinement-agent-output.json" "${LL_TMP_PREFIX}-refinement-pairs.json" > "${LL_TMP_PREFIX}-refinement-validated.json"
The validator strips em-dashes, computes sentence delta, and tags each decision with status ok, oversized_warning, or auto_rejected. The cleaned proposed bodies replace the agent's originals.
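The Δ% figure reported per decision can be thought of as percent change in body size. An illustrative sketch, not refinement-validate.mjs's actual formula:

```shell
# Percent change in character count between old and proposed bodies.
body_delta_pct() {
  old=${#1}
  new=${#2}
  awk -v o="$old" -v n="$new" 'BEGIN { d = n - o; if (d < 0) d = -d; printf "%d\n", d * 100 / o }'
}
body_delta_pct "short body." "short body. plus a sentence."   # prints 154
```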
Read the validated JSON at ${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect-refinement-validated.json. Build a preview-format table from the decisions array:
## Refinement Proposals (N total)
### Edits ({edit_ok} ok, {edit_oversized} oversized warnings, {edit_auto_rejected} auto-rejected)
| # | upstream | type | Δ% | summary |
|---|----------|------|----|---------|
| 1 | websocket-has-no-built-in-reconnection | extends | 12% | Added Vercel/CF/AWS proxy timeout numbers |
| 2 | (warn) digital-signatures-prove-authorship | qualifies | 28% | Added challenge-response gap discussion |
### Counterpoints ({counterpoint_ok})
| # | upstream | reason |
|---|----------|--------|
| 3 | concept-creep-and-diagnostic-bracket-creep | new note disputes the bracket-vs-vertical distinction |
### Auto-rejected ({edit_auto_rejected})
| # | upstream | Δ% | reason |
|---|----------|----|--------|
| 4 | ... | 73% | exceeded 50% body change ceiling |
**Actions**: type `apply all` to apply every ok + oversized item, `apply ok` to apply only `ok` items, `apply N M` for specific IDs, `diff N` to print the unified diff for one item, or `none` to cancel.
Use AskUserQuestion for the action selection.
If the user types diff N, print the unified diff between the upstream's current body and the validated proposed_body for decision N, then re-prompt.
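Rendering that diff needs nothing beyond the standard diff tool. A sketch with illustrative file contents:

```shell
# Unified diff between the upstream's current body and the validated proposed body.
current=$(mktemp)
proposed=$(mktemp)
printf 'old claim\nshared context\n' > "$current"
printf 'refined claim\nshared context\n' > "$proposed"
diff -u "$current" "$proposed" || true   # diff exits 1 when files differ
rm -f "$current" "$proposed"
```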
For each decision in the approved set:
- For edit decisions: write proposed_body to upstream_path using the Write tool. The post-write hook chain re-fires (autolink, edge-infer, provenance).
- For counterpoint decisions: append new_note_link_text to the new note's body via Edit, and append upstream_link_text to the upstream's body via Edit. Do NOT modify the upstream's claim. Both edits should append to the body, not modify existing lines. Skip if a link with the same target already exists in either file.

For each applied refinement:
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"refinement-proposer","skill":"reflect","action":"refinement-applied","target":"<upstream-path>","new_note":"<new-note-path>","subtype":"<edit_subtype>","cosine":<cosine>}'
For counterpoints emit action: "counterpoint-linked". For auto-rejected emit action: "refinement-rejected" with reason: "oversized".
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-reflect"
rm -f "${LL_TMP_PREFIX}-new-notes.txt" "${LL_TMP_PREFIX}-refinement-pairs.json" "${LL_TMP_PREFIX}-refinement-agent-output.json" "${LL_TMP_PREFIX}-refinement-validated.json"
Report counts in Step 5: Refinement: N edits applied, M counterpoints linked, K passed, J auto-rejected.
Output a brief summary:
Reflected on [domain/project] session.
Captured: [N items] → [where they went]
Connections: [any cross-project links made]
Merge/Sharpen candidates: [any confusable_pairs flagged, or "none"]
Keep it to 2-4 lines. The user can see the diffs if they want details.
Write a timestamp so the Stop hook knows reflection already happened:
node -e "require('fs').writeFileSync(require('path').join(require('os').tmpdir(), 'learning-loop-last-reflect'), Math.floor(Date.now()/1000).toString())"
Run this via the Bash tool at the end of every /reflect invocation.
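On the other side, the Stop hook can compare that timestamp to now. A sketch, assuming the same tmpdir resolution; the 600-second threshold is illustrative:

```shell
# Check whether a reflection already happened recently.
marker="${TMPDIR:-/tmp}/learning-loop-last-reflect"
date +%s > "$marker"                      # what the /reflect write amounts to
now=$(date +%s)
last=$(cat "$marker" 2>/dev/null || echo 0)
if [ $((now - last)) -lt 600 ]; then
  echo "reflection recent; skip the reflect prompt"
fi
```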
None. All retrieval is handled by the reflect-scan binary command in the main thread.
Never write vault notes outside 0-inbox/ without permission.