Assesses vault note quality for structure, depth, sourcing, linking, and atomicity, and verifies claims against cited sources, including URL validity. Detects duplicates and cross-note issues; produces scores and a fix plan for /deepen. Usage: /verify [note|inbox|permanent|topic].
npx claudepluginhub robinslange/learning-loop --plugin learning-loop

This skill uses the workspace's default tool permissions.
Assesses vault notes on two dimensions: structural quality (depth, sourcing, linking, voice, atomicity) and source truthfulness (are URLs real, do claims match citations). Produces a combined report and fix plan that feeds into `/deepen`.
- `/verify "note-name"`: single note
- `/verify inbox`: everything in `0-inbox/`
- `/verify permanent`: everything in `3-permanent/`
- `/verify "topic"`: all notes matching a topic
- `/verify`: defaults to `0-inbox/`

This skill emits provenance events for pipeline observability.
At session start (after scope identified):
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"session-start","intent":"SCOPE","config":{"note_count":N}}'
After scoring and verification, emit each finding via provenance-emit.js:
For each note with issues, run:
node "PLUGIN/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"score","target":"note-filename.md","result":"fail","finding_type":"overclaim","finding_detail":"single RCT stated as consensus","trigger":"verify-manual","confidence":"clear","ambiguous_alt":""}'
Where:
- `finding_type`: one of url-fabrication, author-swap, number-reassignment, overclaim, source-missing, stale, logical-gap, conflation
- `trigger`: verify-auto (URL/source check), verify-manual (human review), cross-note (pattern across notes), retrieval (found during search), stale-scan (date check)
- `confidence`: clear (obvious classification) or ambiguous (could be another type)
- `ambiguous_alt`: the alternative type when `confidence` is ambiguous; empty string when clear

For quality scores, emit one event per note:
node "PLUGIN/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"score","target":"note-filename.md","tier":"deep","gate":"6/6","claim_specificity":2,"source_grounded":2}'
A note with no finding events is a pass.
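The field constraints above can be sketched as a small validation helper. This is illustrative only, not part of the plugin; the function name and shape are assumptions:

```javascript
// Illustrative sketch of the finding-event schema described above.
const FINDING_TYPES = [
  "url-fabrication", "author-swap", "number-reassignment", "overclaim",
  "source-missing", "stale", "logical-gap", "conflation",
];
const TRIGGERS = ["verify-auto", "verify-manual", "cross-note", "retrieval", "stale-scan"];

function validateFinding(event) {
  if (!FINDING_TYPES.includes(event.finding_type)) return false;
  if (!TRIGGERS.includes(event.trigger)) return false;
  if (!["clear", "ambiguous"].includes(event.confidence)) return false;
  // ambiguous_alt is only filled when the classification is ambiguous
  if (event.confidence === "clear" && event.ambiguous_alt !== "") return false;
  if (event.confidence === "ambiguous" && !FINDING_TYPES.includes(event.ambiguous_alt)) return false;
  return true;
}
```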
Then emit session-end:
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"session-end","notes_checked":N,"notes_flagged":N,"findings_total":N,"fixes_applied":N}'
No argument (/verify):
Use AskUserQuestion:
What would you like to verify?
- inbox: check all inbox notes (default)
- permanent: check all permanent notes
- "note-name": check a specific note
- "topic": check all notes matching a topic
Argument provided: Proceed immediately.
| Input | Scope |
|---|---|
| `/verify "note-name"` | Single note |
| `/verify inbox` | Everything in `0-inbox/` |
| `/verify permanent` | Everything in `3-permanent/` |
| `/verify "topic"` | All notes matching the topic across the vault |
| `/verify` | Defaults to `0-inbox/` |
- Single note: Glob for `**/<note-name>*.md` in `{{VAULT}}/`, then Read it.
- Folder scope (inbox/permanent): Glob for `*.md` in the target folder.
- Topic: Grep with path `"{{VAULT}}/"` and pattern `"<topic>"`, plus Glob for filenames, plus `node PLUGIN/scripts/vault-search.mjs search "<topic>" --rerank` for semantic matches. Deduplicate results.

Read each note.
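The deduplication step is simple set logic: Grep, Glob, and the semantic search can return overlapping paths, so merge them before reading. A minimal sketch (the function name is an assumption, not plugin code):

```javascript
// Merge path lists from multiple search tools, keeping first-seen order
// and dropping duplicates.
function dedupePaths(...sources) {
  const seen = new Set();
  const merged = [];
  for (const list of sources) {
    for (const p of list) {
      if (!seen.has(p)) {
        seen.add(p);
        merged.push(p);
      }
    }
  }
  return merged;
}
```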
Spawn note-scorer agent(s) to assess the gathered notes. When spawning multiple agents, dispatch them all in the same turn (a single message with multiple Agent tool calls):
- One note-scorer agent with all file paths (small scopes).
- One note-scorer agent per batch, dispatched in the same turn (larger scopes). Haiku handles 50 notes per batch; the bottleneck is Read calls, not reasoning.

Each agent reads its own notes, applies promote-gate assessment (6-criterion pass/fail plus scoring-mode dimensions), and returns per-note gate results (N/6 pass count, claim_specificity 0-2, source_grounded 0-2) with a maturity tier (shallow/medium/deep).
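The batch split itself is mechanical; a minimal sketch assuming the 50-note batch size mentioned above (illustrative, not the plugin's implementation):

```javascript
// Split note paths into fixed-size batches, one per note-scorer agent.
function batchNotes(paths, batchSize = 50) {
  const batches = [];
  for (let i = 0; i < paths.length; i += batchSize) {
    batches.push(paths.slice(i, i + batchSize));
  }
  return batches;
}
```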
Wait for all scoring agents to complete before proceeding.
After all scorer agents return, parse their results and emit one provenance event per note via provenance-emit.js. Run all emit calls in a single Bash command (chained with &&) to avoid excessive tool calls:
node "PLUGIN/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"score","target":"note-1.md","tier":"deep","gate":"6/6","claim_specificity":2,"source_grounded":2}' && \
node "PLUGIN/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"score","target":"note-2.md","tier":"shallow","gate":"2/6","claim_specificity":0,"source_grounded":0}'
This closes the subagent provenance gap -- scorer agents return text results, the main thread emits them to the provenance system.
Check for cross-note issues using Smart Connections embeddings:
node PLUGIN/scripts/vault-search.mjs similar "<note-path>" --top 5

Near-duplicates (similarity > 0.85):
Contradictions (similarity > 0.7, conflicting claims):
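The two thresholds above translate directly into a triage function; a minimal sketch (the labels are assumptions for illustration):

```javascript
// Classify a pair of notes by embedding similarity:
// > 0.85 -> near-duplicate (merge candidate)
// > 0.70 -> possible contradiction (needs claim comparison)
function classifyPair(similarity) {
  if (similarity > 0.85) return "near-duplicate";
  if (similarity > 0.7) return "possible-contradiction";
  return "ok";
}
```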
Filter notes to those with sources/citations. Skip sourceless notes (report as "no sources: skipped").
Spawn note-verifier agent(s). When spawning multiple, dispatch them all in the same turn:
Each agent receives the note content and returns the structured verification report (source checks, claim checks, missing citations, corrections).
Merge outputs from all agents into a single report:
## Verify: [scope]
### Summary
- N notes assessed
- Quality: Deep N | Medium N | Shallow N
- Sources: Pass N | Issues N | Skipped (no sources) N
### Quality Scores
| Note | Tier | Gate | Specificity | Grounded | Issues |
|------|------|------|-------------|----------|--------|
| [[note]] | shallow | 2/6 (missing: sourcing, voice, source integrity, depth) | 0 | 0 | no sources, topic-as-title |
### Consistency
- [[note-A]] ↔ [[note-B]] (0.91 similarity): near-duplicate, merge candidate
- [[note-C]] ↔ [[note-D]] (0.78 similarity): potential contradiction: [specific conflict]
### Source Issues
#### [[note-name]]: N issues
| Type | Detail | Severity |
|------|--------|----------|
| Dead URL | [url] returns 404 | high |
| Unsupported claim | "[claim]": source actually says [what] | high |
| Missing citation | "[claim]" has no source | medium |
### Clean
- [[note-name]]: all sources verified
Notes with wrong_author or fabricated sources should be flagged in the top section regardless of quality score: a well-written note with fabricated sources is worse than a thin note with real ones.
Prioritize notes by combined quality + source issues:
## Fix Plan (prioritized)
1. [[note-name]]: fabricated source + shallow → `/deepen "note-name"`
2. [[note-name]]: 2 dead URLs → `/deepen "note-name"`
3. [[note-name]]: shallow, no links → `/deepen "note-name"`
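The ordering in the fix plan above can be sketched as a severity sort: source-integrity problems outrank dead URLs, which outrank quality-only issues. The issue labels and weights here are illustrative assumptions, not part of the skill:

```javascript
// Order notes for the fix plan by their worst issue.
const SEVERITY = { "fabricated-source": 3, "dead-url": 2, "shallow": 1 };

function prioritize(notes) {
  const worst = (n) => Math.max(0, ...n.issues.map((i) => SEVERITY[i] ?? 0));
  return [...notes].sort((a, b) => worst(b) - worst(a));
}
```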
For each note needing work, suggest the right tool:
| Issue | Recommendation |
|---|---|
| Thin, needs research | /deepen "note-name" |
| Missing sources | /literature to capture, then link |
| Covers multiple ideas | Split (manual or via /deepen) |
| Wrong folder for maturity | Promote or demote |
| Duplicate of another note | Merge candidate: flag for user |
Group scored notes by filename prefix to find coherent knowledge clusters ready for batch promotion.
Detection method: group by filename prefix (e.g. `zustand-*`, `gemini-*`, `cbc-*`, `apollo-*`, `concept-creep-*`).

Present as:
### Promotion Clusters Detected
| Cluster | Notes | Deep | % | Action |
|---------|-------|------|---|--------|
| zustand | 12 | 12 | 100% | promote all? |
| gemini | 6 | 6 | 100% | promote all? |
| cbc | 4 | 3 | 75% | review 1 outlier |
On approval, mv all qualifying files from the main thread (not subagents) so the PostToolUse hook captures the promotions. Log a batch provenance event:
node "${CLAUDE_PLUGIN_ROOT}/scripts/provenance-emit.js" '{"agent":"verify","skill":"verify","action":"batch-promote","cluster":"CLUSTER_NAME","count":N,"from":"1-fleeting","to":"3-permanent"}'
Clusters below the 80% threshold are reported but not offered for batch promotion. Individual deep notes within those clusters can still be promoted in the normal batch actions step.
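The cluster detection and 80% threshold described above can be sketched as follows. This is an assumption-laden illustration: the prefix heuristic (first hyphen-delimited token) is naive and would mis-group multi-word prefixes like `concept-creep-*`:

```javascript
// Group scored notes by filename prefix and flag clusters where at
// least `threshold` of the members scored deep.
function findPromotableClusters(notes, threshold = 0.8) {
  const clusters = new Map();
  for (const n of notes) {
    // Naive prefix heuristic; a real implementation needs something
    // smarter for multi-word prefixes like "concept-creep-*".
    const prefix = n.file.split("-")[0];
    if (!clusters.has(prefix)) clusters.set(prefix, []);
    clusters.get(prefix).push(n);
  }
  const out = [];
  for (const [cluster, members] of clusters) {
    const deep = members.filter((n) => n.tier === "deep").length;
    out.push({
      cluster,
      notes: members.length,
      deep,
      promotable: deep / members.length >= threshold,
    });
  }
  return out;
}
```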
Quick actions available:
- Promote N deep notes to 3-permanent/ (say "promote all")
- Fix N notes with source issues (say "fix all" or pick specific notes)
- Review N consistency issues (contradictions/duplicates)
- Flag N shallow notes for /deepen queue
Execute promotions freely. Merges and deletions require user approval.
If user approves fixing:
Run `/deepen` sequentially on each flagged note.

The note-scorer agent scores a batch of vault notes using promote-gate scoring mode.
Launch pattern:
Agent (subagent_type: "learning-loop:note-scorer"):
"Score these notes: <file-path-1>, <file-path-2>, ...
Return per-note: dimension scores + maturity tier (shallow/medium/deep) + specific issues found."
Batching rules:
The note-verifier agent verifies source URLs, checks claims against cited sources, and catches fabrication.
Launch pattern:
Agent (subagent_type: "learning-loop:note-verifier"):
"Verify these notes:
Note 1: <path>
<content>
Note 2: <path>
<content>
Return per-note: source checks, claim checks, missing citations, corrections."
Batching rules:
Notes processed by the updated note-writer may contain inline markers from write-time verification:
- `[unresolved]` -- source not found in any database. Verify manually: search the web, check if it's a non-academic source.
- `[unverified]` -- source found but metadata mismatch persisted. Run verify-note on the note and inspect the specific issue.
- `[not in abstract]` -- number not confirmable from abstract alone. Fetch the full text if accessible, or check the number against the source page via web fetch.

When reporting, include marker counts in the summary. These markers indicate the write-time check already ran. Focus verification effort on resolving the markers rather than re-checking what already passed.
Fixing flagged notes is /deepen's job.
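The write-time markers described above can be tallied for the summary with a short scan; a minimal sketch, not the plugin's actual code:

```javascript
// Count occurrences of each write-time verification marker in note text.
function countMarkers(text) {
  const markers = ["[unresolved]", "[unverified]", "[not in abstract]"];
  const counts = {};
  for (const m of markers) {
    counts[m] = text.split(m).length - 1;
  }
  return counts;
}
```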