From learning-loop
Imports external context (Linear tickets, repo scans, or arbitrary content) into a second brain. Handles PDFs, images, code, conversations, docs, and raw text.
npx claudepluginhub robinslange/learning-loop --plugin learning-loop

This skill uses the workspace's default tool permissions.
Pulls data from external sources (Linear, repositories, or any content Claude can read), extracts atomic insights, previews them for confirmation, then routes to auto-memory and/or vault notes. The context mode accepts anything: PDFs, images, code files, conversation dumps, documents, or plain text.
- /ingest linear: pull my assigned Linear tickets
- /ingest linear "Project Name": pull tickets from a specific project
- /ingest linear --state "In Progress": filter by ticket state
- /ingest repo ~/path/to/repo: scan a repository
- /ingest repo: prompt for repo path
- /ingest context: provide any content (paste text, give a file path, drop an image)
- /ingest: ask which source type
- --refine: append to any source mode (e.g., /ingest context --refine) to enable Step 5.6 upstream refinement after ingest. Off by default; will move to default-on after a few validation runs.

Parse the source type from the first argument.
No argument (/ingest):
Use AskUserQuestion:
What would you like to ingest?
- linear: Pull Linear tickets (my assigned, or a specific project)
- repo: Scan a repository for architecture and patterns
- context: Provide any content (text, PDF, image, code, doc) to extract insights from
Source type provided: Parse remaining args as source-specific parameters.
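The dispatch above can be sketched in code. This is a minimal illustration only; the actual skill parses arguments in-prompt, and the function name is hypothetical:

```python
def parse_ingest_args(args):
    """Split an /ingest invocation into (source, params, refine).

    Illustrative sketch: the real skill interprets arguments in-prompt,
    not via code. Unknown sources fall through with empty params.
    """
    refine = "--refine" in args
    args = [a for a in args if a != "--refine"]
    if not args:
        return None, {}, refine  # no source given: ask via AskUserQuestion
    source, rest = args[0], list(args[1:])
    params = {}
    if source == "linear" and "--state" in rest:
        i = rest.index("--state")
        params["state"] = rest[i + 1]
        del rest[i:i + 2]
    if source == "linear" and rest:
        params["project"] = rest[0]  # e.g. /ingest linear "Project Name"
    if source == "repo" and rest:
        params["repo_path"] = rest[0]
    return source, params, refine
```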
Linear:
- --state "X" → state filter

Repo:
- AskUserQuestion: "Which repository? (full path)"

Context:
- AskUserQuestion: "What would you like to ingest? You can paste text, provide a file path (PDF, image, code, doc), or describe what you'd like to import."

Spawn the appropriate agent in the foreground:
Linear: Spawn a general-purpose agent with prompt:
Read the agent definition at PLUGIN/agents/ingest-linear.md and follow it exactly.
Scope: {scope}
State filter: {state_filter or "none"}
Repo: Spawn a general-purpose agent with prompt:
Read the agent definition at PLUGIN/agents/ingest-repo.md and follow it exactly.
Repo path: {repo_path}
Context: Spawn a general-purpose agent with prompt:
Read the agent definition at PLUGIN/agents/ingest-context.md and follow it exactly.
Source label: {source_label or "pasted text"}
Text:
{pasted_text}
Take the insights JSON returned by the agent.
Read PLUGIN/agents/_skills/preview-format.md and format the preview accordingly.
Display the preview to the user. Wait for confirmation via AskUserQuestion:
Type numbers to exclude (e.g., "drop vault 2, 4"), or "all" to confirm everything, or "none" to cancel.
Parse the user's response:
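The exclusion syntax could be interpreted along these lines. A hedged sketch only: the actual skill parses the reply in-prompt, and the function and data shapes here are illustrative:

```python
import re

def parse_confirmation(reply, insights):
    """Interpret a preview reply: "all", "none", or e.g. "drop vault 2, 4".

    `insights` maps a destination name ("vault", "memory") to a list of
    insight entries. Returns the confirmed subset, or None on cancel.
    Illustrative sketch, not the plugin's actual parser.
    """
    reply = reply.strip().lower()
    if reply == "none":
        return None
    if reply == "all":
        return insights
    confirmed = {dest: list(items) for dest, items in insights.items()}
    # Each clause names a destination plus 1-based indices: "drop vault 2, 4".
    for dest, nums in re.findall(r"drop\s+(\w+)\s+([\d,\s]+)", reply):
        to_drop = {int(n) for n in re.findall(r"\d+", nums)}
        confirmed[dest] = [
            item for i, item in enumerate(confirmed.get(dest, []), 1)
            if i not in to_drop
        ]
    return confirmed
```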
Determine the project name:
- AskUserQuestion if not obvious

Spawn a general-purpose agent with prompt:
Read the agent skill at PLUGIN/agents/_skills/route-output.md and follow it exactly.
Project name: {project_name}
Vault path: {{VAULT}}/
Memory path: {memory_path}
Confirmed insights:
{confirmed_insights_json}
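For orientation, a hypothetical shape for {confirmed_insights_json}. Every field name here is illustrative; the real schema is defined by the ingest agents, not shown in this document:

```python
import json

# Hypothetical payload shape; field names are assumptions, not the plugin's schema.
confirmed_insights = {
    "source": "linear",
    "insights": [
        {
            "destination": "vault",  # "vault" or "memory"
            "title": "Retry budget pattern",
            "body": "Services cap retries per request, not per call.",
            "provenance": "pasted text",
        }
    ],
}
print(json.dumps(confirmed_insights, indent=2))
```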
The routing agent in Step 5 is a subagent. Its Write/Edit tool calls bypass PostToolUse hooks, so notes it creates skip post-write-autolink.js and post-write-edge-infer.js and end up without suggested backlinks or typed edges.
Replay the hook chain on any vault notes missing structural backlinks. Idempotent: safe on already-hooked notes.
# Resolve vault path from config. The ll-search shim (~/.local/bin/ll-search,
# installed by /init or the SessionStart hook) handles binary location and ORT
# env vars itself.
PLUGIN_DATA="${CLAUDE_PLUGIN_DATA:-$(node "${CLAUDE_PLUGIN_ROOT}/scripts/resolve-paths.mjs" PLUGIN_DATA)}"
LL_VAULT="$(node -e "const c=JSON.parse(require('fs').readFileSync(process.argv[1]+'/config.json','utf-8'));console.log(c.vault_path.replace(/^~/,require('os').homedir()))" "$PLUGIN_DATA")"
# Ensure new notes are indexed before the sweep + any downstream similarity queries.
ll-search index "$LL_VAULT" "$LL_VAULT/.vault-search/vault-index.db" 2>&1 | tail -1
SWEEP_CANDIDATES="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-sweep-candidates.txt"
LL_VAULT="$LL_VAULT" python3 - <<'PY' > "$SWEEP_CANDIDATES"
import os, re

root = os.environ["LL_VAULT"]
for d in ["0-inbox", "1-fleeting", "2-literature", "3-permanent", "5-maps"]:
    for dirpath, _, files in os.walk(os.path.join(root, d)):
        for f in files:
            if not f.endswith(".md"):
                continue
            p = os.path.join(dirpath, f)
            try:
                body = open(p).read()
                # Strip YAML frontmatter before looking for wiki-links.
                body = re.sub(r"^---\n.*?\n---\n", "", body, count=1, flags=re.DOTALL)
                if not re.search(r"\[\[[^\]]+\]\]", body):
                    print(p)
            except Exception:
                pass
PY
if [ -s "$SWEEP_CANDIDATES" ]; then
node "${CLAUDE_PLUGIN_ROOT}/scripts/sweep-hook-replay.mjs" --stdin < "$SWEEP_CANDIDATES"
fi
rm -f "$SWEEP_CANDIDATES"
Report any failures in Step 6. Typical cost: <1s per file, usually 0–5 candidates per batch (ingest typically produces few subagent-written notes that the routing step hasn't already linked via its prompt).
Behind a flag for the first ship. Skip this step entirely unless the user invoked /ingest with --refine in the args. Default off because ingest batches can produce many candidates and we want cost visibility before promoting to default-on.
When the routing subagent in Step 5 writes new vault notes, those notes may sharpen, qualify, or extend existing claims. This step finds those pairs, dispatches the refinement-proposer agent, validates the output, and applies edits via Write. Same flow as /reflect Step 4.6.
The routing subagent doesn't return file paths directly. Use git diff against HEAD to detect new files in the vault since ingest started:
All temp files in 5.6 use a session-keyed prefix so parallel /ingest invocations don't race. Each bash block re-derives the same paths from $CLAUDE_SESSION_ID (stable across the session); when passing paths into agent prompts or other tools, substitute the resolved literal value.
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest"
cd "$HOME/brain"
git diff --name-only --diff-filter=A HEAD -- brain/0-inbox/ brain/1-fleeting/ brain/2-literature/ brain/3-permanent/ brain/5-maps/ \
| sed "s|^|$HOME/brain/|" \
> "${LL_TMP_PREFIX}-new-notes.txt"
If the file is empty, skip the rest of 5.6 and report Refinement: 0 new notes from ingest.
Caveat: this assumes the vault was at clean HEAD state when ingest started. If the user had uncommitted vault work, it may include unrelated files. The hard cap on LLM calls (50, below) bounds the worst case.
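A defensive pre-check for that caveat might look like the following. This is a sketch, not part of the shipped flow, which relies on the 50-call cap instead; the function name is illustrative:

```python
import subprocess

def vault_is_clean(repo_dir):
    """Return True if the vault repo has no uncommitted or untracked changes.

    Sketch only: could be run before the git-diff detection step to warn
    that unrelated files may be picked up as "new notes from ingest".
    """
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return out.strip() == ""
```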
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest"
node "${CLAUDE_PLUGIN_ROOT}/scripts/refinement-candidates.mjs" --stdin --pairs-out "${LL_TMP_PREFIX}-refinement-pairs.json" < "${LL_TMP_PREFIX}-new-notes.txt" > /dev/null
If the resulting pairs JSON has more than 50 entries, truncate to the first 50 (highest cosine first since the candidate script sorts that way) and append the deferred remainder to ${CLAUDE_PLUGIN_DATA:-$(node "${CLAUDE_PLUGIN_ROOT}/scripts/resolve-paths.mjs" PLUGIN_DATA)}/refinement-deferred.jsonl as one JSON object per line. The deferred queue is drained by the next /reflect invocation (which has no batch cap).
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest"
DATA_DIR="${CLAUDE_PLUGIN_DATA:-$(node "${CLAUDE_PLUGIN_ROOT}/scripts/resolve-paths.mjs" PLUGIN_DATA)}"
mkdir -p "$DATA_DIR"
LL_PAIRS_PATH="${LL_TMP_PREFIX}-refinement-pairs.json" DATA_DIR="$DATA_DIR" python3 - <<'PY'
import json, os

pairs_path = os.environ["LL_PAIRS_PATH"]
pairs = json.load(open(pairs_path))
keep, defer = pairs[:50], pairs[50:]
json.dump(keep, open(pairs_path, "w"), indent=2)
defer_path = os.path.join(os.environ["DATA_DIR"], "refinement-deferred.jsonl")
if defer:
    with open(defer_path, "a") as f:
        for p in defer:
            f.write(json.dumps(p) + "\n")
    print(f"deferred {len(defer)} pairs to {defer_path}")
PY
Same as /reflect Step 4.6.b through 4.6.f. Spawn refinement-proposer with the pairs file, validate via refinement-validate.mjs, present preview-format table, apply approved edits via Write, route counterpoints via Edit, emit provenance events.
The subagent_type is learning-loop:refinement-proposer. The pairs_file is the resolved value of ${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest-refinement-pairs.json (substitute the literal path before passing to the agent). Likewise for the agent output (-refinement-agent-output.json) and validated output (-refinement-validated.json). Use AskUserQuestion for batch confirmation.
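Resolving those literal paths before substituting them into the agent prompt can be sketched as follows; env var names follow the bash blocks above, and the function name is illustrative:

```python
import os

def resolve_ingest_paths():
    """Derive the session-keyed temp paths used throughout Step 5.6.

    Mirrors the bash expansion ${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest
    so the literal values can be substituted into agent prompts.
    """
    tmpdir = os.environ.get("TMPDIR", "/tmp").rstrip("/")
    session = os.environ.get("CLAUDE_SESSION_ID", "session")
    prefix = f"{tmpdir}/ll-{session}-ingest"
    return {
        "pairs_file": f"{prefix}-refinement-pairs.json",
        "agent_output": f"{prefix}-refinement-agent-output.json",
        "validated": f"{prefix}-refinement-validated.json",
    }
```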
LL_TMP_PREFIX="${TMPDIR:-/tmp}/ll-${CLAUDE_SESSION_ID:-session}-ingest"
rm -f "${LL_TMP_PREFIX}-new-notes.txt" "${LL_TMP_PREFIX}-refinement-pairs.json" "${LL_TMP_PREFIX}-refinement-agent-output.json" "${LL_TMP_PREFIX}-refinement-validated.json"
Report counts in Step 6.
Display the routing agent's summary, the sweep results, and the refinement results (if --refine was passed). Done.