From forge-core
Extract session learnings and apply them as updates to rules, skills, and agents. USE WHEN session produced reusable patterns, corrections, or conventions worth capturing.
`npx claudepluginhub n4m3z/forge-core`

This skill uses the workspace's default tool permissions.
Extract reusable learnings from the current session and apply them as updates to rules, skills, and agents in the current repo.
| Workflow | Trigger | Section |
|---|---|---|
| Analyze | "learn from this session", "extract learnings" | Analyze |
| Targeted | "add a rule for X", "update the agent to note Y" | Apply Changes |
First pass — scan for user-correction signals. Re-read the conversation specifically looking for: user messages containing "no", "don't", "actually", "wait", "stop", "that's wrong"; follow-up questions that reveal you missed something ("did you X?", "is it still Y?"); user edits or rewrites of your output; requests that imply a prior step should have been done differently. These are the highest-value learnings — they encode behaviors the user actively wants changed. Surface them first.
Second pass — walk the scan checklist below: go through each category and list concrete findings before filtering.
For each finding, apply the reusability test: will this come up again in a different session? If no, skip it. If uncertain, include it: a skipped learning is lost, an extra proposal can be rejected.
Before drafting any proposal, list every file in rules/, skills/, and agents/ of the target module. For each candidate learning, search by topic for an existing artifact that already touches the same area. The default outcome is a one-line edit to an existing file, NOT a new file.
Concrete signals that you should be editing, not creating: an existing file already covers the same topic, or the draft rule is shorter than ~3 sentences — a rule that short almost certainly belongs as a paragraph in an existing file. Only create a new file when the learning is genuinely orthogonal to everything that exists.
Determine the target artifact. The categorization decision matters — rules cost tokens every session; skills cost tokens only when invoked.
| Learning type | Target | Example |
|---|---|---|
| Always-relevant constraint | rules/ | "every text file ends with newline" |
| Task-specific guidance | skills/*/SKILL.md with paths: | "shell scripting pitfalls — auto-trigger on `**/*.sh`" |
| Skill workflow improvement | skills/*/ | "add note about tool limitation" |
| Agent instruction update | agents/ | "add guidance about deployment scope" |
| Existing file refinement | edit in place | "add RACI to required frontmatter list" |
If the guidance only matters when working on certain files (shell scripts, Python, Markdown), make it a skill with paths: frontmatter so it auto-triggers on relevant file edits — don't put it in rules/ where it loads on every session regardless of relevance.
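For example, a minimal sketch of such frontmatter — paths: is the key described above; the other fields and their exact names are assumptions, not confirmed here:

```yaml
# SKILL.md frontmatter sketch — field names other than paths: are assumptions
name: shell-pitfalls                     # hypothetical skill name
description: Shell scripting pitfalls. USE WHEN editing shell scripts.
paths:
  - "**/*.sh"                            # auto-trigger on shell script edits
```

The guidance in the skill body below the frontmatter then costs tokens only when a matching file is touched, never on unrelated sessions.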
LearnFrom always edits source files in the module root (rules/, skills/, agents/). Never edit deployed copies under .claude/, .codex/, .gemini/, or .opencode/ — those are forge install outputs that get overwritten on every sync.
Start with the source artifact relevant to the current workload, then check its provenance record (.provenance/<file>.yaml sidecar). If resolvedDependencies lists an upstream module (forge-core, forge-dev, etc.), the artifact is inherited content. Propose the change upstream; the local copy refreshes via forge install after the upstream merges. Edit the local copy directly only when the change is genuinely repo-specific (project-only convention, local override).
Skipping this check creates drift: a local edit to inherited content silently shadows future upstream improvements, and a local edit to a deployed copy is overwritten on the next sync.
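As a rough illustration of that check — only the resolvedDependencies field is named above; the sidecar path layout and any other fields are assumptions:

```yaml
# .provenance/rules/newline-at-eof.md.yaml — sidecar layout assumed
resolvedDependencies:
  - forge-core        # inherited from upstream: propose the change there, not locally
```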
For each identified learning, draft a concrete proposal, in priority order: an edit to an existing artifact first, a new file only if the learning is genuinely orthogonal to everything that exists.
Show the existing-file scan result alongside each proposal so the user can see what was considered and rejected.
Rules must fire on a concrete trigger. Abstract principles ("be careful with X") don't change behavior; concrete signals do ("when you're about to write Y, check Z"). Iterate the wording with the user — first drafts are usually too abstract OR too tied to one specific case. Filename should match the final framing — rename if the concept shifts during iteration.
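A hypothetical before/after of the same learning, to show the contrast:

```yaml
too_abstract: "Be careful with trailing newlines."   # no trigger, nothing to check
concrete: "When about to Write a text file, check it ends with exactly one newline."
```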
Present proposals in batches via AskUserQuestion (4 questions max per call).
For each proposal, show the target file and proposed change with a preview field carrying the literal content that would be written. Options: "Capture", "Adjust", "Skip".
For "Adjust": ask what to change, then re-present.
After the first batch, dig deeper before declaring done. Most sessions have 2-3 obvious learnings and several non-obvious ones surfaced only by re-scanning corrections, pushback, and stuck moments. Ask yourself "what else did the user have to correct?" before stopping.
For each confirmed proposal:
- Rules: write to rules/ using the Write tool.
- Skills or agents: write to skills/ or agents/ using the Write tool.
- After writing, verify the file exists and has correct content.
List what was captured: rules created or updated, skills updated, agents updated, items skipped.
- Scan rules/, skills/, and agents/ BEFORE drafting — a new file is the last resort, not the first instinct.
- Never edit deployed copies under .claude/, .codex/, .gemini/, or .opencode/.
- Follow the frontmatter schema (.mdschema) in rules/ if present — validate against .mdschema before writing.