Use when user invokes /learn or wants to save patterns, decisions, gotchas, procedures, or feature knowledge from a conversation for later re-use. Look for user requests like "please remember" or "what did we learn from this?".
Captures durable project knowledge into reusable skills triggered by /learn or user requests to remember insights.
/plugin marketplace add codename-inc/spectre
/plugin install spectre@spectre

This skill inherits all available tools. When active, it can use any tool Claude has access to.
references/recall-template.md

You capture durable project knowledge into Skills that Claude Code loads on-demand.
If you loaded a skill earlier in this session (via Skill({name})) and subsequently:
You should update that skill directly before the session ends — do not ask the user for permission. Just do it.
When updating proactively:
This keeps knowledge fresh without requiring users to remember to call /learn.
Enable someone with zero context to become productive on this topic.
Every learning you create should allow a new team member (human or AI) to complete a task without asking follow-up questions. If they'd need to dig further to actually DO something, the learning isn't complete.
These principles apply to ALL categories. Structure varies by category, but depth is universal.
What's the ONE thing they must know? Put it first, not buried. Don't make them read 5 paragraphs to find the key point.
Why does this exist? What problem does it solve? 2-3 sentences max, then move on. Someone with zero context needs to understand WHY before HOW.
Include something they can DO: commands to run, code to copy, steps to follow. Information without action is trivia. If there's nothing actionable, question whether it's worth capturing.
Examples > explanations. A code snippet is worth 100 words of description. Every learning should have at least one concrete example.
What will trip them up? Call out pitfalls explicitly. The best learnings prevent errors, not just explain concepts.
Headers, tables, code blocks. Someone should get 80% of the value in 60 seconds of skimming. Dense paragraphs bury knowledge.
Before proposing ANY learning, ask yourself:
If any answer is no, add more depth or reconsider capturing it.
<CRITICAL>
{{project_root}} refers to the current working directory (process.cwd() / $PWD).
Resolution order:
1. CLAUDE_PROJECT_DIR environment variable (if set)
2. Current working directory ($PWD)

Do NOT use `git rev-parse --show-toplevel` or any git command to resolve this path.
</CRITICAL>
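The resolution order above can be sketched as a small helper. This is an illustrative sketch, not part of the skill itself; the function name is hypothetical:

```python
import os

def resolve_project_root() -> str:
    """Resolve {{project_root}}: CLAUDE_PROJECT_DIR if set, else the cwd ($PWD)."""
    # Deliberately no git commands here, per the rule above.
    return os.environ.get("CLAUDE_PROJECT_DIR") or os.getcwd()
```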
Each learning becomes its own skill at the project level:
{{project_root}}/.claude/skills/
├── spectre-recall/
│ ├── SKILL.md # Recall skill (discovery + embedded registry)
│ └── references/
│ └── registry.toon # Registry source of truth
├── {category}-{slug}/ # Learning = Skill
│ └── SKILL.md
├── {category}-{slug}/ # Learning = Skill
│ └── SKILL.md
└── ...
The registry is stored at {{project_root}}/.claude/skills/spectre-recall/references/registry.toon
Before proposing a learning, read the registry to check for existing learnings:
{{project_root}}/.claude/skills/spectre-recall/references/registry.toon
Format: {skill-name}|{category}|{triggers}|{description} (one learning per line)
Example: feature-spectre-plugin|feature|spectre, /learn, /recall|Use when modifying spectre plugin or debugging hooks
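The pipe-delimited format above can be read with a few lines of code. A minimal sketch (the function name is hypothetical; `.toon` here is just line-oriented text):

```python
def parse_registry(text: str) -> list[dict]:
    """Parse lines of the form skill-name|category|triggers|description.

    Comment lines (#) and blank lines are skipped; triggers are comma-separated.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, category, triggers, description = line.split("|", 3)
        entries.append({
            "name": name,
            "category": category,
            "triggers": [t.strip() for t in triggers.split(",")],
            "description": description,
        })
    return entries
```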
- With arguments: Use the explicit topic/content as the knowledge to capture.
- Without arguments: Analyze the recent conversation (last 10-20 messages) to identify what's worth preserving.
Determine if you have sufficient context to create a quality learning.
Ask yourself: Can I answer the category's required questions (from Section 6) using:
| Situation | Action |
|---|---|
| Topic was discussed in detail in recent messages | Proceed to Step 4 (Apply Capture Criteria) |
| You already understand the topic from this session | Proceed to Step 4 (Apply Capture Criteria) |
| Topic is unfamiliar / not discussed / you'd be guessing | Trigger Investigation Mode (Step 2b) |
When you lack context, investigate the codebase using subagents before creating a learning.
Classify the topic into a likely category. If ambiguous, ask the user:
I'll investigate "{topic}" in the codebase. Which type of learning?
- feature (how it works end-to-end)
- gotchas (debugging knowledge)
- patterns (repeatable solutions)
- decisions (architectural choices)
- procedures (multi-step processes)
- integration (external systems)
Dispatch an Explore agent to map relevant files:
Task(subagent_type="Explore", prompt="""
Find all files related to "{topic}" in this codebase:
- Entry points (routes, CLI commands, exports, event handlers)
- Core logic (main implementation files)
- Tests (unit tests, integration tests)
- Config (configuration, environment, constants)
- Docs (READMEs, comments, existing documentation)
Return a file map with:
- File path
- Brief description of what the file does
- Relevance to {topic} (high/medium/low)
Focus on HIGH and MEDIUM relevance files.
""")
Based on the category, dispatch 2-3 general-purpose agents in parallel. Each agent gets:
For feature investigations:
Agent 1: "What is {topic} and what problem does it solve? How do users interact with it?
Cite entry points and user-facing code."
Agent 2: "What is the technical architecture? How do components connect?
Cite core implementation files."
Agent 3: "What are common tasks someone would need to do? What files would they modify?
Cite specific functions/files for each task."
For gotcha investigations:
Agent 1: "What are the symptoms when {topic} goes wrong? What errors appear?
Cite error handling code and logs."
Agent 2: "What is the root cause? What non-obvious behavior exists?
Cite the specific code that causes confusion."
Agent 3: "What is the solution? How do you fix or work around it?
Cite the correct approach with code examples."
For other categories: Generate investigation questions from the category's required sections.
After subagents return:
Cross-reference - Connect insights across agents. Look for:
Resolve conflicts - If agents contradict each other:
Identify gaps - What required sections couldn't be answered?
Structure findings - Map synthesized knowledge to the category template from Section 6
After synthesis, proceed to Step 7 (Generate Skill Name).
Must meet at least 2 of 4:
| Criterion | Question |
|---|---|
| Frequency | Will this come up again? |
| Pain | Did it cost real debugging time? |
| Surprise | Was it non-obvious? |
| Durability | Still true in 6 months? |
Capture: Patterns, decisions with rationale, debugging insights, conventions, tribal knowledge.
Skip: One-off solutions, generic knowledge, temporary workarounds, simple preferences (-> CLAUDE.md).
ONLY use these categories. Do not invent new ones.
| Category | Categorize as this when the knowledge is about... |
|---|---|
| feature | How a feature works end-to-end: design, flows, key files |
| gotchas | Hard-won debugging knowledge, non-obvious pitfalls |
| patterns | Repeatable solutions used across the codebase |
| decisions | Architectural choices + rationale |
| procedures | Multi-step processes (deploy, release, etc.) |
| integration | Third-party APIs, vendor quirks, external systems |
| performance | Optimization learnings, benchmarks, scaling decisions |
| testing | Test strategies, coverage decisions, QA patterns |
| ux | Design patterns, user research insights, interactions |
| strategy | Roadmap decisions, prioritization rationale |
Category selection guide:
- feature
- decisions
- gotchas
- procedures
- integration

Each category has expected sections. These are minimums - add more depth as needed to meet the Content Principles.
Feature learnings are comprehensive "dossiers" that enable someone to work on a feature without prior context.
Required sections:
Gotchas capture hard-won debugging knowledge.
Required sections:
Patterns document repeatable solutions.
Required sections:
Decisions preserve architectural choices and rationale.
Required sections:
Procedures document multi-step processes.
Required sections:
Integrations document external system connections.
Required sections:
Follow the Content Principles. Include:
The skill name follows the pattern {category}-{slug}:
Naming rules (CRITICAL for discoverability):
VALID: feature-auth-flows, gotchas-hook-timeout, patterns-retry-logic
INVALID: auth-flows (no category), feature/auth-flows (no slashes), feature_auth_flows (no underscores)
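The naming rules above amount to a simple check: a known category prefix plus a kebab-case slug. A hypothetical validator sketch:

```python
import re

# The ten allowed categories from Section 5.
CATEGORIES = {"feature", "gotchas", "patterns", "decisions", "procedures",
              "integration", "performance", "testing", "ux", "strategy"}

# Lowercase words joined by hyphens; no slashes or underscores.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$")

def is_valid_skill_name(name: str) -> bool:
    """True if name matches {category}-{slug} with an allowed category prefix."""
    if not NAME_RE.match(name):
        return False
    return name.split("-", 1)[0] in CATEGORIES
```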
Rules:
Invalid examples: session-restore, handling-timeouts (missing category prefix).

Read the registry to find candidates, then read the actual skill file to compare content.
Registry scan - look for:
If candidate found, read {{project_root}}/.claude/skills/{skill-name}/SKILL.md and check:
UPDATE - New knowledge contradicts, extends, or supersedes an existing learning
APPEND - New learning belongs in same skill but is distinct
CREATE - No semantic match in registry
Decision priority: UPDATE > APPEND > CREATE (prefer consolidation over proliferation)
Before proposing, verify the learning is accurate. This is especially important for Investigation Mode learnings.
Verification checklist:
Spot-check key claims (2-3 minimum)
Verify file purposes
Trace one flow (for feature learnings)
If verification fails:
> **Note**: The {specific area} couldn't be fully verified.
> This may need confirmation.
Confidence calibration based on verification:
| Verification Result | Confidence |
|---|---|
| All claims verified, flows traced | high |
| Most verified, minor gaps | medium |
| Significant uncertainty, partial verification | low |
For Investigation Mode learnings, default to medium unless verification is thorough.
Stop and wait for user response. Format depends on action type:
For UPDATE (revising existing learning):
I'd update the skill: `{skill-name}`
**Current**: {1-2 sentence summary of existing}
**Proposed**: {1-2 sentence summary of revision}
**Reason**: {contradicts|extends|supersedes} - {why}
{Updated content preview - FULL content, not summary}
Update this? [Y/n/edit]
For APPEND (adding to existing skill):
I'd append to the skill: `{skill-name}`
**{Title}**
{Full content following category structure}
Trigger: {keywords}
Confidence: {low|medium|high}
Save this? [Y/n/edit]
For CREATE (new skill):
I'd create a new skill: `{skill-name}`
**{Title}**
{Full content following category structure}
Trigger: {keywords}
Confidence: {low|medium|high}
Create this? [Y/n/edit]
Confidence (determined in Step 9 - Verify Learning):
- y/yes -> write as proposed
- n/no -> cancel
- edit or custom text -> modify first

Location: {{project_root}}/.claude/skills/{skill-name}/SKILL.md
Skill Template:
---
name: {skill-name}
description: Use when {triggering conditions - MUST start with "Use when"}
user-invocable: false
---
# {Title}
**Trigger**: {keywords}
**Confidence**: {level}
**Created**: {YYYY-MM-DD}
**Updated**: {YYYY-MM-DD}
**Version**: 1
{Content - follows category-specific structure from Section 6}
UPDATE - Revise existing skill:
- Keep the **Created** date
- Set **Updated** to today
- Increment **Version** by 1

APPEND - For skills with multiple sections, add a new section:
---
## {New Section Title}
**Trigger**: {keywords}
**Confidence**: {level}
**Created**: {YYYY-MM-DD}
**Updated**: {YYYY-MM-DD}
**Version**: 1
{Explanation}
After writing the skill file, register it in the project registry and regenerate the recall skill. This is two file operations — no external scripts needed.
<CRITICAL>
**Registry description format:** The description is used to MATCH knowledge to tasks. It must describe WHEN to use the knowledge, not what it contains.
Good: "Use when modifying spectre plugin, debugging hooks, or adding knowledge categories"
Good: "Use when auth fails silently or tokens expire unexpectedly"
Bad: "spectre plugin architecture" (describes content, not when to use)
Bad: "Authentication system overview" (too vague, no triggering conditions)
</CRITICAL>
Path: {{project_root}}/.claude/skills/spectre-recall/references/registry.toon
Create the directory and file if they don't exist. The registry format is one entry per line:
# SPECTRE Knowledge Registry
# Format: skill-name|category|triggers|description
{skill-name}|{category}|{triggers}|{description}
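Creating the file if missing and keeping one entry per skill can be sketched as an upsert. A hypothetical helper, assuming the header and format shown above:

```python
import os

HEADER = ["# SPECTRE Knowledge Registry",
          "# Format: skill-name|category|triggers|description"]

def register(registry_path: str, name: str, category: str,
             triggers: str, description: str) -> None:
    """Add or replace one registry entry, creating the file/directory if needed."""
    entry = f"{name}|{category}|{triggers}|{description}"
    if os.path.exists(registry_path):
        with open(registry_path) as f:
            lines = [l.rstrip("\n") for l in f]
        # Drop any existing entry for this skill (UPDATE over proliferation).
        lines = [l for l in lines if not l.startswith(name + "|")]
    else:
        os.makedirs(os.path.dirname(registry_path), exist_ok=True)
        lines = list(HEADER)
    lines.append(entry)
    with open(registry_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```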
Path: {{project_root}}/.claude/skills/spectre-recall/SKILL.md
Read the full registry content from registry.toon, then write the recall skill with this exact structure:
---
name: spectre-recall
description: Use when user wants to search for existing knowledge, recall a specific learning, or discover what knowledge is available.
---
# Recall Knowledge
Search and load relevant knowledge from the project's spectre learnings into your context.
## Registry
{full registry content here}
## How to Use
1. **Scan registry above** — match triggers/description against your current task
2. **Load matching skills**: `Skill({skill-name})`
3. **Apply knowledge** — use it to guide your approach
## Search Commands
- `/recall {query}` — search registry for matches
- `/recall` — show all available knowledge by category
## Workflow
**Single match** → Load automatically via `Skill({skill-name})`
**Multiple matches** → List options, ask user which to load
**No matches** → Suggest `/learn` to capture new knowledge
Saved .claude/skills/{skill-name}/SKILL.md
Registered in .claude/skills/spectre-recall/references/registry.toon