From skill-extractor
Extracts reusable skills from work sessions for non-obvious problems, patterns, workarounds, or debugging techniques. Manual invocation via /skill-extractor.
Install:

```sh
npx claudepluginhub trailofbits/skills-curated --plugin skill-extractor
```

This skill is limited to using a restricted set of tools.
Extracts reusable knowledge from work sessions and saves it as a Claude Code skill.
Use these prompts to identify knowledge worth extracting:
If you can't answer at least two of these with something non-trivial, it's probably not worth extracting.
/skill-extractor [--project] [context hint]
By default the skill saves to `~/.claude/skills/[name]/SKILL.md`; with `--project` it saves to `.claude/skills/[name]/SKILL.md` instead. A context hint focuses the extraction (e.g., `/skill-extractor the cyclic data DoS fix`).

Before creating a new skill, search for existing ones that might cover the same ground:
```sh
# Check user skills
ls ~/.claude/skills/

# Check project skills
ls .claude/skills/

# Search by keyword
grep -r "keyword" ~/.claude/skills/ .claude/skills/ 2>/dev/null
```
If a related skill exists, consider updating it instead of creating a new one. See skill-lifecycle.md for guidance on when to update vs create.
If $ARGUMENTS contains a context hint (e.g., "the cyclic data DoS fix"), use it to focus the extraction on that specific topic.
Analyze the conversation to identify:
Present a brief summary to the user:
I identified this potential skill:
**Problem:** [Brief description]
**Key insight:** [What made it non-obvious]
**Triggers:** [Error messages or symptoms]
Evaluate the candidate skill against these criteria:
| Criterion | Pass? | Evidence |
|---|---|---|
| Reusable - Helps future tasks, not just this instance | | [Why] |
| Non-trivial - Required discovery, not docs lookup | | [Why] |
| Verified - Solution actually worked | | [Evidence] |
| Specific triggers - Exact error messages or scenarios | | [What they are] |
| Explains WHY - Trade-offs and judgment, not just steps | | [How] |
| Value-add - Teaches judgment, not just facts Claude could look up | | [How] |
Present assessment to user and ask: "Proceed with extraction? [yes/no]"
The user decides whether to proceed regardless of how many criteria pass. Respect their judgment - if they say yes, extract; if no, skip.
Ask the user where to save the skill: user-level (the default) or project-level (`--project`).

If the topic involves a specific library or framework:
Skip research for:
Use the template from skill-template.md.
Quality standards: Follow quality-guide.md to ensure the skill provides lasting value. Key points:
Run through the validation checklist in skill-template.md. If validation fails, fix the issues before saving.
Create the directory and save:
Save to `~/.claude/skills/[name]/SKILL.md` (user scope) or `.claude/skills/[name]/SKILL.md` (project scope, with `--project`). Report success:
Skill saved to: [path]
The skill will be available in future sessions when the context matches:
"[first line of description]"
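The directory-and-save step can be sketched in shell. The skill name and description here are hypothetical placeholders (borrowed from the worked example at the end of this document); the real values come from the extraction itself:

```shell
# Hypothetical skill name; substitute the one generated during extraction.
name="cyclic-ast-visitor-hardening"
dir="$HOME/.claude/skills/$name"    # or .claude/skills/$name with --project

mkdir -p "$dir"
cat > "$dir/SKILL.md" <<'EOF'
---
name: cyclic-ast-visitor-hardening
description: Harden recursive AST visitors against cyclic data structures.
---
(body filled in from skill-template.md)
EOF

echo "Skill saved to: $dir/SKILL.md"
```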
When extracting, consider how the new knowledge relates to existing skills:
Combine or separate?
Update vs create:
Cross-referencing:
Skills aren't permanent. See skill-lifecycle.md for guidance on:
If you catch yourself thinking any of these, do NOT extract:
Scenario: User discovered that an AST visitor crashes with RecursionError when analyzing serialized files containing cyclic references (e.g., a list that contains itself).
Identified learning:
Generated skill name: cyclic-ast-visitor-hardening
Key sections:
Fix: add a `visited` set parameter and check membership before recursing.
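The visited-set pattern from this example can be sketched as follows. This is a minimal illustration over nested lists and dicts rather than a real AST visitor, and the names are illustrative; it is deliberately conservative in that any revisited container is treated as a cycle:

```python
def visit(node, visited=None):
    """Walk a nested structure without blowing the stack on cycles.

    `visited` holds id()s of containers already seen; checking it before
    recursing turns a RecursionError into a clean "<cycle>" marker.
    """
    if visited is None:
        visited = set()
    if isinstance(node, (list, dict)):
        if id(node) in visited:
            return "<cycle>"  # already seen: stop instead of recursing forever
        visited.add(id(node))
    if isinstance(node, list):
        return [visit(child, visited) for child in node]
    if isinstance(node, dict):
        return {key: visit(value, visited) for key, value in node.items()}
    return node

# A list that contains itself, as in the scenario above.
cyclic = [1, 2]
cyclic.append(cyclic)
print(visit(cyclic))  # [1, 2, '<cycle>']
```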