From sage
Gathers sources such as migration guides, GitHub issues, changelogs, and user context, then filters them to extract agent-mistake corrections for pack building, discarding standard documentation.
`npx claudepluginhub xoai/sage`

This skill uses the workspace's default tool permissions.
Gather sources and extract agent-failure-relevant insights.
Core Principle: Not all sources are equal. Official docs explain HOW things work — agents already know that from training. Blog posts about mistakes, migration guides, GitHub issues, and changelog breaking changes reveal WHAT GOES WRONG — that's what packs need.
For community packs, prioritize sources in this order:
1. Migration guides (highest value) — they document what changed between versions and which old patterns are now wrong. This is exactly what agents get wrong: using outdated patterns from training data.
2. GitHub issues tagged "common mistake" or "FAQ" — real users hitting real problems means agents will hit them too.
3. Framework changelog / breaking changes — which APIs were removed or renamed. Agents still use removed APIs.
4. Blog posts about pitfalls and best practices — especially posts titled "X mistakes with [framework]" or "Stop doing X in [framework]."
5. Official docs (lowest priority for packs) — use ONLY to verify that the corrections are accurate. Don't extract patterns from docs — they're documentation, not judgment.
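The ordering above can be sketched as a small ranking helper. This is a minimal sketch; the source-type labels are hypothetical names for illustration, not part of the skill's spec:

```python
# A lower number means higher priority, per the ordering above.
SOURCE_PRIORITY = {
    "migration_guide": 1,    # documents which old patterns are now wrong
    "github_issue": 2,       # real users hitting real problems
    "changelog": 3,          # removed/renamed APIs agents still use
    "pitfall_blog_post": 4,  # "X mistakes with [framework]" posts
    "official_docs": 5,      # verification only; don't extract patterns
}

def rank_sources(sources):
    """Sort (title, source_type) pairs so the highest-value sources come first."""
    return sorted(sources, key=lambda s: SOURCE_PRIORITY.get(s[1], 99))

ranked = rank_sources([
    ("Framework docs", "official_docs"),
    ("v18 to v19 migration guide", "migration_guide"),
    ("Common pitfalls post", "pitfall_blog_post"),
])
# migration guide first, official docs last
```

Unknown source types fall to the bottom (priority 99) rather than raising, so stray material never outranks a migration guide.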
For project overlays, the sources are the user's own materials:
For each source, ask ONE question:
"Does this tell me something agents get WRONG, or does it explain how something WORKS?"
Extract into a structured format:
## Source: [title/url]
## Relevance: [high/medium/low]
### Insight 1
Agent mistake: [what agents do wrong]
Correction: [what to do instead]
Evidence: [how we know agents do this — migration guide, common issue, etc.]
### Insight 2
...
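The extraction format above maps naturally onto a small data structure. A sketch, assuming illustrative class and field names (they are not defined by the skill):

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    agent_mistake: str  # what agents do wrong
    correction: str     # what to do instead
    evidence: str       # how we know agents do this

@dataclass
class ProcessedSource:
    title: str          # title or URL of the source
    relevance: str      # "high" | "medium" | "low"
    insights: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render this source in the template shown above."""
        lines = [f"## Source: {self.title}", f"## Relevance: {self.relevance}"]
        for i, ins in enumerate(self.insights, 1):
            lines += [
                f"### Insight {i}",
                f"Agent mistake: {ins.agent_mistake}",
                f"Correction: {ins.correction}",
                f"Evidence: {ins.evidence}",
            ]
        return "\n".join(lines)
```

Keeping mistake, correction, and evidence as separate fields makes it easy to drop any insight whose evidence field is empty — which is the filter the core principle demands.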
After processing all sources:
Token awareness: The pack has a budget (L1: 3500, L2: 5000, L3: 1500). Each pattern costs ~80-120 tokens, each anti-pattern ~60-90. Budget for 7-9 patterns + 5-7 anti-patterns + constitution. Don't extract 20 insights — pick the best 7-9.
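A back-of-envelope check of that budget, using the midpoints of the per-item cost ranges. The constitution cost below is an assumed placeholder, not a figure from the skill:

```python
# Per-level pack budgets, in tokens, as stated above.
BUDGETS = {"L1": 3500, "L2": 5000, "L3": 1500}

PATTERN_COST = 100       # midpoint of ~80-120 tokens per pattern
ANTI_PATTERN_COST = 75   # midpoint of ~60-90 tokens per anti-pattern
CONSTITUTION_COST = 400  # assumed placeholder; not specified by the skill

def estimated_cost(n_patterns: int, n_anti_patterns: int) -> int:
    """Rough token cost of a pack's patterns, anti-patterns, and constitution."""
    return (n_patterns * PATTERN_COST
            + n_anti_patterns * ANTI_PATTERN_COST
            + CONSTITUTION_COST)

# At the recommended maximum of 9 patterns + 7 anti-patterns:
# 9*100 + 7*75 + 400 = 1825 tokens, comfortably inside L1 and L2.
cost = estimated_cost(9, 7)
```

The same arithmetic shows why 20 insights would blow the budget: 20 patterns alone run ~2000 tokens before anti-patterns or the constitution are counted.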
For project overlays, the processing is different:
The overlay should be ONLY the delta — what's specific to this project.
Save to .sage/pack-build/sources.md:
# Processed Sources
## Top Agent Failures (ranked)
1. [failure] — Severity: [high/med] — Sources: [N] mentions
2. [failure] — ...
## Candidate Patterns
- [pattern idea from source processing]
- ...
## Candidate Anti-Patterns
- [anti-pattern idea from observation]
- ...
## Project-Specific Rules (overlay only)
- [convention or constraint]
- ...
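Assembling that skeleton can be sketched programmatically. The function name and tuple shapes here are illustrative assumptions; the skill itself saves the result to .sage/pack-build/sources.md:

```python
def render_sources_md(failures, patterns, anti_patterns, project_rules=None):
    """Build the Processed Sources skeleton shown above as a single string.

    `failures` is a list of (description, severity, mention_count) tuples,
    ranked most severe first.
    """
    lines = ["# Processed Sources", "", "## Top Agent Failures (ranked)"]
    for i, (desc, severity, mentions) in enumerate(failures, 1):
        lines.append(f"{i}. {desc} — Severity: {severity} — Sources: {mentions} mentions")
    lines += ["", "## Candidate Patterns"]
    lines += [f"- {p}" for p in patterns]
    lines += ["", "## Candidate Anti-Patterns"]
    lines += [f"- {a}" for a in anti_patterns]
    if project_rules:  # overlay builds only
        lines += ["", "## Project-Specific Rules (overlay only)"]
        lines += [f"- {r}" for r in project_rules]
    return "\n".join(lines) + "\n"
```

Making the project-rules section conditional mirrors the overlay distinction above: community packs omit it, overlays carry only the project-specific delta.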