From the sage plugin
Plans Claude Code packs: identifies the pack type to build, checks for existing packs, classifies the layer (domain/framework/stack), branches between the community-pack and project-overlay paths, and captures observed agent failures. Triggers on pack creation and customization requests.
`npx claudepluginhub xoai/sage`

This skill uses the workspace's default tool permissions.
Skills in this plugin:
- Determines what pack to build and which path to follow.
- Generates Claude Code pack files from observation reports and processed sources, including patterns, anti-patterns, constitution, manifest, and tests for community packs or project overlays.
- Refines vague Claude Code prompts into structured, project-context-aware instructions by scanning package.json, CLAUDE.md, imports, and directory structure.
- Evaluates Claude Code packages across six quality dimensions (e.g., frontmatter, structure) for all seven package types, producing scored audit reports. Use for quick single-package audits or full repository scans.
Determine what pack to build and which path to follow.
Core Principle: Every pack starts from observed agent failures, not from documentation summaries. If you can't name a specific mistake agents make, you don't have a pack — you have a reference doc.
Ask the user:
"Are you building a shareable pack for a framework, or customizing an existing pack for your project's specific conventions?"
For a community pack, ask which framework the pack targets and which version it covers.
Look in the packs/ directory for an existing pack covering this framework.
Also check if a Layer 1 pack exists that this should build on.
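A minimal sketch of this check, assuming packs live as subdirectories of packs/ named after their framework or domain (the directory layout and the `framework` value are illustrative; adjust to the repo's actual structure):

```python
from pathlib import Path

PACKS_DIR = Path("packs")
framework = "nextjs"  # hypothetical: the framework being planned

# List pack directories, tolerating a missing packs/ dir.
packs = [p for p in PACKS_DIR.iterdir() if p.is_dir()] if PACKS_DIR.is_dir() else []

# Is there an existing pack already covering this framework?
existing = [p.name for p in packs if framework in p.name]

# Layer 1 (domain) packs this pack could declare as a dependency;
# domain names taken from the layer examples in the next step.
domains = {"web", "mobile", "api", "data"}
layer1 = [p.name for p in packs if p.name in domains]

print("existing packs:", existing)
print("candidate L1 dependencies:", layer1)
```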
Apply the three-layer test (a code sketch of it follows this list):
- "Does this apply to any project in the domain regardless of framework?" Yes → Layer 1 (domain foundation). Examples: web, mobile, api, data.
- "Does this apply to projects using this specific framework?" Yes → Layer 2 (framework pack). Examples: react, nextjs, vue, express.
- "Does this apply only when these specific tools are used together?" Yes → Layer 3 (stack composition). Examples: nextjs+prisma, flutter+firebase.
Record: framework name, version, layer, L1 dependency (if L2/L3).
For project overlays, ask which existing pack the overlay customizes, then ask the user to provide their project context (conventions, constraints, and where they are documented).
Record: target pack name, project context sources.
This is the most important step. Ask:
"What mistakes have you seen AI agents make with [framework]? Be specific — describe the bad code agents produce, not general problems."
If the user isn't sure, prompt them with candidate failure areas, e.g., places where the framework's idioms differ from generic code.
Record at least three (ideally five) specific agent failures. These become anti-patterns and drive pattern selection.
Save to .sage/pack-build/brief.md:
```markdown
# Pack Brief
## Path: [community-pack / project-overlay]
## Framework: [name]
## Version: [version]
## Layer: [1/2/3]
## Dependencies: [L1 pack, etc.]
## Observed Agent Failures
1. [specific failure]
2. [specific failure]
3. [specific failure]
## Sources to Process
- [urls, docs, or "user will provide"]
## Project Context (overlay only)
- [conventions provided]
- [constraints provided]
```
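For illustration, a hypothetical filled-in brief for a Layer 2 community pack (every value below is invented, including the listed failures; the overlay-only section is omitted because the path is community-pack):

```markdown
# Pack Brief
## Path: community-pack
## Framework: nextjs
## Version: 15
## Layer: 2
## Dependencies: web (L1)
## Observed Agent Failures
1. Adds "use client" to components that only render static markup
2. Fetches data in useEffect instead of in an async server component
3. Hand-rolls <img> tags instead of using next/image
## Sources to Process
- user will provide
```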