From sanyuan0704-sanyuan-skills-1
Guides building production-grade Claude Code skills with architecture design, workflow checklists, prompt engineering, and packaging scripts.
IRON LAW: Every line in a skill must justify its token cost. If it doesn't make the model's output better, more consistent, or more reliable — cut it.
A skill is an "onboarding guide" for Claude — transforming it from a general-purpose agent into a specialized one with procedural knowledge, domain expertise, and bundled tools.
```
skill-name/
├── SKILL.md      # Required: workflow + instructions (<500 lines)
├── scripts/      # Optional: deterministic, repeatable operations
├── references/   # Optional: loaded into context on demand
└── assets/       # Optional: used in output, never loaded into context
```
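A minimal SKILL.md following this layout might look like the sketch below (the skill name, description, and rule are hypothetical):

```markdown
---
name: changelog-writer
description: Generates changelogs from git history. Use when the user asks to "write a changelog", "summarize commits", or "draft release notes".
---

IRON LAW: Never list a change that does not appear in the git log.

## Workflow

- [ ] Step 1: Collect commits since the last tag
- [ ] Step 2: Group changes by type (feat, fix, docs)
- [ ] Step 3: Present the draft to the user before writing any file
```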
Default assumption: Claude is already very smart. Only add what Claude doesn't already know. Challenge every paragraph: "Does this justify its token cost?"
Copy this checklist and check off items as you complete them:
Skill Forge Progress:
- [ ] Step 1: Understand the Skill ⚠️ REQUIRED
- [ ] 1.1 Clarify purpose and concrete use cases
- [ ] 1.2 Collect 3+ concrete usage examples
- [ ] 1.3 Identify trigger scenarios and keywords
- [ ] Step 2: Plan Architecture
- [ ] 2.1 Identify reusable resources (scripts, references, assets)
- [ ] 2.2 Design progressive loading strategy
- [ ] 2.3 Design parameter system (if applicable)
- [ ] Step 3: Initialize ⛔ BLOCKING (skip if skill already exists)
- [ ] Run init_skill.py
- [ ] Step 4: Write Description
- [ ] Load references/description-guide.md
- [ ] Apply keyword bombing technique
- [ ] Step 5: Write SKILL.md Body
- [ ] 5.1 Set Iron Law
- [ ] 5.2 Design workflow checklist
- [ ] 5.3 Add confirmation gates
- [ ] 5.4 Add parameter system (if applicable)
- [ ] 5.5 Apply writing techniques
- [ ] 5.6 Add anti-patterns list
- [ ] 5.7 Add pre-delivery checklist
- [ ] Step 6: Build Resources
- [ ] 6.1 Implement and test scripts
- [ ] 6.2 Write reference files
- [ ] 6.3 Prepare assets
- [ ] Step 7: Review ⚠️ REQUIRED
- [ ] Run pre-delivery checklist (from Step 5.7)
- [ ] Present summary to user for confirmation
- [ ] Step 8: Package
- [ ] Run package_skill.py
- [ ] Step 9: Iterate based on real usage
Ask yourself:
If unclear, ask the user (don't ask everything at once — start with the most critical):
Do NOT proceed until you have at least 3 concrete examples.
For each concrete example, ask:
Sort candidate resources into scripts/, references/, and assets/. Key constraints:
Skip if working on an existing skill. Otherwise run:
```shell
python3 scripts/init_skill.py <skill-name> --path <output-directory>
```
The script creates a template with Iron Law placeholder, workflow checklist, and proper directory structure.
This is the most underestimated part of a skill. The description determines:
Load references/description-guide.md for the keyword bombing technique and good/bad examples.
Key rule: NEVER put "When to Use" info in the SKILL.md body. The body loads AFTER triggering — too late.
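To illustrate (a hypothetical skill, not an example taken from the reference file), a keyword-bombed description front-loads every phrase a user is likely to say, so the skill triggers reliably:

```markdown
---
name: sql-migrator
description: Creates and edits SQL schema migrations. Use when the user mentions "migration", "schema change", "ALTER TABLE", "add a column", "database versioning", or asks to upgrade or roll back the schema.
---
```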
Load reference files as needed for each sub-step:
Ask: "What is the ONE mistake the model will most likely make with this skill?" Write a rule that prevents it. Place it at the top of SKILL.md, right after the frontmatter.
→ Load references/writing-techniques.md for Iron Law patterns and red flag signals.
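For instance, a hypothetical PDF-extraction skill whose most likely failure is silent paraphrasing might open with:

```markdown
IRON LAW: Quote extracted text verbatim. NEVER paraphrase, summarize, or "clean up" the source document unless the user explicitly asks.
```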
Create a trackable checklist with:
→ Load references/workflow-patterns.md for checklist patterns and examples.
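One possible shape for such a checklist (illustrative only; the step names are placeholders):

```markdown
Progress:
- [ ] Step 1: Gather inputs ⚠️ REQUIRED
- [ ] Step 2: Generate draft
- [ ] Step 3: User review ⛔ BLOCKING
- [ ] Step 4: Finalize and deliver
```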
Force the model to stop and ask the user before:
→ Load references/workflow-patterns.md for confirmation gate patterns.
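A confirmation gate can be as simple as an explicit stop instruction (hypothetical wording):

```markdown
⛔ GATE: Before overwriting any existing file, show the user a diff and ask: "Apply these changes? (yes / no / adjust)". Do NOT proceed without an explicit "yes".
```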
If the skill benefits from flags like --quick, --style, --regenerate N:
→ Load references/parameter-system.md for $ARGUMENTS, flags, argument-hint, and partial execution patterns.
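A sketch of how flags might appear in frontmatter and be handled in the body (the skill name and flag semantics are illustrative):

```markdown
---
name: report-gen
description: Generates research reports on a topic.
argument-hint: "[topic] [--quick] [--style formal|casual]"
---

Parse $ARGUMENTS:
- `--quick`: skip deep research and produce a one-page summary
- `--style`: tone of the report; default to "formal" when omitted
```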
Three techniques that dramatically improve output quality:
→ Load references/writing-techniques.md for all three with examples.
Ask: "What would Claude's lazy default look like for this task?" Then explicitly forbid it.
→ Load references/writing-techniques.md for anti-pattern examples.
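An anti-patterns list might read like this (hypothetical entries; tailor them to the lazy defaults you actually observe):

```markdown
## Anti-patterns

- Do NOT summarize when the user asked for the full output.
- Do NOT leave placeholder sections ("details TBD") in the deliverable.
- Do NOT silently skip a failing script; report the error and stop.
```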
Add concrete, verifiable checks. Each item must be specific enough that the model can check it by looking at the output. Not "ensure good quality" but "no placeholder text remaining (TODO, FIXME, xxx)."
→ Load references/output-patterns.md for checklist patterns and priority-based output.
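Building on the placeholder-text example above, a pre-delivery checklist of verifiable items might look like (illustrative):

```markdown
## Before delivering

- [ ] No placeholder text remaining (TODO, FIXME, xxx)
- [ ] Every file path mentioned in the output actually exists
- [ ] All workflow steps are checked off or explicitly skipped with a reason
```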
→ Load references/architecture-guide.md for detailed patterns.
Present the skill summary to the user and confirm before packaging.
Frontmatter must contain name and description only (plus optional allowed-tools, license, metadata). Then run:

```shell
python3 scripts/package_skill.py <path/to/skill-folder> [output-directory]
```
Validates automatically before packaging. Fix errors and re-run.
After real usage: