Create new agent skills with best-practice templates. Guides through skill level selection (L0 pure prompt, L0+ with helper scripts, L1 with business scripts), environment strategy (stdlib/uv/venv), and generates ready-to-edit project files following runtime UX best practices. This skill should be used when creating a new skill, scaffolding a skill project, initializing skill templates, or when the user says 'help me build a skill', 'create a skill', '创建技能', '新建 skill'.
Install:

```bash
npx claudepluginhub psylch/better-skills --plugin better-skills
```

This skill uses the workspace's default tool permissions.
**Match user's language**: Respond in the same language the user uses.
Bundled resources:

- references/best_practices.md
- references/improvement_patterns.md
- references/validation_rules.md
- scripts/scaffold.py
- templates/l0/SKILL.md.tmpl
- templates/l0plus/SKILL.md.tmpl
- templates/l0plus/helper.sh.tmpl
- templates/l1/SKILL.md.tmpl
- templates/l1/env.example.tmpl
- templates/l1/main_stdlib.py.tmpl
- templates/l1/main_uv.py.tmpl
- templates/l1/main_venv.py.tmpl
- templates/l1/run.sh.tmpl
Create new agent skills by guiding the user through a series of choices, then generating a ready-to-edit project structure with best practices baked in.
Finally, run scaffold.py with the collected parameters (non-interactive).
Follow these steps in order. Use AskUserQuestion for steps 1–5.
Before any structural decisions, understand what the skill will do. Ask the user:
This step is critical — it determines the level recommendation and produces a better description field.
Ask the user for a skill name. Validate it meets these rules:

- lowercase letters, digits, and hyphens only (kebab-case)
- no leading, trailing, or consecutive hyphens
- 64 characters or fewer

If invalid, explain which constraint was violated and ask again.
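The authoritative checks live in scripts/scaffold.py; as a sketch, the naming rules above can be validated like this (a hypothetical helper, not the scaffold's actual code):

```python
import re

# Kebab-case: lowercase runs separated by single hyphens,
# which also rules out leading/trailing/consecutive hyphens.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_name(name: str) -> list[str]:
    """Return a list of human-readable violations (empty if the name is valid)."""
    errors = []
    if not NAME_RE.fullmatch(name):
        errors.append("use lowercase letters, digits, and single hyphens (kebab-case)")
    if len(name) > 64:
        errors.append("keep the name at 64 characters or fewer")
    return errors
```

Feeding the violation list back verbatim gives the user a concrete reason for each rejection.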
Based on the examples gathered in Step 1, recommend a level before asking:
Present the recommendation with rationale, then let the user confirm or override:
L0 — Pure Prompt: Only a SKILL.md file. All capabilities come from Claude's built-in tools, MCP servers, or general knowledge. Best for workflow guides, domain knowledge, configuration wizards. No scripts needed.
L0+ — Prompt + Helper Scripts: SKILL.md plus lightweight helper scripts for environment detection, status caching, or other auxiliary tasks. Core logic stays in the prompt. Best when Claude needs a preflight check or a small utility but handles business logic itself.
L1 — Prompt + Business Scripts: SKILL.md orchestrates CLI scripts that handle core business logic. Scripts accept parameters, return structured JSON, and follow MCP tool design principles. Best for skills that interact with APIs, process data, or perform operations that benefit from deterministic code.
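To make the L1 contract concrete, here is a hedged sketch of the shape such a business script might take (function and field names are illustrative, not the generated template):

```python
#!/usr/bin/env python3
"""Hypothetical shape of an L1 business script: CLI parameters in, JSON out."""
import argparse
import json
import sys

def run(query: str) -> dict:
    # Core business logic would go here; this stub just echoes its input.
    return {"status": "ok", "result": {"query": query}}

def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("query")
    args = parser.parse_args()
    try:
        print(json.dumps(run(args.query)))
        return 0
    except Exception as exc:
        # Errors also go out as JSON so the calling agent can parse them.
        print(json.dumps({"status": "error", "error": str(exc)}), file=sys.stderr)
        return 1

# In a generated script, main() is wired up via the usual
# `if __name__ == "__main__": sys.exit(main())` guard.
```

The key property is that both success and failure are machine-readable, so Claude never has to scrape free-form text.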
If the user chose L1, ask which environment strategy to use:
stdlib — Python standard library only. Zero dependencies, zero environment issues. Choose this when urllib, json, argparse, pathlib are sufficient. This is the recommended default.
uv — Dependencies declared inline via PEP 723, executed with uv run. No persistent venv, global cache, version-isolated. Choose this when external packages are needed but a full venv is overkill.
venv — Traditional per-skill virtual environment with run.sh wrapper. Choose this only when dependencies require C extensions, or the skill runs long-lived processes.
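For reference, a uv-style script declares its dependencies inline per PEP 723; `uv run` reads the comment block, provisions the packages in a cached, version-isolated environment, and then executes the script. The package shown is purely illustrative:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["httpx"]  # illustrative; any PyPI package can be listed here
# ///
import httpx  # resolved by `uv run` from the inline metadata above

print(httpx.__version__)
```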
Ask where to generate the skill. Default: current working directory. The script creates skills/<name>/ under this directory.
After collecting all parameters, run:
```bash
python3 {SKILL_DIR}/scripts/scaffold.py scaffold \
  --name <name> \
  --level <level> \
  [--env <strategy>] \
  --output <dir>
```
Where {SKILL_DIR} is the directory containing this SKILL.md file. Resolve it at runtime.
The script outputs JSON to stdout:
```json
{
  "status": "ok",
  "level": "l1",
  "env": "uv",
  "created": ["skills/my-skill/SKILL.md", "skills/my-skill/scripts/main.py", ...],
  "hint": "..."
}
```
If it fails, stderr contains JSON with error, hint, and recoverable fields.
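A caller can fold the two output channels into one result. This is a sketch under the field names shown above (`error`, `hint`, `recoverable` on stderr), not scaffold.py's own code:

```python
import json

def interpret_scaffold(returncode: int, stdout: str, stderr: str) -> dict:
    """Normalize scaffold.py's success and failure channels into one dict."""
    if returncode == 0:
        return json.loads(stdout)        # {"status": "ok", "created": [...], ...}
    try:
        failure = json.loads(stderr)     # {"error": ..., "hint": ..., "recoverable": ...}
    except json.JSONDecodeError:
        # Fall back gracefully if stderr carried a raw traceback instead of JSON.
        failure = {"error": stderr.strip(), "hint": None, "recoverable": False}
    return {"status": "error", **failure}
```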
Immediately after scaffolding, run the built-in structural validator:
```bash
python3 {SKILL_DIR}/scripts/scaffold.py validate --path <generated-skill-dir>
```
This checks:
- the frontmatter contains name and description fields
- no {{PLACEHOLDER}} tokens remain

Report any warnings in the completion output. This catches structural issues before the user invests time editing.
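The two checks amount to something like the following sketch (the real validator is in scripts/scaffold.py; this is only an approximation of its behavior):

```python
import re

def structural_warnings(skill_md: str) -> list[str]:
    """Approximate the validator: frontmatter fields present, no leftover tokens."""
    warnings = []
    # Frontmatter is the text between the first pair of --- delimiters.
    frontmatter = skill_md.split("---")[1] if skill_md.startswith("---") else ""
    for field in ("name:", "description:"):
        if field not in frontmatter:
            warnings.append(f"frontmatter is missing the {field.rstrip(':')} field")
    leftovers = re.findall(r"\{\{[A-Z_]+\}\}", skill_md)
    if leftovers:
        warnings.append(f"unreplaced placeholders: {sorted(set(leftovers))}")
    return warnings
```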
After successful generation, present:
```
[Skill Creator] Complete!

Skill: <name> (Level: <level>[, Env: <env>])
Output: <directory>

Files created:
  • <list from JSON "created" field>

Next Steps:
  → Edit SKILL.md — replace TODO markers, write description with trigger phrases
  → Customize scripts/ (L0+/L1)
  → Test preflight (L0+/L1)
  → Publish with skill-publish when ready
```
Then provide detailed guidance:
Edit SKILL.md — Replace all placeholder markers. The description field in frontmatter is critical — it determines when Claude activates the skill. Be specific and include trigger phrases.
Customize scripts/ (L0+/L1) — The generated scripts are functional frameworks with placeholder markers. Add your business logic.
Test preflight (L0+/L1) — Run the preflight command to verify the JSON output structure works:
- L0+: `bash scripts/helper.sh preflight`
- L1 (stdlib): `python3 scripts/main.py preflight`
- L1 (uv): `uv run scripts/main.py preflight`
- L1 (venv): `bash scripts/run.sh preflight` (after setup)

Add references/ — Put detailed reference documents here and reference them from SKILL.md with file read instructions. Keep SKILL.md lean (under 200 lines — the context window is shared with conversation history and other skills).
Use assets/ for non-context files (L0+/L1) — If your skill produces documents, templates, or uses images/fonts in output, put them in assets/. Unlike references/ (which are loaded into context), assets/ files are only used by scripts and never consume context window budget.
Do NOT add auxiliary docs — Do not create README.md, CHANGELOG.md, INSTALLATION_GUIDE.md, or other documentation files inside the skill directory. The skill should only contain files needed by the AI agent. READMEs belong at the repo level (handled by better-skill-publish).
Ready to publish? — If the better-skill-publish skill is installed, use it to wrap this into a complete GitHub repo with README, LICENSE, plugin.json, and marketplace.json.
If not installed: `npx skills add psylch/better-skills@better-skill-publish -g -y`
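As a quick way to verify the preflight convention listed in the Next Steps, a helper like this can confirm a command emits parseable JSON. The `{"status": "ok"}` convention is an assumption mirroring the scaffold output shown earlier:

```python
import json
import subprocess

def preflight_ok(cmd: list[str]) -> bool:
    """Run a preflight command; True if stdout is JSON with status == "ok"."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    try:
        payload = json.loads(proc.stdout)
    except json.JSONDecodeError:
        return False
    return payload.get("status") == "ok"
```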
The context window is a shared resource. Every line of SKILL.md competes with conversation history, other skills, and tool results. Keep SKILL.md under 200 lines. Move detailed documentation to references/.
Match instruction specificity to task fragility: spell out exact steps for fragile or destructive operations, and state only goals and constraints where Claude can safely improvise.
For skill design conventions — output formats, error handling, environment strategies, preflight conventions — read references/best_practices.md.
For common quality issues and how to avoid them, read references/improvement_patterns.md.
For what automated validation checks will be run (by better-skill-review), read references/validation_rules.md.