From claude-claw
Generates OpenClaw SKILL.md files for skills or slash commands. Interviews for new skill requirements or ports existing Claude Code skills with frontmatter translation and tool mapping.
npx claudepluginhub gupsammy/claudest --plugin claude-claw

This skill uses the workspace's default tool permissions.
Generate well-structured OpenClaw skills or slash commands. Both are SKILL.md files with YAML frontmatter — they share the same structure but differ in how they're triggered and described. OpenClaw uses the AgentSkills spec (pi-coding-agent) with its own frontmatter fields, tool names, and path conventions distinct from Claude Code.
Before generating, retrieve the latest OpenClaw skill documentation:
clawdocs get "tools/skills" --no-header -q
Capture any frontmatter fields or options not already listed in {baseDir}/references/frontmatter-options.md. If clawdocs is unavailable, proceed with current references — they are sufficient. If new fields appear, use them and note the update.
Parse $ARGUMENTS for type hint. Users are often unclear on OpenClaw-specific conventions. Interview to gather:
- Objective and trigger scenarios (in the user's own words)
- Binary requirements (metadata.openclaw.requires.*)
- Delegation: does it need sessions_spawn? Command dispatch (bypass model)?

Proceed to Phase 2 when at minimum Objective and Trigger Scenarios are established.
When the user provides an existing Claude Code skill to port (file path or pasted content), skip the interview and apply this translation sequence:
1. Strip Claude Code-only frontmatter fields (model, context, agent, allowed-tools, hooks, license). Add metadata with requires if scripts need specific binaries. See {baseDir}/references/frontmatter-options.md for valid fields.
2. Map tool names per {baseDir}/references/claw-patterns.md: Bash→exec, Read→read, Write→write, Edit→edit, Glob/Grep→exec+find/rg, WebSearch→web_search, WebFetch→web_fetch, Task→sessions_spawn, AskUserQuestion→conversational asking.
3. Replace $CLAUDE_PLUGIN_ROOT with {baseDir}. Remove @file injection and bang-backtick references.

Apply throughout generation: use imperative voice and terse phrasing because every token in a generated skill body costs budget on every invocation. Prefer instruction over example — state the rule with its reasoning so it generalizes to every input.
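The tool-name translation in the porting sequence can be sketched as a lookup table. The name pairs are taken from this section; the fallback behavior (collecting unmapped tools for manual review) is an assumption:

```python
# Claude Code -> OpenClaw tool-name mapping, as listed in the porting sequence.
# Tools without a one-to-one equivalent map to a replacement strategy string.
CLAUDE_TO_OPENCLAW = {
    "Bash": "exec",
    "Read": "read",
    "Write": "write",
    "Edit": "edit",
    "Glob": "exec+find/rg",
    "Grep": "exec+find/rg",
    "WebSearch": "web_search",
    "WebFetch": "web_fetch",
    "Task": "sessions_spawn",
    "AskUserQuestion": "conversational asking",
}

def translate_tools(claude_tools):
    """Map each Claude Code tool name; collect unknowns for manual review."""
    mapped, unknown = [], []
    for tool in claude_tools:
        target = CLAUDE_TO_OPENCLAW.get(tool)
        (mapped if target else unknown).append(target or tool)
    return mapped, unknown
```

Anything left in the unknown list (e.g. Skill, which has no OpenClaw counterpart) needs a manual porting decision.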
Initialize directory first (when creating a new skill directory, not editing an existing file):
python3 {baseDir}/scripts/init_claw_skill.py <name> --path <dir> [--resources scripts,references,assets]
Exit 0 = directory scaffolded, proceed to Step 1. Exit 1 = naming collision; ask user whether to overwrite or rename.
- skill: auto-triggered by the routing model based on the description
- command: user-invoked from the / menu
- command-dispatch: tool with command-tool — bypasses model entirely, routes directly to a named tool (rare; for pure pass-through cases)

Read {baseDir}/references/frontmatter-options.md for the full OpenClaw field catalog, description patterns, and the metadata single-line JSON constraint.
Key constraint: metadata must be a single-line JSON object on one line. Multi-line YAML mappings under metadata are not valid in OpenClaw.
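The single-line constraint can be enforced mechanically. A sketch, with simplified frontmatter parsing that assumes metadata appears as a plain `metadata: {...}` line:

```python
import json

def metadata_is_single_line_json(frontmatter_text):
    """Return True if the metadata field is one line of valid JSON (or absent)."""
    for line in frontmatter_text.splitlines():
        if line.startswith("metadata:"):
            value = line[len("metadata:"):].strip()
            try:
                return isinstance(json.loads(value), dict)
            except json.JSONDecodeError:
                return False  # multi-line YAML mapping or malformed JSON
    return True  # no metadata field is also valid

ok = 'name: demo\nmetadata: {"openclaw": {"requires": {"bins": ["jq"]}}}'
bad = 'name: demo\nmetadata:\n  openclaw:\n    requires: {}'
```

The `bad` example is the multi-line YAML mapping form that OpenClaw rejects.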
Description density rules: Keep descriptions under ~400 characters / ~100 tokens (600 chars / 150 tokens absolute max) — they load every session. Per the OpenClaw cost formula, each skill adds 195 + 97 + (combined field lengths) characters to the system prompt; a 10-skill install with verbose descriptions burns significant context on routing metadata alone. Derive trigger phrases from the user's actual words in Phase 1, not paraphrases. See the token budget and trigger derivation principles in {baseDir}/references/frontmatter-options.md.
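The stated cost formula makes the overhead easy to estimate. A sketch: the 195 and 97 constants come from this section, while the example skill name and descriptions are hypothetical:

```python
def skill_prompt_cost(name, description):
    # Per the OpenClaw cost formula quoted above: fixed overhead plus
    # per-skill wrapper plus the lengths of the frontmatter fields.
    return 195 + 97 + len(name) + len(description)

terse = skill_prompt_cost(
    "pdf-extract",
    "Extract tables from PDFs. Use when the user uploads a PDF.",
)
verbose = skill_prompt_cost("pdf-extract", "x" * 600)  # at the 600-char cap
# A 10-skill install pays 10x the difference on every session.
```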
Intensional over extensional — state the rule with its reasoning rather than listing examples that imply the rule. An intensional rule generalizes to every input the skill will encounter; an extensional list only covers the shapes shown.
Before writing the body, verify the description will route correctly. Mentally generate several prompts that should trigger this skill and several adjacent prompts that should NOT.
Evaluate: does the description cover all should-trigger prompts? Would it plausibly reject the should-NOT-trigger prompts? If coverage is weak, revise the description — add missing trigger phrases, tighten language to exclude adjacent domains, or add a negative trigger ("Not for X").
This step catches routing misses before the rest of the skill is built. Proceed when description coverage is adequate.
Construction rules:
- Detailed reference material belongs in references/, not SKILL.md
- {baseDir} is the path variable for skill-relative file references (substituted before the model sees the skill body)

Both skills and commands follow the same body pattern:
```markdown
# Name

Brief overview (1-2 sentences).

## Process

1. Step one (imperative voice)
2. Step two
3. Step three
```
Dynamic Content:

| Syntax | Purpose |
|---|---|
| $ARGUMENTS | All arguments as string |
| $1, $2, $3 | Positional arguments |
| {baseDir} | Absolute path to skill directory (substituted at load time) |
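The substitutions in the table can be approximated as plain string replacement. A sketch of assumed loader behavior, not the actual pi-coding-agent implementation; the positional-argument handling here is illustrative:

```python
import re

def expand_skill_body(body, base_dir, args):
    """Approximate the loader's substitution pass for the table above."""
    out = body.replace("{baseDir}", base_dir)
    out = out.replace("$ARGUMENTS", " ".join(args))

    # $1, $2, ... become positional arguments (empty string if absent).
    def positional(match):
        index = int(match.group(1)) - 1
        return args[index] if index < len(args) else ""

    return re.sub(r"\$(\d+)", positional, out)

body = "Run {baseDir}/scripts/go.py $1 with: $ARGUMENTS"
expanded = expand_skill_body(body, "/skills/demo", ["fast", "verbose"])
```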
Note: @file injection and bang-backtick command expansion are Claude Code features specific to Claude Code's skill loader implementation — the pi-coding-agent skill loader only supports {baseDir} path substitution and does not implement these extensions. Do not use them in generated OpenClaw skills.
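A generated body can be linted for these unsupported extensions before delivery. A sketch; the regexes are assumptions about how @file injection and bang-backtick expansion typically appear in Claude Code skills:

```python
import re

# Claude Code-only syntax that the pi-coding-agent loader does not implement.
FORBIDDEN = [
    (re.compile(r"\$CLAUDE_PLUGIN_ROOT"), "use {baseDir} instead"),
    (re.compile(r"(?<!\w)@[\w./-]+\.md"), "@file injection is unsupported"),
    (re.compile(r"!`[^`]+`"), "bang-backtick expansion is unsupported"),
]

def lint_body(body):
    """Return a note for each forbidden Claude Code construct found."""
    return [note for pattern, note in FORBIDDEN if pattern.search(body)]
```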
Read {baseDir}/references/script-patterns.md and apply the five signal patterns to every workflow step in the skill being generated:
| Signal | Question | If yes → |
|---|---|---|
| Repeated Generation | Does any step produce the same structure with different params across invocations? | Parameterized script in scripts/ |
| Unclear Tool Choice | Does any step combine multiple operations in a fragile sequence naturally expressible as one function? | Script the procedure |
| Rigid Contract | Can you write --help text for this step right now without ambiguity? | CLI candidate |
| Dual-Use Potential | Would a user want to run this step from the terminal, outside the skill workflow? | Design as proper CLI from the start |
| Consistency Critical | Must this step produce bit-for-bit identical output for identical inputs? | Script — never LLM generation |
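For example, a step that must produce bit-for-bit identical output (Consistency Critical) and whose --help text is unambiguous (Rigid Contract) belongs in scripts/ as a small CLI. A hypothetical sketch, not one of this skill's actual scripts:

```python
#!/usr/bin/env python3
"""Hypothetical scripts/slugify.py: deterministic name normalization.

Scripted rather than LLM-generated so identical inputs always yield
identical output (Consistency Critical).
"""
import argparse
import re

def slugify(name: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to '-', trim edges."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return slug or "unnamed"

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Normalize a skill name to a directory-safe slug."
    )
    parser.add_argument("name", help="raw skill name")
    print(slugify(parser.parse_args(argv).name))
```

It would be invoked via the exec tool as python3 {baseDir}/scripts/slugify.py "<name>".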
For each identified script candidate:
- Classify it against the script archetypes in {baseDir}/references/script-patterns.md (init/validate/transform/package/query)
- Write it to scripts/ using the Python template from {baseDir}/references/script-patterns.md
- Wire it into the skill body: invocation via the exec tool, output interpretation

Wiring rule: A script reference must state when to invoke (trigger condition), how to invoke (exact command with flags), and what to do with the result (exit code handling, which output fields matter).
Scripts are invoked via the exec tool (not Bash). Reference paths using {baseDir}/scripts/script.py.
Read {baseDir}/references/claw-patterns.md for delegation patterns, sessions_spawn usage, cross-skill reference conventions, and tool group translations (Claude Code → OpenClaw tool name mapping).
Scan for existing resources before finalizing:
Review available OpenClaw skills (check ~/.openclaw/skills/ and workspace/skills/)
For each workflow step, ask: "Do we already have this?"
Common delegation patterns:
- Reuse another skill's instructions: read {baseDir}/../<other-skill>/SKILL.md via the read tool, or instruct the user to type /<other-skill-name>
- Delegate long-running work: sessions_spawn (non-blocking; result announced back to chat)
- Fetch documentation via exec: clawdocs get "<slug>" --no-header -q

There is no Skill tool in OpenClaw — skills are invoked by the routing model, not programmatically from within another skill.
When generating a new skill directory (not editing an existing single file):
python3 {baseDir}/scripts/validate_claw_skill.py <skill-directory> --output json
Exit 0 = proceed to Phase 3. Exit 1 = parse the errors array; each entry has field, message, severity. Resolve all critical and major items before writing to disk.
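Handling the exit-1 case might look like the following. The errors array with field/message/severity entries matches the validator output described above; the sample payload itself is made up:

```python
import json

def blocking_issues(validator_stdout):
    """Return the errors that must be fixed before writing to disk."""
    report = json.loads(validator_stdout)
    return [e for e in report.get("errors", [])
            if e["severity"] in ("critical", "major")]

# Hypothetical validator output for illustration.
sample = json.dumps({"errors": [
    {"field": "description", "message": "exceeds 600 chars", "severity": "major"},
    {"field": "metadata", "message": "consider requires.bins", "severity": "minor"},
]})
```

Minor-severity items can be surfaced to the user as suggestions rather than blockers.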
When presenting the generated skill/command to the user, briefly explain each frontmatter decision, for example:

- "metadata.openclaw.requires.bins: [jq] because the skill calls jq in a subprocess"
- "user-invocable at default (true), command-dispatch omitted (skill routes through model)"

| Type | Location | When active |
|---|---|---|
| Workspace skill | <workspace>/skills/<name>/ | Next session in that workspace |
| Managed skill | ~/.openclaw/skills/<name>/ | Shared across all agents on this machine |
Skills are session-snapshotted — changes take effect on the next new session, not the current one.
Before writing:
Writing to: [path]
This will [create new / overwrite existing] file.
Proceed?
Summarize what was created:
Invoke only when the user explicitly requests distribution:
clawhub publish <skill-directory> --slug <slug> --version X.Y.Z --tags latest
Exit 0 = published. Exit 1 = validation or auth failure; read stdout for details.
Score the generated skill/command:
| Dimension | Criteria |
|---|---|
| Clarity (0-10) | Instructions unambiguous, objective clear |
| Precision (0-10) | Appropriate specificity without over-constraint |
| Efficiency (0-10) | Token economy — maximum value per token |
| Completeness (0-10) | Covers requirements without gaps or excess |
| Usability (0-10) | Practical, actionable, appropriate for target use |
Target: 9.0/10.0. If below, refine once addressing the weakest dimension, then deliver.
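The refine-once rule can be sketched as follows. The dimension names and the 9.0 target come from the rubric above; the scores themselves are the model's judgment, not computed, and the example values are hypothetical:

```python
DIMENSIONS = ("clarity", "precision", "efficiency", "completeness", "usability")

def review(scores):
    """Average the five 0-10 scores and pick the weakest dimension."""
    average = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    weakest = min(DIMENSIONS, key=lambda d: scores[d])
    return average, weakest

average, weakest = review({"clarity": 9, "precision": 10, "efficiency": 7,
                           "completeness": 9, "usability": 9})
needs_refinement = average < 9.0  # if True: refine once, addressing `weakest`
```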
Re-run validate_claw_skill.py after any revisions and verify the validation checklist below before finalizing.
| Level | When to Use | Format |
|---|---|---|
| High freedom | Multiple valid approaches, context-dependent decisions | Text instructions, heuristics |
| Medium freedom | Preferred pattern exists, some variation acceptable | Pseudocode, scripts with parameters |
| Low freedom | Fragile operations, consistency critical, specific sequence required | Exact scripts, few parameters |
Format Economy:
Remove ruthlessly: Filler phrases, obvious implications, redundant framing, excessive politeness
Before finalizing an OpenClaw skill or command:
Structure:
- Valid YAML frontmatter with name and description fields
- metadata field (if present) is single-line JSON on one line

Description Quality:
- Under ~400 characters (600 absolute max); trigger phrases derived from the user's actual words in Phase 1
OpenClaw Correctness:
- No Claude Code-only frontmatter fields: model, context, agent, allowed-tools, hooks, license
- No Claude Code tool names (Bash, WebSearch, WebFetch, Read, Write, Edit, Glob, Grep, Task, Skill, AskUserQuestion, EnterPlanMode, ExitPlanMode); OpenClaw names only: exec, read, write, edit, web_search, web_fetch, sessions_spawn
- {baseDir} for skill-relative paths (not $CLAUDE_PLUGIN_ROOT)
- Target location is workspace (<workspace>/skills/) or managed (~/.openclaw/skills/)

Content Quality:
- Detailed reference material lives in references/, not the SKILL.md body
- Script references state invocation (via exec), output handling

Progressive Disclosure:
- Details deferred to references/ and scripts/
- examples/ present if skill produces user-adaptable output (see {baseDir}/examples/sample-command/SKILL.md for a minimal command example)

| Issue | Action |
|---|---|
| Unclear requirements | Ask clarifying questions before generating |
| Missing context | Request usage examples or target scenarios from user |
| Path issues | Verify target directory exists; let init_claw_skill.py create it |
| Type unclear | Default to skill (auto-triggered) if user hasn't specified |
| clawdocs unavailable | Proceed with current references — they are sufficient |
Execute phases sequentially. Always fetch current documentation first.