From rkstack
Create new skills for your project. Use when creating a skill from scratch, editing existing skills, or testing skill behavior. TDD applied to process documentation: write failing test, write skill, close loopholes.
npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack

This skill is limited to using the following tools:
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```shell
# === RKstack Preamble (writing-skills) ===
# Read detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
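As an illustration, the cache the preamble reads might look like this minimal sample. The real file is produced by `rkstack detect` and its exact schema may differ; the `sed` one-liner is just a jq-free way to pull one field out of this single-line sample.

```shell
# Illustration only: a minimal detection cache (real schema comes from `rkstack detect`)
mkdir -p .rkstack
printf '%s\n' '{"detection": {"flowType": "default", "repoMode": "solo"}}' > .rkstack/settings.json

# Extract one field without jq (works for this single-line sample)
_FLOW=$(sed -n 's/.*"flowType": *"\([^"]*\)".*/\1/p' .rkstack/settings.json)
echo "FLOW: $_FLOW"
# prints: FLOW: default
```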
Use the detection cache and preamble output to adapt your behavior:

- `detection.flowType` (web or default). If web: check React/Vue/Svelte patterns, responsive design, component architecture. If default: CLI tools, MCP servers, backend scripts.
- Use `just` commands instead of raw shell.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:
- State the current branch (the `_BRANCH` value from the preamble, NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
- RECOMMENDATION: Choose [X] because [one-line reason]; always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work.
- A) ... B) ... C) ... When an option involves effort, show both scales: (human: ~X / CC: ~Y)
- Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
When completing a skill workflow, report status using one of:
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
This skill teaches how to create skills for YOUR project.
Announce at start: "I'm using the writing-skills skill to create/edit a skill."
A skill is a SKILL.md file with instructions that Claude follows when relevant. Skills live in directories with optional supporting files:
```
my-skill/
  SKILL.md        # Main instructions (required)
  reference.md    # Detailed docs (loaded when needed)
  scripts/
    helper.sh     # Scripts Claude can execute
```
| Location | Path | Applies to |
|---|---|---|
| Personal | ~/.claude/skills/<name>/SKILL.md | All your projects |
| Project | .claude/skills/<name>/SKILL.md | This project only |
| Plugin | <plugin>/skills/<name>/SKILL.md | Where plugin is enabled |
Project skills are shared with the team (committed to git). Personal skills are yours only.
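As a sketch, creating a project skill is just a directory plus a SKILL.md; the name `my-skill` and its contents below are illustrative:

```shell
# Sketch: scaffold a project skill (name and contents are illustrative)
mkdir -p .claude/skills/my-skill/scripts
cat > .claude/skills/my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: Use when [triggering conditions].
---
Instructions go here.
EOF
```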
Full discovery rules and monorepo support: Read `refs/skills.md`, section "Where skills live".
| Type | Examples | What It Contains |
|---|---|---|
| Technique | TDD, debugging | Concrete method with steps |
| Pattern | brainstorming, planning | Way of thinking about problems |
| Reference | API docs, tool guides | Syntax, commands, configuration |
Create when: technique wasn't obvious, you'd reference it again, pattern applies broadly, others would benefit.
Don't create for: one-off solutions, standard practices, project-specific conventions (put in CLAUDE.md instead), constraints enforceable with automation (use hooks instead).
Every skill starts with YAML frontmatter between --- markers, followed by markdown instructions.
```yaml
---
name: my-skill
description: Use when [triggering conditions]. Use when [symptoms].
allowed-tools: Bash, Read, Edit
---
```
Key fields:

- `name` — letters, numbers, hyphens (max 64 chars). Becomes the /slash-command.
- `description` — tells Claude WHEN to use the skill. Start with "Use when..." Describe triggering conditions, NOT the workflow.
- `allowed-tools` — tools Claude can use without permission prompts when this skill is active.

Additional fields for controlling behavior:

- `disable-model-invocation: true` — user-only invocation (prevents Claude auto-triggering). Use for skills with side effects like deploy, commit, send-message.
- `user-invocable: false` — hide from the / menu. Use for background knowledge Claude should load automatically but users shouldn't invoke directly.
- `context: fork` — run in an isolated subagent. The skill content becomes the prompt. Use for self-contained research or analysis tasks.
- `agent` — which subagent to use with `context: fork` (Explore, Plan, or a custom agent name).
- `argument-hint` — autocomplete hint shown in the / menu (e.g., [issue-number]).
- `paths` — glob patterns to auto-activate only when working with matching files.

Complete field reference with defaults: Read `refs/skills.md`, section "Frontmatter reference".
Skills support runtime substitution:

- `$ARGUMENTS` / `$0`, `$1` — arguments passed when invoking. `/my-skill hello` → `$ARGUMENTS` = `hello`.
- `${CLAUDE_SKILL_DIR}` — path to the skill's directory. Use in hook commands.
- `` !`command` `` — shell preprocessing. Runs BEFORE Claude sees the skill. Output replaces the placeholder.

```markdown
---
name: pr-review
context: fork
agent: Explore
---

## PR Context
- Diff: !`gh pr diff`
- Comments: !`gh pr view --comments`

Summarize this pull request...
```

Full substitution reference: Read `refs/skills.md`, sections "Available string substitutions" and "Inject dynamic context".
Keep SKILL.md under 500 lines. Move detailed reference to supporting files and reference them from SKILL.md:
For complete API details, see [reference.md](reference.md)
For usage examples, see [examples.md](examples.md)
Claude loads supporting files on demand — they don't cost context until needed.
File organization patterns: Read `refs/skills.md`, section "Add supporting files".
Skills can declare hooks that fire during the skill's lifecycle. Hooks are pre/post-execution checks — they enforce rules programmatically, not through instructions.
```yaml
---
name: safe-editor
hooks:
  PreToolUse:
    - matcher: "Edit"
      hooks:
        - type: command
          command: "bash ${CLAUDE_SKILL_DIR}/scripts/check-allowed.sh"
          statusMessage: "Checking edit permissions..."
---
```
Hook scripts read JSON from stdin and return JSON to stdout:

- `{}` — allow
- `{"permissionDecision": "ask", "message": "..."}` — warn (user overrides)
- `{"permissionDecision": "deny", "message": "..."}` — block

Rule of thumb: If something MUST happen → use a hook. If it's guidance → put it in skill content.
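A minimal sketch of such a hook script, assuming a simplified stdin payload (the real event schema is documented in refs/hooks.md; the `.env` substring match here is deliberately crude):

```shell
# Sketch of a PreToolUse hook script; the stdin payload below is a
# simplified assumption, not the real event schema.
cat > check-allowed.sh <<'EOF'
#!/bin/sh
input=$(cat)                  # hook input arrives as JSON on stdin
case "$input" in
  *'.env'*)                   # crude match: block anything touching .env files
    printf '{"permissionDecision": "deny", "message": "Editing .env is not allowed"}\n' ;;
  *)
    printf '{}\n' ;;          # empty object means allow
esac
EOF

# Try it with a sample tool-use payload:
echo '{"tool_input": {"file_path": "config/.env"}}' | sh check-allowed.sh
# prints: {"permissionDecision": "deny", "message": "Editing .env is not allowed"}
```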
Full hook event list and schemas: Read `refs/hooks.md`. Practical hook patterns: Read `refs/hooks-guide.md`.
Skills can delegate work to subagents — isolated contexts with their own tools and instructions. Create agents as markdown files in .claude/agents/:
```markdown
---
name: code-reviewer
description: Review code for quality issues
model: sonnet
tools: Read, Grep, Glob
---

You are a code reviewer. Analyze the code and report issues...
```
Built-in agents: Explore (read-only research), Plan (analysis without changes), general-purpose (full tools).
Agent configuration and patterns: Read `refs/sub-agents.md`.
Skills work alongside CLAUDE.md, the persistent instruction system. Don't duplicate what's in CLAUDE.md; read it instead.
If your skill needs project config that's not in CLAUDE.md, ask the user with AskUserQuestion and suggest they persist it to CLAUDE.md.
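One possible way to persist a confirmed answer, assuming a `## Project config` section convention that is illustrative (your CLAUDE.md layout and the setting shown are hypothetical):

```shell
# Sketch: append a confirmed setting to CLAUDE.md
# (the section name and the pnpm setting are illustrative, not a convention
#  this skill mandates)
grep -q '^## Project config' CLAUDE.md 2>/dev/null || printf '\n## Project config\n' >> CLAUDE.md
printf -- '- Package manager: pnpm\n' >> CLAUDE.md
```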
CLAUDE.md hierarchy and import syntax: Read `refs/memory.md`.
Skills grant tool access via allowed-tools. This works within the user's permission system — deny rules still override. Permission precedence: deny > ask > allow.
Permission rules and tool-specific syntax: Read `refs/permissions.md`.
Claude reads descriptions to decide which skills to load. Optimize for discovery.
Description = WHEN to use, NOT WHAT it does. Testing showed that workflow summaries in descriptions cause Claude to skip the full skill body — it treats the description as a shortcut.
```yaml
# BAD: workflow summary — Claude shortcuts the full skill
description: Dispatches subagent per task with code review between tasks

# GOOD: triggering conditions only
description: Use when executing implementation plans with independent tasks
```
Keywords: Use error messages, symptoms, synonyms, tool names.
Token budget: Skill descriptions share ~16K chars of context. Many skills = some get excluded. Keep descriptions concise.
Iron Law: No skill without a failing test first.
Same discipline as TDD for code. Applies to new skills AND edits.
Full testing methodology: See testing-skills-with-subagents.md — pressure types, plugging holes, meta-testing.
Skills that enforce rules (like TDD, verification) need to resist rationalization. Agents are smart — they find loopholes under pressure.
Close every loophole explicitly: Don't just state the rule — forbid specific workarounds.
Build a rationalization table from baseline testing — every excuse goes in:
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
Create a red flags list for self-checking:
Persuasion principles: See persuasion-principles.md — research-backed techniques for discipline enforcement.
RED Phase:
GREEN Phase:
REFACTOR Phase:
Best practices:
Read `refs/best-practices.md` for context management, verification patterns, and workflow structure.