From rkstack

Execute implementation plans task-by-task with review checkpoints. Use when you have a written implementation plan to execute in a separate session. Batch execution with checkpoints for review.

Install:

```shell
npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack
```

This skill is limited to using the following tools:
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```shell
# === RKstack Preamble (executing-plans) ===
# Read the detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
Use the detection cache and preamble output to adapt your behavior:
- `detection.flowType` (web or default). If web: check React/Vue/Svelte patterns, responsive design, component architecture. If default: CLI tools, MCP servers, backend scripts.
- Prefer `just` commands instead of raw shell.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:
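As a sketch of how a script might read one of these keys, the snippet below pulls `detection.flowType` from the cache. The exact shape of `.rkstack/settings.json` is an assumption here, not a documented schema, and `jq` availability is also assumed:

```shell
# Read detection.flowType from the cache, defaulting to "default" when the
# cache file or jq is missing. The cache shape is assumed, not documented.
FLOW_TYPE="default"
if [ -f .rkstack/settings.json ] && command -v jq >/dev/null 2>&1; then
  FLOW_TYPE=$(jq -r '.detection.flowType // "default"' .rkstack/settings.json)
fi
echo "flowType: $FLOW_TYPE"
```

Falling back to "default" keeps the skill usable even when the session-start detection never ran.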
1. Re-ground: state the project, the current branch (the `_BRANCH` value from the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
2. RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see the Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers the happy path but skips some edges, 3 = shortcut that defers significant work.
3. Options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
When completing a skill workflow, report status using one of: DONE, DONE_WITH_CONCERNS, BLOCKED, or NEEDS_CONTEXT.
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
Execute implementation plans task-by-task in the current session. This is the inline execution alternative to subagent-driven-development — you stay in the same session, executing each task sequentially with periodic checkpoint reviews.
Announce at start: "I'm using the executing-plans skill to implement this plan."
When to use this skill:

- You have a written implementation plan and want to execute it inline, in the current session, with periodic checkpoint reviews.

When to prefer subagent-driven-development instead:

- You want each task executed by a fresh subagent in its own context window, rather than sequentially in this session.
Plan format: `### Task N:` headers, each with `- [ ]` checkbox steps. Create a TodoWrite entry for every task, all marked `pending`. Announce: "Plan loaded: N tasks, M total steps. [Any concerns or ready to start.]"
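For illustration, a minimal plan in that format might look like this (the task name and steps are hypothetical):

```markdown
### Task 1: Add config loader

- [ ] Create src/config.ts with a loadConfig() function
- [ ] Add test/config.test.ts covering missing-file and happy paths
- [ ] Run the test suite and confirm both tests pass
```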
For each task in order:
Mark the task `in_progress` in TodoWrite, then execute its steps in order.

After all steps in a task pass verification:
```shell
# Stage only the files this task touched
git add <specific files from this task>
git commit -m "<commit message from plan, or descriptive message>"
```
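The verify-then-commit rule could be wrapped in a small helper, sketched below. `commit_task` is a hypothetical name; the verify command and file list would come from the plan, not be guessed:

```shell
# Hypothetical helper: commit a task's files only if its verification passes.
# Usage: commit_task "<verify command>" "<commit message>" <file...>
commit_task() {
  verify_cmd=$1; msg=$2; shift 2
  if sh -c "$verify_cmd"; then
    # Stage only this task's files, never a blanket "git add ."
    git add "$@" && git commit -m "$msg"
  else
    echo "verification failed; not committing" >&2
    return 1
  fi
}
```

On a failing verify command the helper returns 1 without touching git, which matches the rule that a task is only committed after its steps pass.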
Mark the task `completed` in TodoWrite.

After every 3 completed tasks (or at natural boundaries like "all infrastructure tasks done"), pause execution and run a checkpoint:
Summarize progress: which tasks are complete and what was built.

Run the full test suite:

```shell
# Use the test command from CLAUDE.md or the plan header
```

Report: all passing, or list failures.

Check for drift: compare what was built against what the plan specifies.

Ask the user: use AskUserQuestion with the standard format:
```
Re-ground: Project X, branch Y. Executing plan Z — completed N of M tasks.
Status: [summary of what was built and test results]
RECOMMENDATION: Choose A to continue — on track, tests passing.
A) Continue execution — next batch is Tasks N+1 through N+3
B) Adjust plan — something needs changing before continuing
C) Stop here — commit what we have and pause
```
When a step fails (test doesn't pass, build breaks, command errors):
Attempt 1: Re-read the plan step. Check for typos, wrong file paths, import issues. Fix the obvious problem and retry.
Attempt 2: Look at the error more carefully. Check surrounding code for context. Try a different approach that still satisfies the plan's intent.
Attempt 3: Broader investigation — check if an earlier task left something in a broken state, check if the plan's assumption was wrong.
After 3 failed attempts:
Mark the task `blocked` in TodoWrite.

Escalation format:
```
STATUS: BLOCKED
TASK: Task N — [name]
ATTEMPTED:
1. [what you tried first]
2. [what you tried second]
3. [what you tried third]
ERROR: [the actual error message]
RECOMMENDATION: [what the user should investigate or decide]
```
Never silently skip a failed step. Every failure must be reported.
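The three-attempt rule above can be sketched as a bounded retry loop. `run_plan_step` is a hypothetical stand-in for executing one plan step; here it always fails, so the sketch demonstrates the escalation path:

```shell
# Bounded retries: three attempts, then report BLOCKED instead of looping.
run_plan_step() { false; }  # hypothetical stand-in; always fails here

attempt=1
while [ "$attempt" -le 3 ]; do
  if run_plan_step; then
    echo "step passed on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed" >&2
  attempt=$((attempt + 1))
done
if [ "$attempt" -gt 3 ]; then
  echo "STATUS: BLOCKED"
fi
```

The explicit upper bound is the point: the loop can never retry silently forever, so a persistent failure always surfaces as a BLOCKED report.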
After all tasks are executed (or all non-blocked tasks are done):
Run full verification:
```shell
# Full test suite
# Build (if applicable)
# Lint (if applicable)
```
Use commands from CLAUDE.md or the plan header. Do not guess commands.
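One way to run those checks is a small dispatcher, sketched below. `TEST_CMD`, `BUILD_CMD`, and `LINT_CMD` are hypothetical placeholders; the real commands must come from CLAUDE.md or the plan header, never be guessed:

```shell
# Run a named verification command; skip it when none is configured.
run_step() {
  name=$1; cmd=$2
  if [ -z "$cmd" ]; then
    echo "$name: skipped (no command configured)"
    return 0
  fi
  if sh -c "$cmd"; then
    echo "$name: OK"
  else
    echo "$name: FAILED"
    return 1
  fi
}

run_step "tests" "${TEST_CMD:-}"
run_step "build" "${BUILD_CMD:-}"
run_step "lint"  "${LINT_CMD:-}"
```

Printing an explicit "skipped" line keeps the final report honest: a check that never ran is distinguishable from one that passed.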
Verify with evidence: Apply the verification-before-completion discipline — every claim must be backed by a command you ran and output you read in this session. No completion claims without fresh evidence.
Report final status:
Suggest next step:
```
Plan execution complete. Status: [DONE/DONE_WITH_CONCERNS/BLOCKED].
Suggested next steps:
- Use requesting-code-review for a thorough review of the changes
- Use finishing-a-development-branch to prepare for merge
```
If flowType is web (from detection cache):
After implementing a UI task, before marking it done, run visual verification:
1. Discover the dev server (see `skills/browse/dev-server-discovery.md`).
2. `$RKSTACK_BROWSE goto <url-of-affected-page>`
3. `$RKSTACK_BROWSE snapshot -i -a` — verify interactive elements render correctly.
4. `$RKSTACK_BROWSE console` — check for JavaScript errors. Console errors block task completion, same as failing tests.
5. `$RKSTACK_BROWSE network --failed` — check for failed network requests.

If the plan step includes a design reference, compare the implementation screenshot against it. Note any discrepancies.
If the dev server is not running, prompt: "Start the dev server to verify visually. The command should be in CLAUDE.md. Without it, I can only verify via tests."
If HAS_SUPABASE=yes and the task involves data mutations, verify via Supabase MCP that the data was written correctly. This catches cases where the UI shows success but the backend silently failed.
If flowType is not web, skip this section entirely.