# RKstack
Use before claiming work is complete, fixed, or passing. Use before committing or creating PRs. Use when about to say "done", "tests pass", or "it works".
Install: `npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack`
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```shell
# === RKstack Preamble (verification-before-completion) ===
# Read detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
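As an illustration, a single field can be pulled out of the cache instead of dumping the whole file. This is a sketch only: it assumes `jq` is installed, and the `.detection.flowType` path is an assumption about the cache schema, not a guaranteed layout.

```shell
# Sketch: read one field from the detection cache with jq.
# The ".detection.flowType" path is an assumed schema, not documented fact.
if command -v jq >/dev/null 2>&1 && [ -f .rkstack/settings.json ]; then
  _FLOW=$(jq -r '.detection.flowType // "default"' .rkstack/settings.json)
else
  _FLOW="default"   # fall back when jq or the cache file is missing
fi
echo "FLOW: $_FLOW"
```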
Use the detection cache and preamble output to adapt your behavior:
- `detection.flowType` (web or default). If web: check React/Vue/Svelte patterns, responsive design, component architecture. If default: CLI tools, MCP servers, backend scripts.
- `just` commands instead of raw shell.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:
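Put together, the cache might look something like this (shape and values are illustrative only, not a guaranteed schema):

```json
{
  "detection": {
    "flowType": "web",
    "stack": ["react", "typescript", "supabase"],
    "stats": { "files": 412, "linesOfCode": 58000, "complexity": "medium" },
    "repoMode": "solo",
    "services": ["supabase"]
  }
}
```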
- Context: the current branch (the `_BRANCH` value from the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
- RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle).
- Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work.
- Options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)
- Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
When completing a skill workflow, report status using one of: DONE, DONE_WITH_CONCERNS, BLOCKED, or NEEDS_CONTEXT.
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
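Filled in, an escalation might read like this (the scenario and details are illustrative):

```
STATUS: BLOCKED
REASON: The integration tests require a database URL that is not configured anywhere I can read.
ATTEMPTED: Checked CLAUDE.md, .env.example, and the CI config; ran the suite twice with defaults, and both runs failed at connection setup.
RECOMMENDATION: Provide the test database URL (or point me at where it is configured), then I will re-run the suite.
```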
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the letter of the rules is violating the spirit of the rules.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this session, you cannot claim it passes. Previous runs, memory of past results, and assumptions do not count. Fresh means NOW.
Every claim must pass through this gate. No exceptions.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH the evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
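The gate can be sketched as a tiny shell check. `TEST_CMD` here is a placeholder, since the real command must come from CLAUDE.md or the user, never a guess:

```shell
# Minimal sketch of the verification gate for one claim ("tests pass").
# TEST_CMD is a placeholder; substitute the project's documented test command.
TEST_CMD="true"   # e.g. whatever CLAUDE.md specifies
if $TEST_CMD; then
  echo "EVIDENCE: '$TEST_CMD' exited 0; the claim may be made"
else
  echo "EVIDENCE: '$TEST_CMD' failed; state the actual status instead"
fi
```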
Where to find the right commands: Read CLAUDE.md for project-specific test, build, and lint commands. If CLAUDE.md does not specify a command, ask the user. Never guess or hardcode framework commands.
Use the Completion Status protocol from the preamble above. Here is how each status maps to verification evidence:
DONE requires: every claim backed by a command you ran AND output you read in THIS session. No claim without a receipt. If you ran tests, show the pass count. If you ran a build, show exit code 0. If you checked requirements, show each one verified.
DONE_WITH_CONCERNS requires: all verifications passed, but you observed warnings, edge cases, deprecation notices, or things that need attention. List each concern with the evidence that surfaced it. Example: "Tests pass (47/47) but 3 deprecation warnings in test output."
BLOCKED requires: you tried 3+ distinct approaches and none worked. Show what you tried and what happened each time. Do not say BLOCKED after one attempt.
NEEDS_CONTEXT requires: you identified a specific piece of information the user must provide before you can verify. State exactly what you need and why you cannot determine it yourself.
Status format:
```
STATUS: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT
EVIDENCE:
- [claim]: [command ran] -> [output summary]
- [claim]: [command ran] -> [output summary]
CONCERNS: (if DONE_WITH_CONCERNS)
- [what you noticed and its impact]
```
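For example, a report in this format might look like the following (commands and counts are illustrative, borrowing the numbers from the DONE_WITH_CONCERNS example above):

```
STATUS: DONE_WITH_CONCERNS
EVIDENCE:
- all tests pass: [test command from CLAUDE.md] -> 47/47 passed, 0 failed
- build succeeds: [build command from CLAUDE.md] -> exit 0
CONCERNS:
- 3 deprecation warnings in the test output
```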
These are NOT equivalent. Each requires its own verification.
| Claim | Requires | NOT Sufficient |
|---|---|---|
| "Tests pass" | Test command output: 0 failures | "Linter clean" |
| "Linter clean" | Linter output: 0 errors | "Tests pass" |
| "Build succeeds" | Build command: exit 0 | "Tests pass" or "linter clean" |
| "Bug fixed" | Test original symptom: passes | "Code changed, assumed fixed" |
| "One test passes" | That specific test output | "All tests pass" |
| "All tests pass" | Full suite output: 0 failures | "One test passes" |
| "Code compiles" | Compiler output: 0 errors | "Code looks correct" |
| "Code works" | Run the code, observe behavior | "Code compiles" |
| "It works now" | Fresh run in THIS session | "It worked before" |
| "Requirements met" | Line-by-line checklist verified | "Tests pass" |
| "Agent completed" | VCS diff shows correct changes | "Agent reports success" |
If you catch yourself thinking or about to say any of these, STOP and run verification first.
| What you're about to say | What to do instead |
|---|---|
| "I just ran this" | Run it again. NOW. |
| "It should work" | Prove it. Run the command. |
| "I'm confident" | Confidence is not evidence. |
| "Looks correct" | Correctness requires proof. |
| "Tests passed earlier" | Earlier is not now. Run again. |
| "Just this once" | No exceptions. Ever. |
| "The linter passed, so..." | Linter is not compiler is not tests. |
| "The agent said it worked" | Verify the agent's claims yourself. |
| "Partial check is enough" | Partial proves nothing about the whole. |
| "I'm tired / let's wrap up" | Exhaustion is not an excuse. |
| "Different words so rule doesn't apply" | Spirit over letter. Always. |
Also stop if you are about to:
Run the full test suite, not just the test you changed.
```
RUN: [full test command from CLAUDE.md]
READ: Count total, passed, failed, skipped
VERIFY: 0 failures, no skipped tests hiding problems
THEN: "All tests pass (X/X)"
```
If the project uses TDD, verify the red-green cycle: test must FAIL without the fix and PASS with it. One direction is not enough.
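A minimal simulation of that cycle, with a stub standing in for the real suite and fix:

```shell
# Simulated red-green check. run_test stands in for the project's test command;
# FIX_APPLIED stands in for whether your code change is present.
run_test() { [ "$FIX_APPLIED" = "yes" ]; }

FIX_APPLIED=no
if run_test; then
  echo "RED check failed: the test passes WITHOUT the fix, so it proves nothing"
else
  echo "red ok: test fails without the fix"
fi

FIX_APPLIED=yes
if run_test; then
  echo "green ok: test passes with the fix"
else
  echo "GREEN check failed: the fix does not make the test pass"
fi
```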
Run the actual build command, not a related but different tool.
```
RUN: [build command from CLAUDE.md]
READ: Exit code, any warnings
VERIFY: Exit 0, no error output
THEN: "Build succeeds (exit 0)"
```
Linting is separate from building, which is separate from testing. Verify each independently.
```
RUN: [lint command from CLAUDE.md]
READ: Error count, warning count
VERIFY: 0 errors (warnings: report as DONE_WITH_CONCERNS if present)
THEN: "Linter clean (0 errors, N warnings)"
```
Re-read the plan or spec. Create a checklist. Verify each item with specific evidence.
```
READ: The plan/spec/requirements document
CREATE: A checklist of every requirement
VERIFY: Each requirement with a specific command or file read
THEN: Report completion with per-requirement evidence
```
Any unchecked item means DONE_WITH_CONCERNS at best.
Never trust an agent's self-report. Verify independently.
```
AFTER: Agent reports success
RUN: git diff to see what actually changed
READ: Every changed file
VERIFY: Changes match what was requested, no unintended modifications
THEN: Report based on YOUR verification, not the agent's claim
```
Before committing or creating a PR, review everything that changed.
```
RUN: git diff [base-branch]...HEAD (or git diff --staged for uncommitted)
READ: Every change in the diff
VERIFY: All changes are intentional. Look for:
- Accidental debug code (console.log, print, debugger)
- Temporary files or test artifacts
- Commented-out code that should be removed
- Unrelated changes that snuck in
- Hardcoded values that should be configurable
THEN: "Diff reviewed: N files changed, all changes intentional"
```
If anything looks wrong, fix it BEFORE claiming completion.
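Part of that scan can be mechanized. This grep over the staged diff flags common debug leftovers; the patterns are illustrative, and a hit means "look at this line", not necessarily a bug:

```shell
# Scan added lines in the staged diff for common debug leftovers.
# grep exit status: 0 = something matched, 1 = nothing matched.
if git diff --staged 2>/dev/null | grep -nE '^\+.*(console\.log|debugger\b|print\()'; then
  echo "review the lines above before committing"
else
  echo "no obvious debug statements in the staged diff"
fi
```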
If flowType is web (from detection cache):
In addition to the test suite, run visual verification:
Quick QA. Invoke the /qa-only skill in quick mode (critical and high severity only). This produces a health score and screenshots. If health score is below 80%, flag it as a concern in the completion report.
Responsive check. Run $RKSTACK_BROWSE responsive on key pages. Verify no layout breakage at mobile (375px), tablet (768px), and desktop (1280px).
Console audit. Run $RKSTACK_BROWSE console --errors across key pages. Any JavaScript errors are flagged.
Dev server required. If the dev server is not running, note it as a limitation: "Visual verification skipped — dev server not running."
Supabase data check. If HAS_SUPABASE=yes, use Supabase MCP tools to verify that any data mutations from this feature are correctly reflected in the database. Check that new tables/columns have appropriate RLS policies.
If flowType is not web, skip this section entirely.
No shortcuts for verification.
Run the command. Read the output. THEN claim the result.
This is non-negotiable.