Use when executing implementation plans with independent tasks in the current session
From clavain. Install: `npx claudepluginhub mistakeknot/interagency-marketplace --plugin clavain`. This skill uses the workspace's default tool permissions.
Files: SKILL-compact.md, code-quality-reviewer-prompt.md, implementer-prompt.md, spec-reviewer-prompt.md
Execute the plan by dispatching a fresh subagent per task, with a two-stage review after each task: spec compliance first, then code quality.
Core principle: Fresh subagent per task + two-stage review (spec → quality) = high quality, fast iteration.
Decision flow:
- No implementation plan yet → brainstorm/plan manually first.
- Tasks tightly coupled (not mostly independent) → execute manually.
- Leaving this session → use executing-plans instead.
- Otherwise → use this skill.
vs. executing-plans: Same session (no context switch), fresh subagent per task (no context pollution), two-stage review per task, faster iteration (no human-in-loop between tasks).
Per-task loop:
a. Dispatch implementer (./implementer-prompt.md) with full task text + scene-setting context
b. If subagent asks questions → answer completely, then re-dispatch
c. Implementer implements, tests, commits, self-reviews
d. Dispatch spec reviewer (./spec-reviewer-prompt.md) — confirms code matches spec (no over/under-building)
e. If spec issues → implementer fixes → spec reviewer re-reviews (repeat until ✅)
f. Dispatch code quality reviewer (./code-quality-reviewer-prompt.md) — get git SHAs first
g. If quality issues → implementer fixes → quality reviewer re-reviews (repeat until ✅)
h. Mark task complete in TodoWrite

Prompt files:
./implementer-prompt.md — implementer subagent
./spec-reviewer-prompt.md — spec compliance reviewer
./code-quality-reviewer-prompt.md — code quality reviewer

Example:

Task 1: Hook installation script
[Dispatch implementer with full task text + context]
Implementer: "Should hook be installed at user or system level?"
You: "User level (~/.claude/hooks/)"
Implementer: implemented install-hook, 5/5 tests, added --force flag, committed
[Dispatch spec reviewer]
Spec reviewer: ✅ All requirements met, nothing extra
[Get git SHAs, dispatch code quality reviewer]
Code reviewer: ✅ Approved
[Mark Task 1 complete]
Task 2: Recovery modes
[Dispatch implementer]
Implementer: added verify/repair modes, 8/8 tests, committed
[Dispatch spec reviewer]
Spec reviewer: ❌ Missing progress reporting (spec: "every 100 items"); extra: --json flag not requested
[Implementer fixes: removed --json, added progress reporting]
Spec reviewer: ✅ Compliant
[Dispatch code quality reviewer]
Code reviewer: ⚠️ Magic number (100) — extract constant
[Implementer: extracted PROGRESS_INTERVAL]
Code reviewer: ✅ Approved
[Mark Task 2 complete]
[After all tasks: dispatch final code reviewer]
Final reviewer: All requirements met, ready to merge
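The magic-number fix from Task 2 is the standard pattern: the spec's "every 100 items" becomes a named constant used in one place. A hypothetical sketch (the function body is illustrative, not the real implementation):

```python
# Hypothetical sketch of the Task 2 fix: the spec's "every 100 items"
# is a named constant instead of a magic number in the loop.
PROGRESS_INTERVAL = 100

def repair(items, report=print):
    repaired = 0
    for i, item in enumerate(items, start=1):
        repaired += 1  # stand-in for the real verify/repair work
        if i % PROGRESS_INTERVAL == 0:
            report(f"repaired {i}/{len(items)} items")
    return repaired
```

If the spec's interval changes, only `PROGRESS_INTERVAL` moves, which is exactly what the quality reviewer was after.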
Quality gates: self-review → spec compliance (prevents over/under-building) → code quality → final review. Issues caught before handoff.
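The gate sequence above can be sketched as controller logic. Everything here is an assumption: `dispatch(role, **kw)` stands in for however the session spawns a subagent and returns its structured result; it is not a real Claude Code API.

```python
# Minimal sketch of the per-task loop (steps a-h); assumptions, not a
# real API. `dispatch` stands in for the session's subagent call.

def run_task(task_text, context, dispatch):
    # a-c: fresh implementer gets the full task text + scene-setting context
    dispatch("implementer", task=task_text, context=context)
    # d-e: spec compliance first; implementer fixes until the reviewer approves
    while not dispatch("spec-reviewer", task=task_text)["approved"]:
        dispatch("implementer", task=task_text, fix="spec issues")
    # f-g: then code quality, with the same fix-and-re-review loop
    while not dispatch("code-quality-reviewer", task=task_text)["approved"]:
        dispatch("implementer", task=task_text, fix="quality issues")
    return "complete"  # h: the caller records completion in TodoWrite
```

The point of the shape: the quality reviewer never runs until the spec reviewer has approved, so quality feedback is never spent on code that will be rewritten for spec reasons.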
Efficiency: Controller extracts all tasks upfront; subagent gets complete info (no plan file reading); questions surfaced before work begins.
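"Extracts all tasks upfront" means the controller turns the plan into self-contained packets before any dispatch, so no subagent ever reads the plan file. A hypothetical sketch; the `### Task` heading convention is an assumption, not a fixed plan format:

```python
# Hypothetical sketch: split a plan into self-contained task packets so
# subagents get complete info up front. The "### Task " heading
# convention is an assumption about the plan file, not a requirement.

def extract_tasks(plan_text, shared_context):
    packets = []
    for chunk in plan_text.split("### Task ")[1:]:
        title, _, body = chunk.partition("\n")
        packets.append({
            "title": title.strip(),
            "task_text": body.strip(),   # full task text, verbatim
            "context": shared_context,   # scene-setting context for every task
        })
    return packets
```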
vs. manual: Fresh context per task, no context pollution, subagent follows TDD naturally.
Cost note: More invocations (implementer + 2 reviewers per task + review loops), but catches issues early.
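A back-of-envelope count makes the cost note concrete, under the assumption that each fix loop adds one implementer fix plus one re-review:

```python
# Back-of-envelope invocation count (an assumption-laden estimate, not a
# benchmark): base cost per task is 1 implementer + 2 reviewers; each
# fix loop adds 1 implementer fix + 1 re-review.
def invocations(tasks, fix_loops_per_task=1):
    base = 3  # implementer + spec reviewer + code quality reviewer
    return tasks * (base + 2 * fix_loops_per_task)
```

Under these assumptions, a five-task plan averaging one fix loop per task costs about 25 dispatches, versus five if each task were a single unreviewed dispatch.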
If subagent fails task: dispatch fix subagent with specific instructions — don't fix manually (context pollution).
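"Specific instructions" for a fix subagent might be assembled like this; the prompt shape and helper name are hypothetical, not part of the skill:

```python
# Hypothetical sketch: build a focused fix prompt instead of fixing the
# code manually, so the controller's own context stays clean.
def fix_prompt(task_title, failure_summary, instructions):
    return (
        f"Task: {task_title}\n"
        f"A previous attempt failed: {failure_summary}\n"
        "Do exactly the following, then re-run tests and commit:\n"
        + "\n".join(f"- {step}" for step in instructions)
    )
```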