Plan and execute feature implementation with TDD and continuous quality checks. Use when asked to "implement this", "build this feature", "execute the plan", or after /arc:ideate has created a design doc. Creates implementation plan if needed, then executes task-by-task with build agents.
Executes TDD implementation plans using specialized build agents for coding, testing, and quality checks.
/plugin marketplace add howells/arc
/plugin install arc@howells

This skill inherits all available tools. When active, it can use any tool Claude has access to.
<tool_restrictions>
EnterPlanMode — BANNED. Do NOT call this tool. This skill has its own structured process — planning (via the detail skill) and execution (via build agents). Claude's built-in plan mode would bypass this entire orchestration. Follow the phases below instead.

ExitPlanMode — BANNED. You are never in plan mode. There is nothing to exit.

If you feel the urge to "plan before acting" — that urge is satisfied by following the <process> phases below. Phase 0 creates the plan via the detail skill. Phases 1-7 execute it. Execute them directly.
</tool_restrictions>
<required_reading> Read these reference files NOW:
<build_agents>
Available build agents in ${CLAUDE_PLUGIN_ROOT}/agents/build/:
| Agent | Model | Use For |
|---|---|---|
| implementer | opus | General task execution — utilities, services, APIs, business logic |
| fixer | haiku | TypeScript errors, lint issues — fast mechanical fixes |
| debugger | sonnet | Failing tests — systematic root cause analysis |
| unit-test-writer | sonnet | Unit tests (vitest) — pure functions, components |
| integration-test-writer | sonnet | Integration tests (vitest + MSW) — API, auth |
| e2e-test-writer | opus | E2E tests (Playwright) — user journeys |
| ui-builder | opus | UI components from design spec — anti-slop, memorable |
| design-specifier | opus | Design decisions when no spec exists — empty states, visual direction |
| figma-builder | opus | Build UI directly from Figma URL |
| test-runner | haiku | Run vitest, analyze failures |
| e2e-runner | opus | Playwright tests — iterate until green or report blockers |
| spec-reviewer | sonnet | Spec compliance check — nothing missing, nothing extra |
| code-reviewer | haiku | Quick code quality gate — no any, proper error handling, tests exist |
| plan-completion-reviewer | sonnet | Whole-plan gate — all tasks built, nothing skipped, no scope creep |
Before spawning a build agent:
${CLAUDE_PLUGIN_ROOT}/agents/build/[agent-name].md

Spawn syntax:
Task [agent-name] model: [model]: "[task description with context]"
</build_agents>
<rules_context> Check for project coding rules:
Use Glob tool: .ruler/*.md
If .ruler/ exists, detect stack and read relevant rules:
| Check | Read from .ruler/ |
|---|---|
| Always | code-style.md |
| next.config.* exists | nextjs.md |
| react in package.json | react.md |
| tailwindcss in package.json | tailwind.md |
| .ts or .tsx files | typescript.md |
| vitest or jest in package.json | testing.md |
| Always | error-handling.md |
| Always | security.md |
| drizzle or prisma in package.json | database.md |
| wrangler.toml exists | cloudflare-workers.md |
| ai in package.json | ai-sdk.md |
| @clerk/nextjs or @workos-inc/authkit-nextjs in package.json | auth.md |
These rules define MUST/SHOULD/NEVER constraints. Follow them during implementation.
If .ruler/ doesn't exist:
No coding rules found. Run /arc:rules to set up standards, or continue without rules.
Rules are optional — proceed without them if the user prefers.
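The detection table above can be sketched as a small shell helper. This is a sketch only: the rule filenames follow the table, and the package.json checks are naive substring matches, not real dependency parsing.

```shell
# Sketch of the stack-detection table above. Naive substring checks on
# package.json stand in for real dependency parsing; the remaining table
# rows (database, ai, auth) follow the same pattern.
detect_rules() {
  dir="${1:-.}"
  rules="code-style.md error-handling.md security.md"   # the three "Always" rows
  [ -n "$(ls "$dir"/next.config.* 2>/dev/null)" ]            && rules="$rules nextjs.md"
  grep -q '"react"' "$dir/package.json" 2>/dev/null          && rules="$rules react.md"
  grep -q '"tailwindcss"' "$dir/package.json" 2>/dev/null    && rules="$rules tailwind.md"
  [ -n "$(ls "$dir"/*.ts "$dir"/*.tsx 2>/dev/null)" ]        && rules="$rules typescript.md"
  grep -Eq '"vitest"|"jest"' "$dir/package.json" 2>/dev/null && rules="$rules testing.md"
  [ -f "$dir/wrangler.toml" ]                                && rules="$rules cloudflare-workers.md"
  echo "$rules"
}
```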
For UI/frontend work, also load interface rules:
| Check | Read from ${CLAUDE_PLUGIN_ROOT}/rules/interface/ |
|---|---|
| Building components/pages | design.md, colors.md, spacing.md, layout.md |
| Typography changes | typography.md |
| Adding animations | animation.md, performance.md |
| Form work | forms.md, interactions.md |
| Interactive elements | interactions.md |
| Marketing pages | marketing.md |
Additional references (load as needed):
- ${CLAUDE_PLUGIN_ROOT}/references/component-design.md — React component patterns
- ${CLAUDE_PLUGIN_ROOT}/references/animation-patterns.md — Motion design
- ${CLAUDE_PLUGIN_ROOT}/references/nextjs-app-router.md — Next.js App Router patterns (if using Next.js)
- ${CLAUDE_PLUGIN_ROOT}/references/tanstack-query-trpc.md — TanStack Query + tRPC patterns (if data fetching)
- ${CLAUDE_PLUGIN_ROOT}/references/tanstack-table.md — TanStack Table v8 patterns (if data tables)
- ${CLAUDE_PLUGIN_ROOT}/references/ai-sdk.md — AI SDK 6 patterns (if ai in package.json)
</rules_context>

You are here in the arc:
/arc:ideate → Design doc (on main) ✓
↓
/arc:implement → Plan + Execute ← YOU ARE HERE
↓
/arc:review → Review (optional, can run anytime)
Check for existing implementation plan:
ls docs/plans/*-implementation.md 2>/dev/null | tail -1
If plan exists: Skip to Phase 1.
If no plan exists: Follow the detail skill to create one:
Read: ${CLAUDE_PLUGIN_ROOT}/skills/detail/SKILL.md
The detail skill will create the implementation plan.

After the plan is created, strongly recommend review:
"Implementation plan ready.
I strongly recommend reviewing the plan before building — it's much cheaper to
catch issues now than after writing code.
1. Review first (/arc:review) — recommended
2. Skip review and start implementing"
If review requested → invoke /arc:review, then return here.
If not already in worktree:
# Check current location
git branch --show-current
# If on main/dev, create worktree
git worktree add .worktrees/<feature-name> -b feature/<feature-name>
cd .worktrees/<feature-name>
Install dependencies:
pnpm install # or yarn/npm based on lockfile
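The lockfile comment above can be made concrete with a small helper. A sketch; pnpm is assumed as the default because it is the package manager used elsewhere in this skill.

```shell
# Sketch: pick the package manager from the lockfile so install and
# test commands use the right tool. pnpm is the assumed default.
detect_pm() {
  dir="${1:-.}"
  if   [ -f "$dir/pnpm-lock.yaml" ];    then echo pnpm
  elif [ -f "$dir/yarn.lock" ];         then echo yarn
  elif [ -f "$dir/package-lock.json" ]; then echo npm
  else echo pnpm
  fi
}
```

Then `"$(detect_pm) install"` resolves to the right install command for the repo.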
Verify test infrastructure exists:
# Check for test runner in package.json
grep -E '"vitest"|"jest"|"playwright"' package.json
If no test runner → stop and ask user. Cannot proceed with TDD without a runner.
Verify clean baseline:
pnpm test # or relevant test command
If tests fail before you start → stop and ask user.
Read implementation plan (created in Phase 0 or pre-existing):
docs/plans/YYYY-MM-DD-<topic>-implementation.md
Create TodoWrite tasks:
One todo per task in the plan. Mark first as in_progress.
Before implementation, identify test needs:
## Test Coverage Plan
### Unit Tests (per task)
| Task | Test File | What to Test |
|------|-----------|--------------|
| Task 1: Create utility | src/utils/x.test.ts | Input/output, edge cases |
| Task 2: Create component | src/components/x.test.tsx | Rendering, props |
### Integration Tests (per feature)
| Feature | Test File | What to Test |
|---------|-----------|--------------|
| Signup form | src/features/auth/signup.integration.test.ts | Form + API + validation |
### E2E Tests (critical flows only)
| Flow | Test File | What to Test |
|------|-----------|--------------|
| User signup → dashboard | tests/signup.spec.ts | Full journey |
Determine auth testing needs (e.g. Clerk/WorkOS mocks for integration tests).
This plan guides which test agent to spawn for each task.
Default batch size: 3 tasks
Per-task loop:
┌─────────────────────────────────────────────────────────┐
│ 1. CLASSIFY → what type of task? what test level? │
│ 2. TEST → spawn test agent (unit/integration/e2e) │
│ 3. BUILD → implementer / ui-builder / specialized │
│ 4. TDD → run test (fail→impl→pass) │
│ 5. FIX → fixer (TS/lint cleanup) │
│ 6. SPEC → spec-reviewer (matches spec?) │
│ ↳ issues? → fix → re-review │
│ 7. QUALITY → code-reviewer (well-built?) │
│ ↳ issues? → fix → re-review │
│ 8. COMMIT → atomic commit, mark complete │
└─────────────────────────────────────────────────────────┘
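The boxed loop above can be sketched as a pipeline where each stage must succeed before the next runs. The stage helpers below are placeholders for the corresponding agent spawns, not real commands.

```shell
# Skeleton of the per-task loop above. Each helper is a placeholder for
# the corresponding agent spawn; a failing stage stops the task there.
run_task() {
  task="$1"
  classify "$task"       && \
  write_tests "$task"    && \
  build "$task"          && \
  verify_tdd "$task"     && \
  fix_lint "$task"       && \
  review_spec "$task"    && \
  review_quality "$task" && \
  commit_task "$task"
}
```

The && chain encodes the gate behavior: if spec review fails, quality review and commit never run.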
For each task:
Update TodoWrite.
Determine which build agent(s) may be needed:
| Task Type | Primary Agent | When to Use |
|---|---|---|
| General implementation | implementer | Utilities, services, APIs, business logic |
| Write unit tests | unit-test-writer | Pure functions, components, hooks |
| Write integration tests | integration-test-writer | API mocking, auth states |
| Write E2E tests | e2e-test-writer | User journeys, Playwright |
| Build UI from spec | ui-builder | UI components with existing design direction |
| Build UI from Figma | figma-builder | Figma URL provided |
| Design decisions needed | design-specifier | No spec exists (empty states, visual direction) |
| Fix TS/lint errors | fixer | Mechanical cleanup |
| Debug failing tests | debugger | Test failures |
| Run E2E tests | e2e-runner | Playwright test suites |
| Verify spec compliance | spec-reviewer | After implementation, before code quality |
Agent selection flow:
Determine test type based on task:
| Task Type | Test Agent | Framework |
|---|---|---|
| Pure function/utility | unit-test-writer | vitest |
| Component with props | unit-test-writer | vitest + testing-library |
| Component + API/state | integration-test-writer | vitest + MSW |
| Auth-related feature | integration-test-writer | vitest + Clerk/WorkOS mocks |
| User flow/journey | e2e-test-writer | Playwright |
Spawn appropriate test writer:
For unit tests:
Task [unit-test-writer] model: sonnet: "Write unit tests for [function/component].
Behavior to test:
- [expected behavior from plan]
- [edge cases]
- [error cases]
File to create: [path/to/module.test.ts]
Follow vitest patterns from testing-patterns.md"
For integration tests (API/auth):
Task [integration-test-writer] model: sonnet: "Write integration tests for [feature].
Behavior to test:
- [component + API interaction]
- [auth states: loading, signed in, signed out]
- [error handling]
Auth: [Clerk/WorkOS/none]
API endpoints to mock: [list]
File to create: [path/to/feature.integration.test.ts]"
For E2E tests (critical flows):
Task [e2e-test-writer] model: opus: "Write E2E tests for [user journey].
Flow to test:
- [step 1]
- [step 2]
- [expected outcome]
Auth setup: [Clerk/WorkOS/none]
File to create: [tests/feature.spec.ts]"
1. Tests written (from Step 3)
2. Run test → verify FAIL
3. Write implementation (copy from plan, adapt as needed)
4. Run test → verify PASS
5. Fix TypeScript + lint (spawn fixer if issues)
6. Commit with message from plan
<continuous_quality> After every implementation, before commit:
TypeScript check:
pnpm tsc --noEmit
Biome lint + format:
pnpm biome check --write .
If issues found — spawn fixer:
Task [fixer] model: haiku: "Fix TypeScript and lint errors.
Files with issues: [list files]
Errors: [paste error output]
Project rules: .ruler/typescript.md, .ruler/code-style.md"
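The two checks above, plus the decision to spawn a fixer, can be sketched as a small gate. The pnpm commands in the usage comment are the ones this skill assumes; substitute your project's.

```shell
# Sketch: run each quality check, report which failed, and return
# nonzero if any did, which is the signal to spawn a fixer agent.
quality_gate() {
  failed=0
  for cmd in "$@"; do
    if ! sh -c "$cmd" >/dev/null 2>&1; then
      echo "FAIL: $cmd"
      failed=1
    fi
  done
  return $failed
}

# assumed usage in this skill:
# quality_gate "pnpm tsc --noEmit" "pnpm biome check --write ."
```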
Why continuous: catching type and lint errors after every task keeps each fix small and attributable, instead of facing a wall of errors at the end.
If the test doesn't fail when expected: the test may be asserting nothing, or the behavior may already exist. Inspect the test before writing any implementation.
If the test doesn't pass after implementation — spawn debugger:
Task [debugger] model: sonnet: "Test failing unexpectedly.
Test file: [path]
Test name: [name]
Error: [paste full error]
Implementation file: [path]
Investigate root cause and fix. See ${CLAUDE_PLUGIN_ROOT}/disciplines/systematic-debugging.md"
If debugger can't resolve after one attempt → stop and ask user.
After implementation, spawn spec-reviewer:
Task [spec-reviewer] model: sonnet: "Verify implementation matches spec.
Task spec: [paste task specification]
Files created/modified: [list]
Check: nothing missing, nothing extra."
If spec-reviewer finds issues → fix with implementer/fixer → re-run spec-reviewer. If compliant → proceed to code quality.
After spec compliance passes, spawn code-reviewer:
Task [code-reviewer] model: haiku: "Quick code quality check.
Files: [list of files created/modified]
Check: no any types, error handling, tests exist, style consistent."
If code-reviewer finds issues → fix with fixer → re-run code-reviewer. If approved → commit and mark complete.
git add [files]
git commit -m "feat(scope): [description from plan]"
Update TodoWrite to mark task completed.
After every 3 tasks:
Completed:
- Task 1: [description] ✓
- Task 2: [description] ✓
- Task 3: [description] ✓
Tests passing: [X/X]
Ready for feedback before continuing?
Wait for user confirmation or adjustments.
If the current task is a checkpoint type ([CHECKPOINT:VERIFY], [CHECKPOINT:DECIDE], [CHECKPOINT:ACTION]):
For VERIFY:
For DECIDE:
For ACTION:
vercel whoami)

See ${CLAUDE_PLUGIN_ROOT}/references/checkpoint-patterns.md for full protocol.
Before creating new utility functions or services: Spawn duplicate-detector to check for existing similar functionality:
Task [duplicate-detector] model: sonnet: "Scan for functions similar to what I'm about to create.
New function purpose: [what it does]
Search in: [src/utils/, src/lib/, src/helpers/ or relevant dirs]
Report any semantic duplicates so we can reuse instead of reinvent."
If duplicates found → reuse existing code. Skip creating the new function.
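A cheap first pass before spawning the agent is a name-level grep. This only catches obvious overlaps; semantic duplicates still need the duplicate-detector agent.

```shell
# Sketch: name-level duplicate scan over the given directories.
# Catches obvious overlaps only; semantic duplicates still need
# the duplicate-detector agent.
find_similar() {
  name="$1"; shift
  grep -rnE "function +$name|const +$name *=" "$@" 2>/dev/null
}
```

If `find_similar formatDate src/utils` prints anything, reuse that code instead of creating a new function.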
After completing data/types tasks:
Before starting UI tasks:
If design spec exists — spawn ui-builder:
Read: ${CLAUDE_PLUGIN_ROOT}/agents/build/ui-builder.md
If no design spec (empty states, undefined visuals) — spawn design-specifier first:
Task [design-specifier] model: opus: "Create design spec for [component].
Context: [what this is for, user's emotional state]
Existing patterns: [what it should feel like]
Project aesthetic: [tone from design doc]
Output actionable spec for ui-builder to implement."
Then spawn ui-builder with the design-specifier's output.
If Figma URL provided — spawn figma-builder:
Read: ${CLAUDE_PLUGIN_ROOT}/agents/build/figma-builder.md
Task [figma-builder] model: opus: "Implement from Figma: [URL]"
For ui-builder, spawn:
Task [ui-builder] model: opus: "Build UI components for [feature].
Aesthetic Direction (from design doc):
- Tone: [tone]
- Memorable element: [what stands out]
- Typography: [fonts]
- Color strategy: [approach]
- Motion: [philosophy]
Figma: [URL if available]
Files to create: [list from implementation plan]
Interface rules: ${CLAUDE_PLUGIN_ROOT}/rules/interface/
Project rules: .ruler/react.md, .ruler/tailwind.md
Apply the aesthetic direction to every decision. Make it memorable, not generic."
Fetch Figma context (if available):
mcp__figma__get_design_context: fileKey, nodeId
mcp__figma__get_screenshot: fileKey, nodeId
After completing ALL UI tasks — spawn designer review:
Task [designer] model: opus: "Review the completed UI implementation.
Aesthetic Direction (from design doc):
- Tone: [tone]
- Memorable element: [what stands out]
- Typography: [fonts]
- Color strategy: [approach]
Files: [list of UI component files]
Figma: [URL if available]
Check for:
- Generic AI aesthetics (Inter, purple gradients, cookie-cutter layouts)
- Deviation from aesthetic direction
- Missing memorable moments
- Inconsistent application of design system
- Accessibility concerns
- Missing states (loading, error, empty)"
Address any review findings before proceeding.
When implementing unfamiliar library APIs:
mcp__context7__resolve-library-id: "[library name]"
mcp__context7__get-library-docs: "[library ID]" topic: "[specific feature]"
Use current documentation to ensure correct API usage.
After completing all tasks:
Spawn parallel build agents for speed:
Task [fixer] model: haiku: "Run TypeScript check (tsc --noEmit) and fix any errors. Report results."
Task [fixer] model: haiku: "Run Biome check (biome check --write .) and fix any issues. Report results."
Wait for agents to complete. If issues found, fix before proceeding.
Run test suite:
pnpm test
If tests fail, spawn debugger to investigate.
This is the whole-plan gate. Per-task spec reviews catch issues within tasks — this catches tasks that were skipped, partially implemented, or scope that crept in.
git diff --name-only main...HEAD
Task [plan-completion-reviewer] model: sonnet: "Verify the entire implementation matches the original plan.
ORIGINAL PLAN:
[paste full plan text]
FILES CHANGED:
[paste git diff file list]
TEST RESULTS:
[paste test summary — N passing, N failing]
Read each file referenced in the plan. Verify every task was implemented substantively.
Check for skipped tasks, partial implementations, and scope creep.
See ${CLAUDE_PLUGIN_ROOT}/agents/build/plan-completion-reviewer.md"
If plan-completion-reviewer finds issues:
Do NOT proceed to Phase 6 until plan-completion-reviewer passes.
If e2e tests were created as part of this implementation:
Spawn e2e-runner agent:
Task [e2e-runner] model: opus: "Run E2E tests for the feature we just implemented.
Test files: [list e2e test files]
Feature: [brief description]
Run tests, fix any failures, and iterate until all pass or report blockers.
See ${CLAUDE_PLUGIN_ROOT}/agents/build/e2e-runner.md for protocol."
Why a separate agent? E2E runs are long and iterative; isolating them lets the agent iterate until green without flooding the main context.
Wait for agent to complete. Review its summary of fixes applied.
For significant features, offer parallel review:
"Feature complete. Run expert review before PR?"
If yes, spawn review agents in parallel (all use sonnet):
Task [simplicity-engineer] model: sonnet: "Review implementation for unnecessary complexity.
Files: [list of new/modified files]
See ${CLAUDE_PLUGIN_ROOT}/agents/review/simplicity-engineer.md"
Task [architecture-engineer] model: sonnet: "Review implementation for architectural concerns.
Files: [list of new/modified files]
See ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md"
Add a conditional third reviewer based on what was built:
| If the implementation includes... | Also spawn |
|---|---|
| Auth, sessions, API keys, user data | security-engineer |
| Significant UI (components, pages) | senior-engineer |
| Database migrations, data models | data-engineer |
Present findings as Socratic questions (see ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md).
Blockers → fix → re-verify (max 2 cycles). Should-fix → fix if quick, otherwise note as follow-up.
Before shipping, check if documentation may need updating:
docs/**/*.md, docs/**/*.mdx, content/**/*.md

Ensure all tests pass:
pnpm test
pnpm lint
Create PR:
git push -u origin feature/<feature-name>
gh pr create --title "feat: <description>" --body "$(cat <<'EOF'
## Summary
- What was built
- Key decisions
## Testing
- [X] Unit tests added
- [X] E2E tests added (if applicable)
- [X] All tests passing
## Screenshots
[Include if UI changes]
## Design Doc
[Link to design doc]
## Implementation Plan
[Link to implementation plan]
EOF
)"
Report to user:
Cleanup worktree (optional):
cd ..
git worktree remove .worktrees/<feature-name>
Kill orphaned subagent processes:
After spawning multiple build agents, some may not exit cleanly. Run cleanup:
${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphaned-agents.sh
This is especially important after parallel agent runs.
</process><when_to_stop> STOP and ask user when:
Don't guess. Ask. </when_to_stop>
<progress_context>
Use Read tool: docs/progress.md (first 50 lines)
Look for related ideate sessions and any prior implementation attempts. </progress_context>
<progress_append> After completing implementation (or pausing), append to progress journal:
## YYYY-MM-DD HH:MM — /arc:implement
**Task:** [Feature name]
**Outcome:** [Complete / In Progress (X/Y tasks) / Blocked]
**Files:** [Key files created/modified]
**Agents spawned:** [list of agents used]
**Decisions:**
- [Key implementation decision]
**Next:** [PR created / Continue tomorrow / Blocked on X]
---
</progress_append>
<arc_log>
After completing this skill, append to the activity log.
See: ${CLAUDE_PLUGIN_ROOT}/references/arc-log.md
Entry: /arc:implement — [Feature name] ([X/Y] tasks complete)
</arc_log>
<success_criteria> Execution is complete when:
<tool_restrictions_reminder>
REMINDER: You must NEVER call EnterPlanMode or ExitPlanMode at any point during this skill — not at the start, not after creating the plan, not before implementation, not at the end. This skill manages its own flow. All output goes directly to the user as normal messages.
</tool_restrictions_reminder>