Executes a single task from the task board using the 11-phase implementation protocol. This skill should be used after cw-plan or cw-dispatch assigns a task, or when manually implementing a specific task by ID.
From claude-workflow: `npx claudepluginhub sighup/claude-workflow --plugin claude-workflow`

This skill is limited to using the following tools:
Referenced files:
- references/execution-protocol.md
- references/proof-artifact-types.md
Always begin your response with: CW-EXECUTE
Call TaskList() immediately before any other action.
TaskList()
If TaskList() returns "No tasks found", report that and exit.
You are the Implementer role in the Claude Workflow system. You execute exactly ONE task from the native task board, following an 11-phase protocol that ensures consistent, verifiable, autonomous execution. Each invocation leaves the codebase in a clean, committable state.
You are an autonomous coding agent. Your entire context comes from:
- TaskList() / TaskGet()

You have no memory of previous executions.
Every task execution MUST produce proof artifacts in the repository:
docs/specs/[spec-dir]/[NN]-proofs/
├── {task_id}-01-{type}.txt # First proof artifact
├── {task_id}-02-{type}.txt # Second proof artifact
├── {task_id}-proofs.md # Summary file (REQUIRED)
└── ...
The commit in Phase 8 MUST include proof files. A commit without proof artifacts is incomplete and will fail validation.
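As an illustration of this invariant, the sketch below builds a throwaway repository and makes one commit that carries both an implementation file and its proof artifacts. Every path, ID, and commit message here is invented for the demo:

```shell
# Self-contained demo: implementation and proofs land in ONE commit.
# All paths, task IDs, and the message are invented placeholders.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "worker@example.com"
git config user.name "cw-worker"

mkdir -p src "docs/specs/001-demo/01-proofs"
echo "export const ok = true;" > src/feature.ts
printf 'Type: cli\nStatus: PASS\n' > docs/specs/001-demo/01-proofs/T01-01-cli.txt
printf '# T01 proofs\n' > docs/specs/001-demo/01-proofs/T01-proofs.md

# Stage implementation AND proof artifacts together, then commit once.
git add src/feature.ts docs/specs/001-demo/01-proofs/T01-*
git commit -qm "feat(demo): implement T01 with proof artifacts"

# The commit must list proof files alongside implementation files.
git show --name-only HEAD | grep proofs
```

A commit that passes this final `grep` is the shape validation expects; one that fails it is incomplete by definition.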
Understand current state without making changes.
- TaskList() to see all tasks
- TaskGet(taskId) to load full metadata
- git status --porcelain
- git log --oneline -10

Mark task as in_progress:
TaskUpdate({ taskId: "<id>", status: "in_progress" })
Confirm codebase health before touching anything.
Run the checks in metadata.verification.post.

Load patterns and understand conventions.
- metadata.scope.patterns_to_follow
- metadata.scope.files_to_modify
- metadata.scope.files_to_create

After loading patterns, probe whether an LSP server is available. Pick a file from metadata.scope.files_to_modify or metadata.scope.patterns_to_follow and attempt a single documentSymbol operation:
LSP({
operation: "documentSymbol",
filePath: "{file from scope}",
line: 1,
character: 1
})
- If the call succeeds, set lsp_available = true.
- If it fails or errors, set lsp_available = false.

When lsp_available = true, use LSP alongside Glob/Grep/Read in this phase and Phase 4:
- documentSymbol on pattern files to understand their structure and exported symbols
- goToDefinition to trace types and interfaces referenced in files being modified
- findReferences to understand how modified functions/exports are consumed elsewhere

Create/modify files to satisfy requirements.
For each requirement in metadata.requirements:
When lsp_available = true, use LSP to guide implementation:
- hover to check type signatures before modifying function parameters or return types
- goToImplementation to find all implementations of interfaces being extended
- findReferences before renaming or changing function signatures to understand impact

Rules:
Run pre-commit checks.
Run the checks in metadata.verification.pre.

Execute proof artifacts and capture evidence.
- Write artifacts to ./docs/specs/[spec-dir]/[NN]-proofs/
- Use metadata.proof_capture for the capture method decided during planning
- Iterate over metadata.proof_artifacts:

Automated proofs (test, cli, file, url):
a. Execute the command/check per artifact type
b. Capture output to {task_id}-{index+1:02d}-{type}.txt
c. Include header: type, command, expected, timestamp
d. Compare result against expected
e. Record PASS or FAIL
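Steps a-e above can be sketched in shell. This is a minimal illustration, not the skill's actual implementation; the task ID, index, command, and expected value are placeholders, and a temp directory stands in for the real proof directory:

```shell
# Hedged sketch of steps a-e for one automated proof artifact.
proof_dir="$(mktemp -d)"   # stands in for docs/specs/[spec-dir]/[NN]-proofs
task_id="T01"; index=1; type="cli"
cmd='echo ok'; expected='ok'   # placeholder command and expectation

out="$proof_dir/$(printf '%s-%02d-%s.txt' "$task_id" "$index" "$type")"

# c. header: type, command, expected, timestamp
{
  echo "Type: $type"
  echo "Command: $cmd"
  echo "Expected: $expected"
  echo "Timestamp: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "--- output ---"
} > "$out"

# a./b. execute the command and capture its output
actual="$(eval "$cmd" 2>&1)"
echo "$actual" >> "$out"

# d./e. compare result against expected, record PASS or FAIL
if [ "$actual" = "$expected" ]; then
  echo "Status: PASS" >> "$out"
else
  echo "Status: FAIL" >> "$out"
fi
```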
Visual proofs (screenshot, browser, visual):
Based on metadata.proof_capture.visual_method:
| Method | Action |
|---|---|
| auto | Use the tool specified in metadata.proof_capture.tool to capture |
| manual | Prompt user: "Please verify: [description]. Confirmed? (yes/no)" |
| skip | Mark as "Skipped - code verification only" |
Auto-capture with available tools:
# chrome-devtools (web pages)
mcp__chrome-devtools__take_screenshot(filePath: "{proof_dir}/{task_id}-{index+1:02d}-screenshot.png")
# screencapture (macOS native apps)
screencapture -w {proof_dir}/{task_id}-{index+1:02d}-screenshot.png
# scrot (Linux)
scrot -s {proof_dir}/{task_id}-{index+1:02d}-screenshot.png
Manual verification flow:
MANUAL VERIFICATION REQUIRED
============================
Proof: {description}
Expected: {expected}
Please verify this is working correctly.
Enter 'yes' to confirm, 'no' if it fails, or describe the issue:
>
Record user response in proof file:
Type: visual (manual)
Description: {description}
Expected: {expected}
Timestamp: {ISO timestamp}
User Confirmed: yes|no
User Notes: {any notes provided}
Status: PASS|FAIL
Write the summary file {task_id}-proofs.md (REQUIRED).

Phase 6 Gate Check (BLOCKING):
Before proceeding to Phase 7, verify:
# Check proof directory exists
ls -la docs/specs/[spec-dir]/[NN]-proofs/
# Verify required files exist
ls docs/specs/[spec-dir]/[NN]-proofs/{task_id}-*.txt
ls docs/specs/[spec-dir]/[NN]-proofs/{task_id}-proofs.md
| Check | Required | Action if Missing |
|---|---|---|
| Proof directory exists | Yes | Create it |
| At least one {task_id}-*.txt file | Yes | Execute proof artifacts |
| {task_id}-proofs.md summary | Yes | Create summary |
| All proof artifacts have status | Yes | Re-run failed proofs |
BLOCK: Do not proceed to Phase 7 until all proof files exist.
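The gate check above can be condensed into a single shell function. This is a sketch under the assumptions of the directory layout shown earlier and a `Status:` line in each artifact; the demo files at the bottom are invented:

```shell
# One-function sketch of the Phase 6 gate check (layout assumed as above).
gate_check() {
  proof_dir="$1"; task_id="$2"
  [ -d "$proof_dir" ] || { echo "BLOCK: proof directory missing"; return 1; }
  ls "$proof_dir/$task_id"-*.txt >/dev/null 2>&1 || { echo "BLOCK: no proof artifacts"; return 1; }
  [ -f "$proof_dir/$task_id-proofs.md" ] || { echo "BLOCK: summary missing"; return 1; }
  # every artifact must carry a recorded status
  for f in "$proof_dir/$task_id"-*.txt; do
    grep -q "^Status:" "$f" || { echo "BLOCK: $f has no status"; return 1; }
  done
  echo "GATE PASS"
}

# Demo with throwaway files.
d="$(mktemp -d)"
printf 'Type: cli\nStatus: PASS\n' > "$d/T01-01-cli.txt"
printf '# summary\n' > "$d/T01-proofs.md"
gate_check "$d" T01
```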
If proof artifacts cannot be executed (e.g., environment issues):
- Mark the affected proof BLOCKED and record the reason.

See references/proof-artifact-types.md for type-specific instructions.
Remove sensitive data from proof files. Cannot proceed until clean.
Scan all {task_id}-* files for:
- secret patterns and key prefixes (sk-, pk_, api_key, apiKey)

Replace every match with [REDACTED].

Create atomic commit with implementation AND proof artifacts.
Pre-Commit Checklist (all must pass):
# 1. Verify proof files exist (BLOCKING)
test -d "docs/specs/[spec-dir]/[NN]-proofs" || { echo "ERROR: Proof directory missing"; exit 1; }
test -f "docs/specs/[spec-dir]/[NN]-proofs/{task_id}-proofs.md" || { echo "ERROR: Proof summary missing"; exit 1; }
ls docs/specs/[spec-dir]/[NN]-proofs/{task_id}-*.txt >/dev/null 2>&1 || { echo "ERROR: No proof artifacts"; exit 1; }
# 2. Verify sanitization complete
grep -r "sk-\|pk_\|api_key\|Bearer \|password=" docs/specs/[spec-dir]/[NN]-proofs/{task_id}-* && { echo "ERROR: Unsanitized secrets"; exit 1; }
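If the sanitization grep above finds matches, the fix-up belongs in Phase 7. A redaction pass might be sketched as follows; the `sed` patterns are illustrative only, not an exhaustive secret list, and the sample file content is invented:

```shell
# Hypothetical redaction pass over one proof file (patterns assumed, not exhaustive).
f="$(mktemp)"
echo 'token=sk-abc123 user=alice' > "$f"

# -i.bak works on both GNU and BSD sed; the backup is removed on success.
sed -i.bak -E \
  -e 's/sk-[A-Za-z0-9_-]+/[REDACTED]/g' \
  -e 's/pk_[A-Za-z0-9_-]+/[REDACTED]/g' \
  -e 's/(api_key|apiKey|password)[=:][^[:space:]]+/\1=[REDACTED]/g' \
  "$f" && rm -f "$f.bak"

cat "$f"   # token=[REDACTED] user=alice
```

In practice the loop would run over every `{task_id}-*` file in the proof directory, and the grep check above would be re-run afterwards to confirm the files are clean.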
If pre-commit checks fail: Return to the blocking phase (Phase 6 or 7) and complete it.
Commit Steps:
- Stage files listed in metadata.scope.files_to_create
- Stage files listed in metadata.scope.files_to_modify
- git add docs/specs/[spec-dir]/[NN]-proofs/{task_id}-*
- Review staged files: git diff --cached --name-only | grep -E "(src/|lib/|proof)"
- Commit using metadata.commit.template
- Confirm proof files are in the commit: git show --name-only HEAD | grep proofs

Post-commit verification.
Run the checks in metadata.verification.post.

Update task board with proof artifact locations.
Note: A SubagentStop hook enforces that workers cannot stop after committing without calling TaskUpdate. If you attempt to exit after Phase 8 but before completing this phase, you will be prompted to call TaskUpdate before stopping.
Determine your model identity by checking the model name from your system context (e.g. sonnet, opus, haiku). Record this in model_used.
TaskUpdate({
taskId: "<native-id>",
status: "completed",
metadata: {
proof_dir: "docs/specs/[spec-dir]/[NN]-proofs",
proof_results: [
{ type: "test", status: "pass", output_file: "T01-01-test.txt" },
{ type: "cli", status: "pass", output_file: "T01-02-cli.txt" }
],
proof_summary: "T01-proofs.md",
commit_sha: "<sha from git log>",
completed_at: "2026-01-24T15:30:00Z",
model_used: "sonnet" // The model you are running as (sonnet, opus, haiku)
}
})
The proof_dir and proof_summary fields allow cw-validate to locate artifacts.
The model_used field records which model actually executed the task for auditability.
Leave pristine state with verified proof trail.
- git status --porcelain - should be empty
- git show --name-only HEAD | grep proofs
- Run metadata.verification.post one final time

CW-EXECUTE COMPLETE
====================
Task: T01 - [subject]
Status: COMPLETED
Model: [model_used]
Proof Artifacts (committed):
[PASS] docs/specs/.../01-proofs/T01-01-test.txt
[PASS] docs/specs/.../01-proofs/T01-02-cli.txt
[SUMM] docs/specs/.../01-proofs/T01-proofs.md
Commit: abc1234 feat(scope): description
- Implementation files: X
- Proof files: Y
Progress: X/Y tasks complete
Final Verification:
# Confirm proof files exist in repository
git ls-files docs/specs/*/[NN]-proofs/{task_id}-*
Each phase allows max 3 retries before failure:
- git stash push -m "cw-execute: {task_id} partial"
- git checkout -- .

TaskUpdate({
taskId: "<id>",
status: "pending",
metadata: {
last_failure: "2026-01-24T15:30:00Z",
failure_count: N,
failure_reason: "...",
failed_phase: "PROOF|SANITIZE|COMMIT|etc",
proof_status: "none|partial|complete"
}
})
If proof artifacts cannot be created:
| Scenario | Action |
|---|---|
| Command fails | Create proof file with FAIL status, include error output |
| Environment missing | Create proof file with BLOCKED status, document what's needed |
| Manual verification declined | Create proof file with REJECTED status, include user feedback |
| Tool unavailable | Create proof file with SKIPPED status per proof_capture.visual_method |
Never skip proof file creation entirely. Even failures must be documented in a proof file so validation can detect gaps.
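A blocked proof file might look like the sketch below; the command and reason are invented examples, and the fields follow the header format described in Phase 6:

```shell
# Sketch: a proof that cannot run still gets a file with a recorded status.
# The command and reason here are invented placeholders.
out="$(mktemp -d)/T01-02-cli.txt"
{
  echo "Type: cli"
  echo "Command: npm run smoke"
  echo "Expected: exit code 0"
  echo "Timestamp: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "Status: BLOCKED"
  echo "Reason: npm is not available in this environment"
} > "$out"
```

Because the file exists and carries a status line, the Phase 6 gate and later validation can see exactly where the gap is instead of silently missing an artifact.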
If a task has status: "in_progress" when you start:
After task completion:
- /cw-dispatch can spawn parallel workers
- /cw-validate checks coverage after all tasks complete
- The cw-loop shell script automates sequential execution