Internal skill used by dev-implement during Phase 5 of /dev workflow. NOT user-facing - should only be invoked by dev-ralph-loop inside each implementation iteration.
**Announce:** "I'm using dev-delegate to dispatch implementation subagents."
EVERY IMPLEMENTATION MUST GO THROUGH A TASK AGENT. This is not negotiable.
Main chat MUST NOT write, edit, or commit implementation code itself.
If you're about to write code in main chat, STOP. Spawn a Task agent instead.
```
Main Chat                              Task Agent
─────────────────────────────────────────────────────
dev-implement (orchestrates)
  → dev-ralph-loop (per-task loops)
    → dev-delegate (this skill)
      → spawns Task agent ──────────→ follows dev-tdd
                                      uses dev-test tools
```
Main chat uses this skill to spawn Task agents.
Task agents follow dev-tdd (TDD protocol) and use dev-test (testing tools).
Fresh subagent per task + two-stage review = high quality, fast iteration
Called by dev-implement inside each ralph loop iteration. Don't invoke directly.
For each task:
1. Dispatch implementer subagent
- If questions → answer, re-dispatch
- Implements, tests, commits
2. Dispatch spec reviewer subagent
- If issues → implementer fixes → re-review
3. Dispatch quality reviewer subagent
- If issues → implementer fixes → re-review
4. Mark task complete
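The loop above can be sketched in runnable form. The `implement`, `review_spec`, and `review_quality` callables here are hypothetical stubs standing in for the `Task()` subagent invocations shown later in this skill:

```python
# Minimal runnable sketch of the per-task delegation loop.
# implement / review_spec / review_quality are hypothetical stubs standing
# in for the Task() subagent invocations defined later in this skill.

def run_task(task, implement, review_spec, review_quality):
    result = implement(task)                         # 1. dispatch implementer
    while review_spec(task, result) != "COMPLIANT":  # 2. spec review loop
        result = implement(task)                     #    implementer fixes, re-review
    while review_quality(result) != "APPROVED":      # 3. quality review loop
        result = implement(task)
    return "complete"                                # 4. mark task complete

# Simulated run: spec review finds an issue once, then everything passes.
attempts = {"n": 0}
def implement(task):
    attempts["n"] += 1
    return f"commit-{attempts['n']}"

spec_verdicts = iter(["ISSUES", "COMPLIANT"])
status = run_task(
    "add user validation",
    implement,
    lambda task, result: next(spec_verdicts),
    lambda result: "APPROVED",
)
print(status, attempts["n"])  # implementer was dispatched twice
```

Note that the implementer is re-dispatched inside each review loop: reviewers never fix code themselves, and the task is never marked complete until both loops exit.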
Pattern: Use structured delegation template from references/delegation-template.md
Every delegation MUST include: the expected outcome, required skills and tools, MUST DO / MUST NOT DO constraints, deviation rules, and full task context (pasted, not referenced).
Use this Task invocation (fill in brackets):
Task(subagent_type="workflows:dev-implementer", prompt="""
# TASK
Implement: [TASK NAME]
## EXPECTED OUTCOME
You will have successfully completed this task when:
- [ ] [Success criterion 1]
- [ ] [Success criterion 2]
- [ ] Tests pass (TDD enforced)
- [ ] No regressions in existing tests
## REQUIRED SKILLS
This task requires:
- [Language/framework]: [Why]
- Testing: TDD (test-first mandatory)
- [Other skills as needed]
## REQUIRED TOOLS
You will need:
- Read: Examine existing code
- Write: Create new files
- Edit: Modify existing files
- Bash: Run tests and verify
**Tools denied:** None (full implementation access)
## MUST DO
- [ ] Write test FIRST (TDD RED-GREEN-REFACTOR)
- [ ] Run test suite after each change
- [ ] Follow existing code patterns in [file]
- [ ] [Other non-negotiable requirements]
## MUST NOT DO
- ❌ Write code before test
- ❌ Skip test execution
- ❌ Use `any` / `@ts-ignore` / type suppression
- ❌ Commit broken code
## DEVIATION RULES
You WILL discover unplanned work. Apply these rules and track all deviations:
| Rule | Trigger | Action |
|------|---------|--------|
| **R1: Bug** | Broken behavior, errors, type errors, security vulns | Auto-fix → test → track `[Rule 1 - Bug]` |
| **R2: Missing Critical** | Missing error handling, validation, auth, logging | Auto-fix → test → track `[Rule 2 - Missing Critical]` |
| **R3: Blocking** | Missing deps, wrong types, broken imports | Auto-fix → test → track `[Rule 3 - Blocking]` |
| **R4: Architectural** | New DB table, schema change, switching libs, breaking API | **STOP → escalate to delegator** |
**Rules 1-3:** Fix automatically, test, document what you did.
**Rule 4:** Do NOT proceed. Report the architectural decision needed back to the delegator with: what you found, proposed change, why needed, impact, and alternatives.
**Unsure which rule?** Default to Rule 4 (STOP and escalate).
In your output, include: **Deviations:** N auto-fixed (R1: X, R2: Y, R3: Z), N escalated (R4: W).
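As an illustration only, the triage logic in the table can be thought of as a lookup with an escalation default. The trigger strings below are invented examples, not an exhaustive mapping:

```python
# Illustrative triage of an unplanned finding into deviation rules R1-R4.
# The trigger keywords are invented examples, not an exhaustive mapping.

RULES = {
    "R1 - Bug": {"broken behavior", "type error", "security vuln"},
    "R2 - Missing Critical": {"missing validation", "missing auth"},
    "R3 - Blocking": {"missing dep", "broken import"},
}

def classify(finding: str) -> tuple[str, str]:
    for rule, triggers in RULES.items():
        if finding in triggers:
            return rule, "auto-fix, test, track"
    # Unsure or architectural: default to R4 and escalate.
    return "R4 - Architectural", "STOP, escalate to delegator"

print(classify("type error"))    # R1: fix automatically, then track it
print(classify("new DB table"))  # not a known auto-fix trigger: escalate
```

The key design point is the default branch: anything not positively matched to Rules 1-3 escalates, which encodes the "unsure? default to Rule 4" instruction above.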
## CONTEXT
### Task Description
[PASTE FULL TASK TEXT FROM PLAN.md - don't make subagent read file]
### Project Context
- Project: [brief description]
- Related files: [list from exploration]
- Test command: [from SPEC.md]
## TDD Protocol (MANDATORY)
<EXTREMELY-IMPORTANT>
**LOAD THIS SKILL FIRST:**
Before writing any code, you MUST load the TDD skill:
Read ${CLAUDE_SKILL_DIR}/../../skills/dev-tdd/SKILL.md and follow its instructions.
This loads:
- Task reframing (your job is writing tests, not features)
- The Execution Gate (6 mandatory gates before E2E testing)
- GATE 5: READ LOGS (mandatory - cannot skip)
- The Iron Law of TDD (test-first approach)
**Load dev-tdd now before proceeding.**
</EXTREMELY-IMPORTANT>
Follow the RED-GREEN-REFACTOR cycle from dev-tdd:
1. **RED**: Write a failing test FIRST
- Run it, SEE IT FAIL
- Document: "RED: [test] fails with [error]"
2. **GREEN**: Write MINIMAL code to pass
- Run test, SEE IT PASS
- Document: "GREEN: [test] passes"
3. **REFACTOR**: Clean up while staying green
**If you write code before seeing RED, you're not doing TDD. Stop and restart.**
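A minimal RED-GREEN illustration (the `slugify` function and its test are invented for this example):

```python
# RED: this test is written FIRST. Run it before slugify exists and
# watch it fail with a NameError - that failure is the RED step.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

test_slugify_lowercases_and_hyphenates()  # now passes: GREEN
```

REFACTOR would then clean up `slugify` (naming, edge cases covered by new tests) while re-running the test after every change to stay green.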
## Testing Tools
For test options (pytest, Playwright, ydotool), load dev-test skill after dev-tdd.
Tests must EXECUTE code and VERIFY behavior. Grepping is NOT testing.
## If Unclear
Ask questions BEFORE implementing. Don't guess.
## Output
Report:
- RED: What test failed and how
- GREEN: What made it pass
- Test command and output
- Commit SHA
- Any concerns
""")
If implementer asks questions: Answer clearly, then re-dispatch with answers included.
If implementer finishes: Proceed to spec review.
Use this Task invocation:
Task(subagent_type="general-purpose",
allowed_tools=["Read", "Glob", "Grep", "Bash(read-only)"],
prompt="""
Review spec compliance for: [TASK NAME]
## Original Requirements
[PASTE TASK TEXT FROM PLAN.md]
## Success Criteria (from SPEC.md)
[PASTE RELEVANT CRITERIA]
## CRITICAL: Do Not Trust the Report
The implementer finished suspiciously quickly. Their report may be incomplete,
inaccurate, or optimistic. You MUST verify everything independently.
**DO NOT:**
- Take their word for what they implemented
- Trust their claims about completeness
- Accept their interpretation of requirements
**DO:**
- Read the actual code they wrote
- Compare actual implementation to requirements line by line
- Check for missing pieces they claimed to implement
- Look for extra features they didn't mention
## Review Checklist
1. Does implementation meet ALL requirements?
2. Is anything MISSING from the spec?
3. Is anything EXTRA not in the spec?
## Output Format
- COMPLIANT: All requirements met, nothing extra (after verifying code yourself)
- ISSUES: List what's missing or extra with file:line references
Be strict. "Close enough" is not compliant. Verify by reading code.
""")
If COMPLIANT: Proceed to quality review.
If ISSUES: Have implementer fix, then re-run spec review.
Use this Task invocation:
Task(subagent_type="general-purpose",
allowed_tools=["Read", "Glob", "Grep", "Bash(read-only)"],
prompt="""
Review code quality for: [TASK NAME]
## Changes to Review
Files modified: [list files]
Commit range: [BASE_SHA]..[HEAD_SHA]
## Review Focus
1. Code correctness (logic errors, edge cases)
2. Test coverage (are tests meaningful?)
3. Code style (matches project conventions?)
4. No regressions introduced
## Confidence Scoring
Rate each issue 0-100. Only report issues >= 80 confidence.
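For instance (issue data invented for this sketch), the cutoff means low-confidence nitpicks are dropped before reporting:

```python
# Only issues at or above the 80-confidence cutoff are reported.
issues = [
    {"title": "Off-by-one in pagination", "confidence": 92},
    {"title": "Possible naming nitpick", "confidence": 55},
]
reportable = [i for i in issues if i["confidence"] >= 80]
print([i["title"] for i in reportable])  # only the high-confidence issue survives
```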
## Output Format
### Strengths
- [what's good]
### Issues (Confidence >= 80)
#### [Issue Title] (Confidence: XX)
- Location: [file:line]
- Problem: [description]
- Fix: [suggestion]
### Verdict
APPROVED or CHANGES REQUIRED
""")
If APPROVED: Mark task complete, move to next task.
If CHANGES REQUIRED: Have implementer fix, then re-run quality review.
When you say "Task complete", you are asserting all of the following: the implementer committed code with passing tests, the spec reviewer returned COMPLIANT, and the quality reviewer returned APPROVED.
If ANY of these didn't happen, you are not "summarizing" or "moving on" - you are creating rework for the user.
Skipping verification destroys the user's time. "Still working" protects it.
These thoughts mean STOP. You're about to skip delegation:
| Thought | Reality |
|---|---|
| "I'll just fix this quickly" | Quick = sloppy = bugs. Delegate. |
| "The subagent will be slower" | Subagent time is cheap. Your context is expensive. |
| "I already know what to do" | Knowing ≠ doing correctly. Delegate. |
| "It's just one line" | One line can break everything. Delegate. |
| "I'm already looking at the code" | Looking ≠ editing. Delegate. |
| "User is waiting" | User wants CORRECT, not fast. Delegate. |
| "Skip review, it's obviously right" | "Obviously" is how bugs ship. Review. |
| "Spec review passed, skip quality" | Both reviews exist for a reason. Do both. |
| "Quality review found nothing, done" | Did spec review pass first? Check. |
Never: implement in main chat, skip either review, or mark a task complete without verified passing tests.
If a subagent fails: diagnose the failure, tighten the delegation prompt, and re-dispatch. Escalate to the user only if the failure reveals a Rule 4 architectural decision.
Me: Implementing Task 1: Add user validation
[Dispatch implementer with full task text]
Implementer: "Should validation happen client-side or server-side?"
Me: "Server-side only, in the API layer"
[Re-dispatch implementer with answer]
Implementer:
- Added validateUser() in api/users.ts
- Tests: 5/5 passing
- Committed: abc123
[Dispatch spec reviewer]
Spec Reviewer: ISSUES
- Missing: Email format validation (spec line 12)
[Tell implementer to fix]
Implementer:
- Added email regex validation
- Tests: 6/6 passing
- Committed: def456
[Re-dispatch spec reviewer]
Spec Reviewer: COMPLIANT
[Dispatch quality reviewer with commit range]
Quality Reviewer: APPROVED
- Strengths: Good test coverage, clear naming
- No issues >= 80 confidence
[Mark Task 1 complete, move to Task 2]
When dispatching subagents, match model capability to task complexity. This is advisory — Claude Code doesn't yet support model routing — but documents intent for cost-aware delegation.
| Task Complexity | Model Tier | Signals | Example |
|---|---|---|---|
| Mechanical | Cheapest capable | Isolated function, 1-2 files, clear spec, boilerplate | "Add type definition file" |
| Integration | Standard | Multi-file coordination, pattern matching, debugging within scope | "Connect auth service to API route" |
| Architecture/Review | Most capable | Design judgment needed, broad codebase understanding, reviews | "Review entire implementation for spec compliance" |
Complexity signals to weigh: number of files touched, clarity of the spec, and how much design judgment the task demands (see the Signals column above).
When in doubt, use the standard tier. Over-allocating is wasteful; under-allocating produces poor results.
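One way to make the advisory routing concrete (the thresholds and parameter names below are invented for this sketch, not part of any Claude Code API):

```python
# Advisory sketch: map task signals to a model tier.
# Thresholds and parameter names are invented for illustration only.

def pick_tier(files_touched: int, needs_design_judgment: bool) -> str:
    if needs_design_judgment:
        return "most capable"      # architecture or review work
    if files_touched <= 2:
        return "cheapest capable"  # mechanical, isolated change
    return "standard"              # default: integration work

print(pick_tier(1, False))  # mechanical: cheapest capable tier
print(pick_tier(5, False))  # multi-file integration: standard tier
print(pick_tier(3, True))   # review work: most capable tier
```

The default branch returns the standard tier, matching the guidance above: when in doubt, don't over- or under-allocate.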
Main chat invokes: dev-implement → dev-ralph-loop → dev-delegate (this skill).
Task agents follow: dev-tdd (TDD protocol, RED-GREEN-REFACTOR) and dev-test (testing tools: pytest, Playwright, ydotool).
After all tasks complete with passing tests, dev-implement proceeds to dev-review.