Asks targeted clarification questions based on codebase exploration findings.
**Announce:** "I'm using dev-clarify (Phase 3) to resolve ambiguities."
Ask targeted questions based on what exploration revealed. Prerequisite: Exploration phase complete, key files read.
## The Iron Law of Clarification

**ASK BEFORE DESIGNING.** This is not negotiable.
After exploration, you now know:
Use this knowledge to ask informed questions about:
If you catch yourself about to design without resolving ambiguities, STOP.
| Excuse | Reality | Do Instead |
|---|---|---|
| "The pattern choice is obvious" | Multiple patterns exist for a reason | ASK which to follow |
| "I can decide edge cases myself" | Your assumptions don't match user expectations | ASK for clarification |
| "This is a small detail" | Small details cause big bugs | ASK about edge cases now |
| "I'll handle integration points during implementation" | Wrong integration breaks everything | CLARIFY integration NOW |
| "The exploration gave me enough info" | Code tells you HOW, not WHAT SHOULD happen | ASK for requirements, not just patterns |
| "I can make a reasonable assumption" | Reasonable != correct | ASK, don't assume |
| "Asking too many questions annoys users" | Building wrong thing annoys users more | ASK clarifying questions |
Assuming user requirements without asking is NOT HELPFUL — you build the wrong thing and create hours of rework.
You explored the codebase and found patterns. But patterns show HOW things work, not WHAT the user wants. Clarification bridges this gap.
Asking costs minutes. Wrong assumptions cost hours of rework.
After updating .planning/SPEC.md with all clarified requirements, IMMEDIATELY invoke:
Read ${CLAUDE_SKILL_DIR}/../../skills/dev-design/SKILL.md and follow its instructions.
DO NOT:
The workflow phases are SEQUENTIAL. Complete clarify → immediately start design.
| DO | DON'T |
|---|---|
| Ask questions based on exploration | Ask vague/generic questions |
| Reference specific code patterns found | Repeat questions from brainstorm |
| Clarify integration points | Propose approaches (that's design) |
| Resolve edge cases | Make assumptions |
| Update SPEC.md with answers | Skip to implementation |
Clarify answers: WHAT EXACTLY should happen in specific scenarios.
Design answers: HOW to build it (next phase).
Before asking questions, review:
Common areas needing clarification after exploration:
Integration Points:
Edge Cases:
Scope Boundaries:
Behavior Choices:
Present questions with context from exploration. Example: "…file.ts:23 and Pattern B in other.ts:45. Which should we follow?"
```
AskUserQuestion(questions=[{
  "question": "The auth middleware at src/middleware/auth.ts:78 validates tokens synchronously. The new endpoint needs user data. Should we: validate synchronously (faster, simpler) or fetch fresh user data (slower, always current)?",
  "header": "Auth pattern",
  "options": [
    {"label": "Sync validation (Recommended)", "description": "Faster, uses cached token claims, matches existing patterns"},
    {"label": "Fresh fetch", "description": "Slower, always current, needed if user data changes frequently"}
  ],
  "multiSelect": false
}])
```
Key principles:
When multiple ambiguities are discovered during exploration, batch them into ONE AskUserQuestion call instead of asking sequentially:
Sequential (slow — 5 round-trips):
Batched (fast — 1 round-trip):
```
AskUserQuestion(questions=[
  {"question": "API style?", "options": [{"label": "REST"}, {"label": "GraphQL"}]},
  {"question": "Auth mechanism?", "options": [{"label": "JWT"}, {"label": "Session-based"}]},
  {"question": "Pagination?", "options": [{"label": "Yes, cursor-based"}, {"label": "Yes, offset"}, {"label": "Not needed"}]}
], multiSelect=false)
```
When to batch: After reading exploration findings, if 3+ questions arise, batch them. Present all ambiguities at once with options and pros/cons for each.
When NOT to batch: If a question's answer changes what other questions to ask (dependent questions), ask the blocking question first, then batch the rest.
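The ordering rule above can be sketched as a small helper. This is an illustrative sketch, not part of the workflow tooling: the `blocks_others` flag and `plan_rounds` function are hypothetical names used to model "ask blocking questions first, then batch the rest".

```python
def plan_rounds(questions):
    """Partition questions into rounds: each blocking question gets its own
    round (its answer may change later questions); all independent questions
    share one final batched round."""
    blocking = [q for q in questions if q.get("blocks_others")]
    independent = [q for q in questions if not q.get("blocks_others")]
    rounds = [[q] for q in blocking]      # ask each blocker alone, in order
    if independent:
        rounds.append(independent)        # then batch everything else
    return rounds

rounds = plan_rounds([
    {"question": "API style?", "blocks_others": True},  # answer shapes the rest
    {"question": "Auth mechanism?"},
    {"question": "Pagination?"},
])
# Two round-trips instead of three: the blocker first, then one batch.
```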
After each answer, update .planning/SPEC.md:
```markdown
## Clarified Requirements

### Auth Pattern
- Decision: Sync validation
- Rationale: Matches existing patterns, user data changes infrequently
- Reference: src/middleware/auth.ts:78

### Edge Case: Expired Token
- Decision: Return 401, let client refresh
- Rationale: Consistent with other endpoints
```
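A minimal sketch of recording answers in that format, assuming a writable SPEC.md path; `record_decision` is a hypothetical helper, not part of the workflow tooling.

```python
import os
import tempfile
from pathlib import Path

def record_decision(spec_path, topic, decision, rationale, reference=None):
    """Append one clarified requirement to SPEC.md in the format shown above."""
    entry = [f"### {topic}", f"- Decision: {decision}", f"- Rationale: {rationale}"]
    if reference:
        entry.append(f"- Reference: {reference}")
    path = Path(spec_path)
    existing = path.read_text() if path.exists() else "## Clarified Requirements\n"
    path.write_text(existing + "\n" + "\n".join(entry) + "\n")

# Demonstration against a temporary file rather than a real .planning/ dir:
spec = os.path.join(tempfile.mkdtemp(), "SPEC.md")
record_decision(spec, "Auth Pattern", "Sync validation",
                "Matches existing patterns, user data changes infrequently",
                reference="src/middleware/auth.ts:78")
```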
Before proceeding to design, ensure testing strategy is clear:
```
AskUserQuestion(questions=[{
  "question": "No test infrastructure was found. How should we verify this feature works?",
  "header": "Testing",
  "options": [
    {"label": "Add pytest/jest as Task 0 (Recommended)", "description": "Set up test framework before implementing feature"},
    {"label": "Add E2E tests with Playwright", "description": "Browser automation to test user interactions"},
    {"label": "Add E2E tests with ydotool", "description": "Desktop automation for native apps"},
    {"label": "Other (describe in chat)", "description": "Propose alternative testing approach"}
  ],
  "multiSelect": false
}])
```
"Manual testing" is NOT an acceptable answer. If user insists on manual testing:
Do NOT proceed to design without a clear automated testing strategy.
After user chooses testing approach, clarify specifics:
```
AskUserQuestion(questions=[{
  "question": "What's the FIRST test you want to see fail?",
  "header": "First Test",
  "options": [
    {"label": "Happy path - feature works correctly", "description": "Test the main success scenario"},
    {"label": "Error case - feature handles bad input", "description": "Test error handling"},
    {"label": "Edge case - specific boundary condition", "description": "Test a known edge case"},
    {"label": "Integration - feature works with existing code", "description": "Test system integration"}
  ],
  "multiSelect": false
}])
```
Why this matters: Defining the first test BEFORE implementation is the essence of TDD.
Based on code path discovery from exploration, clarify the exact workflow:
```
AskUserQuestion(questions=[{
  "question": "Let me confirm the user workflow the test must replicate:",
  "header": "Workflow",
  "options": [
    {"label": "Confirm workflow", "description": "[State the discovered workflow, e.g., 'highlight → click panel → see status']"},
    {"label": "Modify workflow", "description": "The workflow is different - let me describe it"},
    {"label": "Add steps", "description": "The workflow has additional steps I should know"}
  ],
  "multiSelect": false
}])
```
Then verify the test approach matches:
```
AskUserQuestion(questions=[{
  "question": "The test must use [discovered protocol, e.g., WebSocket]. Is this correct?",
  "header": "Protocol",
  "options": [
    {"label": "Yes, use [protocol]", "description": "Test must use the same protocol as production"},
    {"label": "No, different protocol", "description": "Explain the correct protocol"}
  ],
  "multiSelect": false
}])
```
After clarifying workflow and protocol, verify:
- [ ] Test workflow matches user workflow exactly
- [ ] Test uses same protocol as production
- [ ] Test interacts with same UI elements user sees
- [ ] Test verifies same output user verifies
- [ ] Testing skill is appropriate for this workflow
If any of these don't match, the test will be FAKE. Clarify now.
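The gate above can be modeled as a small sketch. The `real_test_gate` function and criterion strings are illustrative, not part of the workflow tooling: any unchecked criterion blocks progression.

```python
def real_test_gate(checks):
    """Raise if any REAL-test criterion is unmet; otherwise allow design."""
    unmet = [name for name, ok in checks.items() if not ok]
    if unmet:
        raise RuntimeError("Clarify before design; unmet: " + ", ".join(unmet))
    return True

checks = {
    "workflow matches user workflow": True,
    "same protocol as production": True,
    "same UI elements as user sees": False,  # e.g. test drives an internal API
    "same output verified": True,
    "testing skill appropriate": True,
}

try:
    real_test_gate(checks)
    blocked = False
except RuntimeError:
    blocked = True  # gate refuses: one criterion is unchecked
```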
Before proceeding to design, verify the testing strategy passes real-test enforcement. See references/constraints/real-test-enforcement.md for what constitutes a REAL vs FAKE test.
If the test approach doesn't match what you discovered, STOP and clarify.
| Your Drive | Why You Skip | What Actually Happens | The Drive You Failed |
|---|---|---|---|
| Helpfulness | "I don't want to bother the user with questions" | You build the wrong thing — your courtesy was sabotage | Anti-helpful |
| Competence | "I can infer edge cases from the code" | Code shows HOW, not WHAT SHOULD happen — your inference was a guess | Incompetent |
| Efficiency | "Assumptions are faster than questions" | Wrong assumptions mean rework — your speed was waste | Inefficient |
The protocol is not overhead you pay. It is the service you provide.
| Action | Why It's Wrong | Do Instead |
|---|---|---|
| Ask without exploration context | Questions will be generic | Reference specific code findings |
| Propose architecture | Too early, still clarifying | Ask questions, save design for next phase |
| Make assumptions | Leads to rework | Ask and get explicit answer |
| Skip to design | Ambiguities cause bugs | Resolve all questions first |
Clarification complete when:
- .planning/SPEC.md updated with final requirements

Checkpoint type: human-verify
Checkpoint type: decision (user chooses testing approach — cannot auto-advance)
Before proceeding to design, verify in SPEC.md:
- [ ] Testing approach documented (unit/integration/E2E)
- [ ] Test framework specified (pytest/jest/playwright/etc.)
- [ ] First test described (what will fail first)
- [ ] Test command documented (how to run tests)
If any box is unchecked → STOP. Do not proceed to design.
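One way to sketch this verification is a simple scan of SPEC.md for the required items. The keyword list and `spec_ready` helper are hypothetical, chosen to mirror the checklist above.

```python
import re

REQUIRED = ["Testing approach", "Test framework", "First test", "Test command"]

def spec_ready(spec_text):
    """Return (ok, missing): ok only if every required item appears."""
    missing = [item for item in REQUIRED
               if not re.search(re.escape(item), spec_text, re.IGNORECASE)]
    return (len(missing) == 0, missing)

ok, missing = spec_ready("Testing approach: unit\nTest framework: pytest\n")
# Not ready: the first test and test command are still undocumented.
```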
Before proceeding to design, verify REAL test criteria:
- [ ] User workflow confirmed and documented
- [ ] Protocol/transport verified (same as production)
- [ ] UI elements to test identified
- [ ] Testing skill specified (dev-test-electron/playwright/etc.)
- [ ] Test approach matches discovered code paths
If any box is unchecked → You WILL write fake tests. Clarify now.
See references/constraints/real-test-enforcement.md for detection tables and the Iron Law of REAL Tests.
Ask yourself:
If ANY answer is "no" or "not sure" → STOP. Clarify before design.
This is the last checkpoint before implementation planning. Fake tests caught here save hours of wasted implementation.
Phase summary (append to LEARNINGS.md):
```markdown
## Phase: Clarify
---
phase: clarify
status: completed
requires: [SPEC.md, codebase-map]
provides: [clarified-requirements, testing-strategy-validated]
questions-resolved:
  - [one-liner per clarification]
---
```
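Generating that summary block can be sketched as below; the `phase_summary` helper is hypothetical, and the frontmatter keys simply mirror the template above.

```python
def phase_summary(questions_resolved):
    """Build the Clarify phase summary block to append to LEARNINGS.md."""
    lines = [
        "## Phase: Clarify",
        "---",
        "phase: clarify",
        "status: completed",
        "requires: [SPEC.md, codebase-map]",
        "provides: [clarified-requirements, testing-strategy-validated]",
        "questions-resolved:",
    ]
    lines += [f"  - {q}" for q in questions_resolved]  # one-liner per clarification
    lines.append("---")
    return "\n".join(lines) + "\n"

summary = phase_summary(["Auth: sync validation", "Expired token: return 401"])
```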
REQUIRED SUB-SKILL: After completing clarification, IMMEDIATELY invoke:
Read ${CLAUDE_SKILL_DIR}/../../skills/dev-design/SKILL.md and follow its instructions.