This skill should be used when designing implementation plans, decomposing complex work into tasks, or making architectural decisions during ultrawork sessions. Used by orchestrator (interactive mode) and planner agent (auto mode).
Creates implementation plans by analyzing code context and decomposing complex work into executable tasks.
```shell
npx claudepluginhub mnthe/hardworker-marketplace
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
References:
- references/brainstorm-protocol.md
- references/context-aware-options.md
- references/design-template.md
- references/interview-rounds.md
- references/task-examples.md

These define how to analyze context, make design decisions, and decompose work into tasks.
Two modes: interactive (orchestrator) and auto (planner agent).
Read session files in order:
- session.json - goal and metadata
- context.json - summary, key files, patterns from explorers
- exploration/*.md - detailed findings as needed

Analyze goal and context to determine interview depth:
| Complexity | Files | Keywords | Impact | Rounds |
|---|---|---|---|---|
| trivial | 1-2 | fix, typo, add | None | 1 (4-5 Q) |
| standard | 3-5 | implement, create | Single module | 2 (8-10 Q) |
| complex | 6-10 | refactor, redesign | Multi-module | 3 (12-15 Q) |
| massive | 10+ | migrate, rewrite | Entire system | 4 (16-20 Q) |
Note: User can request more rounds via adaptive check. No upper limit.
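The keyword column of the table above can be sketched as a shell heuristic. This is an illustrative simplification: the mapping below is an assumption drawn from the table, and real assessment also weighs file count and impact, which are not modeled here.

```shell
# Keyword-only complexity heuristic (file count and impact not modeled here).
classify_complexity() {
  case "$1" in
    *migrate*|*rewrite*)   echo "massive"  ;;  # 4 rounds, 16-20 questions
    *refactor*|*redesign*) echo "complex"  ;;  # 3 rounds, 12-15 questions
    *implement*|*create*)  echo "standard" ;;  # 2 rounds, 8-10 questions
    *)                     echo "trivial"  ;;  # 1 round, 4-5 questions
  esac
}

classify_complexity "implement login endpoint"   # standard
```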
Skip if: --auto or --skip-interview flag set
Each round asks 4-5 questions using AskUserQuestion (max 4 questions per call, so a 5-question round takes two calls).
Options marked [...] MUST be generated from exploration context, NOT generic templates.
See references/context-aware-options.md for:
Rounds are adjusted based on complexity assessment:
| Complexity | Rounds | Focus Areas |
|---|---|---|
| trivial | 1 | Intent, Scope, Success criteria |
| standard | 2 | + Technical decisions (arch, tech stack, testing) |
| complex | 3 | + Edge cases, errors, concurrency, performance |
| massive | 4 | + UI/UX, observability, documentation, deployment |
See references/interview-rounds.md for:
IMPORTANT: Design documents go to PROJECT directory.
```shell
WORKING_DIR=$(bun $SCRIPTS/session-get.js --session ${CLAUDE_SESSION_ID} --field working_dir)
DESIGN_PATH="$WORKING_DIR/docs/plans/$(date +%Y-%m-%d)-{goal-slug}-design.md"
```
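The path convention can be sketched end-to-end. Note that slugify below is a hypothetical helper used only to make the sketch self-contained; how the real {goal-slug} is produced is not specified here.

```shell
# Sketch of the design-path convention; slugify is a hypothetical helper.
slugify() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'; }

GOAL="Add User Auth"   # example goal
DESIGN_PATH="docs/plans/$(date +%Y-%m-%d)-$(slugify "$GOAL")-design.md"
echo "$DESIGN_PATH"    # e.g. docs/plans/2025-01-01-add-user-auth-design.md
```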
Write comprehensive design.md including:
See references/design-template.md for complete template.
After writing the design document, validate it via Codex doc-review.
```shell
bun $SCRIPTS/codex-verify.js \
  --mode doc-review \
  --design "$DESIGN_PATH" \
  --goal "${GOAL}" \
  --output /tmp/codex-doc-${CLAUDE_SESSION_ID}.json
```
Result Handling:
| Verdict | Action |
|---|---|
| PASS | Continue to Phase 5 (Task Decomposition) |
| SKIP | Codex not installed — continue (graceful degradation) |
| FAIL | Fix loop based on mode |
Interactive Mode (default): present doc_issues to the user via AskUserQuestion.

Auto Mode (--auto): fix doc_issues from the result.

CLI Error: retry once on execution failure; on repeated failure, report to the user.
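The verdict handling above can be sketched as a shell branch. The JSON field name verdict is an assumption about codex-verify.js output, and the sed line is a deliberately minimal parse, not a robust JSON reader.

```shell
# Branch on the doc-review verdict; "verdict" field name is assumed.
handle_verdict() {
  verdict=$(sed -n 's/.*"verdict" *: *"\([A-Z]*\)".*/\1/p' "$1")
  case "$verdict" in
    PASS) echo "continue to phase 5" ;;
    SKIP) echo "codex not installed; continue" ;;
    FAIL) echo "enter fix loop" ;;
    *)    echo "cli error: retry once, then report" ;;
  esac
}

printf '{"verdict":"PASS","doc_issues":[]}' > /tmp/doc-review-example.json
handle_verdict /tmp/doc-review-example.json   # continue to phase 5
```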
Note: A PreToolUse gate blocks session-update.js --phase EXECUTION without a passing Codex doc-review result at /tmp/codex-doc-${CLAUDE_SESSION_ID}.json.
| Aspect | Rule |
|---|---|
| Granularity | One deliverable, ~30 min work, testable |
| Complexity | standard (sonnet) for CRUD/simple; complex (opus) for architecture/security |
| Dependencies | Independent [], Sequential ["1"], Multi ["1","2"], Verify [all] |
Use task-create.js for each task. Always include a final verify task.
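As an illustration of the granularity and dependency rules, the flag names below (--title, --complexity, --deps) are hypothetical, not the script's documented interface, and task_create stubs the real bun script so the sketch runs standalone:

```shell
# task_create stubs bun $SCRIPTS/task-create.js; flag names are hypothetical.
task_create() { echo "created: $*"; }

task_create --title "Add user model"   --complexity standard --deps '[]'      # independent
task_create --title "Wire API routes"  --complexity standard --deps '["1"]'   # sequential
task_create --title "Verify all tasks" --complexity complex  --deps '[all]'   # final verify task
```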
See references/task-examples.md for:
Return planning summary with:
| Flag | Effect on Interview |
|---|---|
| (default) | Full Deep Interview based on complexity |
| --skip-interview | Skip interview, use ad-hoc AskUserQuestion as needed |
| --auto | Skip interview, auto-decide all choices |
References:
- references/brainstorm-protocol.md - Interactive question flow and approach exploration
- references/context-aware-options.md - Context-aware option generation rules and examples
- references/design-template.md - Complete design document template
- references/interview-rounds.md - Detailed interview round templates for all complexity levels
- references/task-examples.md - Task decomposition examples with script commands

Activates when the user asks about AI prompts, needs prompt templates, wants to search for prompts, or mentions prompts.chat. Use for discovering, retrieving, and improving prompts.
Search, retrieve, and install Agent Skills from the prompts.chat registry using MCP tools. Use when the user asks to find skills, browse skill catalogs, install a skill for Claude, or extend Claude's capabilities with reusable AI agent components.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.