This skill is loaded automatically at session start via the SessionStart hook. It establishes protocols for finding and using skills: checking for an applicable skill before every task, brainstorming before coding, and creating TodoWrite items for checklists.
It triggers before any task: scan the available skills, check whether one matches the task type, then load and follow the appropriate skill.
/plugin marketplace add pproenca/dot-claude
/plugin install dev-workflow@pproenca
Before starting any task:
Skills encode proven patterns that prevent common mistakes.
Skills apply when the task involves:
If a skill exists for the task type, use it.
When multiple skills could apply, use this order:
Examples:
Watch for these thoughts that indicate skill should be used:
| Thought | Reality |
|---|---|
| "This is just a simple question" | Questions are tasks. Check for skills. |
| "I need more context first" | Skill check comes BEFORE clarifying questions. |
| "Let me explore the codebase first" | Skills tell you HOW to explore. Check first. |
| "I can check git/files quickly" | Skills tell you HOW to check. Use them. |
| "This feels productive" | Undisciplined action wastes time. Skills prevent this. |
| "I can do this quickly without process" | Quick without process = slow with rework. |
| "This doesn't need a formal skill" | If a skill exists, it exists for a reason. |
| "The skill is overkill for this" | Skills exist because simple things become complex. |
| "I remember how this skill works" | Skills evolve. Load the current version. |
Any of these thoughts = STOP. Check for applicable skill.
If a skill contains a checklist, create TodoWrite items for each step.
Mental tracking of checklists leads to skipped steps. TodoWrite makes progress visible.
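As a purely illustrative sketch (TodoWrite is the actual tool; these dict fields merely mirror the pending/in_progress/completed statuses described later), a skill checklist maps to todo items one-to-one:

```python
# Illustrative only: a hypothetical item shape, not the real TodoWrite API.
checklist = [
    "Write failing test",
    "Verify test fails",
    "Implement minimal code",
    "Verify test passes",
]
todos = [{"content": step, "status": "pending"} for step in checklist]
todos[0]["status"] = "in_progress"  # exactly one step active at a time
```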
Before using a skill, announce it:
"I'm using [Skill Name] to [what you're doing]."
Example: "I'm using the TDD skill to implement the login endpoint."
Rigid skills (follow exactly): TDD, debugging, verification
Flexible skills (adapt principles): Architecture, brainstorming
User instructions describe WHAT to do, not HOW.
"Add X", "Fix Y" = the goal, not permission to skip brainstorming, TDD, or verification.
See references/skill-integration.md for decision tree and skill chains.
/dev-workflow:brainstorm → /dev-workflow:write-plan → /dev-workflow:execute-plan
| Feature | Benefit |
|---|---|
| Plans persist to docs/plans/ | Version controlled, reviewable |
| Parallel via Task(run_in_background) + TaskOutput | Respects task dependencies |
| Task grouping by file overlap | Determines which tasks can run parallel |
| Automatic post-completion | Code review + finish branch enforced |
| Resume capability | Orchestrator tracks progress per group |
The /dev-workflow:execute-plan command uses background agents for parallelism:
1. Analyze task groups by file dependencies
2. FOR each group (groups execute serially):
   a. Launch tasks in the group with Task(run_in_background: true)
   b. Wait for all with TaskOutput(block: true)
   c. Update state: current_task = last task in group
3. Proceed to post-completion actions
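A minimal Python sketch of this loop, with `launch_task` and `wait_for` as hypothetical stand-ins for the Task and TaskOutput tool calls:

```python
# Sketch only: launch_task / wait_for are hypothetical stand-ins for the
# Task(run_in_background: true) and TaskOutput(block: true) tool calls.
def launch_task(task: str) -> str:
    print(f"launch in background: {task}")
    return task  # pretend the task name is its id

def wait_for(task_id: str) -> None:
    print(f"blocking on: {task_id}")

def execute_plan(groups: list[list[str]]) -> None:
    state: dict[str, str] = {}
    for group in groups:                       # groups execute serially
        ids = [launch_task(t) for t in group]  # tasks in a group run in parallel
        for task_id in ids:
            wait_for(task_id)                  # block until the whole group finishes
        state["current_task"] = group[-1]      # checkpoint enables per-group resume
    print("post-completion: code review + finish branch")

execute_plan([["task1", "task2", "task3"], ["task4", "task5"]])
```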
This approach keeps plans in docs/plans/ for version control.
**EnterPlanMode Directly**
Use EnterPlanMode without plugin commands when:
For features that need plan persistence, code review, and resume capability, use the plugin commands flow.
When entering plan mode via EnterPlanMode, use this methodology:
USE AskUserQuestion for:
DO NOT use AskUserQuestion for:
Format:
```yaml
AskUserQuestion:
  header: "Auth method"  # max 12 chars
  question: "Which authentication approach?"
  multiSelect: false
  options:
    - label: "JWT tokens (Recommended)"
      description: "Stateless, scalable, matches existing API patterns"
    - label: "Session cookies"
      description: "Simpler, but requires server state"
```
Ask questions BEFORE writing the plan, not during execution.
Use the native Explore agent with the "very thorough" setting to survey the codebase.
For complex features (touching 5+ files), dispatch code-architect in background:
```
Task(subagent_type: 'dev-workflow:code-architect',
     prompt: 'Design architecture for [feature]. Focus: minimal changes.',
     run_in_background: true)
```
Continue exploring while architect works. Retrieve results when ready:
```
TaskOutput(task_id: '<architect-task-id>', block: true)
```
Apply pragmatic-architecture principles:
| Principle | Rule |
|---|---|
| Rule of Three | Abstract only after 3+ occurrences |
| YAGNI | Design for today, not hypothetical futures |
| AHA | Prefer duplication over wrong abstraction |
| Colocation | Keep related code together, ≤3 files per feature |
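A hypothetical example of the Rule of Three and AHA together: tolerate two similar functions, and extract the shared shape only once a third occurrence confirms the pattern.

```python
# Two call sites: tolerate the duplication (AHA - a wrong abstraction costs more).
def load_users(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def load_groups(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Only when a third caller appears (Rule of Three) extract the shared shape:
def load_lines(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
```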
Each task MUST be self-contained with explicit TDD instructions (teammates may not have skill context):
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/new.py`
- Modify: `exact/path/to/existing.py:50-75`
- Test: `tests/exact/path/test.py`
**TDD Instructions (MANDATORY - follow exactly):**
1. **Write test FIRST** - Before ANY implementation:
```python
def test_behavior():
# Arrange
input_data = {...}
# Act
result = function(input_data)
# Assert
assert result == expected
```
2. **Run test, verify FAILURE:**
```bash
pytest tests/path -v
```
Expected: FAIL (proves test catches missing behavior)
3. **Implement MINIMAL code** - Only enough to pass the test, nothing extra
4. **Run test, verify PASS:**
```bash
pytest tests/path -v
```
Expected: PASS (all tests green)
5. **Commit with conventional format:**
```bash
git add -A && git commit -m "feat(scope): description"
```
**Why TDD order matters:** Writing tests after implementation doesn't prove the test catches bugs - it only proves the test matches what was written.
Tasks with NO file overlap execute in parallel (3-5 per group):
| Task Group | Tasks | Rationale |
|---|---|---|
| Group 1 | 1, 2, 3 | Independent modules, no file overlap |
| Group 2 | 4, 5 | Both touch shared types, must be serial |
| Group 3 | 6, 7, 8 | Independent tests, no file overlap |
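A rough sketch of that grouping rule, assuming each task declares the files it touches (task and file names are made up). A real grouper would also honor declared dependencies; this version only checks file overlap and the five-task cap:

```python
# Sketch: greedily add a task to the current parallel group unless it shares
# a file with the group or the group already holds five tasks.
def group_tasks(tasks: dict[str, set[str]], max_group: int = 5) -> list[list[str]]:
    groups: list[list[str]] = []
    current: list[str] = []
    group_files: set[str] = set()
    for name, files in tasks.items():
        if current and (files & group_files or len(current) >= max_group):
            groups.append(current)
            current, group_files = [], set()
        current.append(name)
        group_files |= files
    if current:
        groups.append(current)
    return groups

print(group_tasks({
    "task1": {"a.py"}, "task2": {"b.py"}, "task3": {"c.py"},
    "task4": {"types.py"}, "task5": {"types.py", "api.py"},
}))  # [['task1', 'task2', 'task3', 'task4'], ['task5']]
```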
Always include "Code Review" as the final task.
Use ExitPlanMode(launchSwarm: true, teammateCount: N) to spawn parallel teammates.
Calculate teammateCount:
| Independent Groups | teammateCount |
|---|---|
| 1-2 groups | 2 |
| 3-4 groups | 3-4 |
| 5+ groups | 5 (max) |
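The table collapses to a simple clamp, for example:

```python
def teammate_count(independent_groups: int) -> int:
    # 1-2 groups -> 2; 3-4 groups -> one teammate per group; 5+ -> cap at 5
    return max(2, min(independent_groups, 5))

print(teammate_count(1), teammate_count(4), teammate_count(8))  # 2 4 5
```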
Execution behavior:
State is managed by TodoWrite. Tasks are tracked as pending/in_progress/completed in the todo list.
The plan file in docs/plans/ is the source of truth for task definitions.
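Since the plan file is the source of truth, a resume can re-derive the task list from it. A minimal sketch, assuming the "### Task N: [Name]" header format shown elsewhere in this document:

```python
import re

def parse_tasks(plan_text: str) -> list[str]:
    # Recover task names from "### Task N: [Name]" headers.
    return re.findall(r"^### Task \d+: (.+)$", plan_text, flags=re.MULTILINE)

sample = "### Task 1: JWT middleware\n...\n### Task 2: Login endpoint\n"
print(parse_tasks(sample))  # ['JWT middleware', 'Login endpoint']
```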
If session ends unexpectedly:
Re-run /dev-workflow:execute-plan [plan-file] to resume; the orchestrator tracks progress per group.
After all tasks complete, the orchestrator must:
Code Review - Dispatch code-reviewer agent:
```
Task:
  subagent_type: dev-workflow:code-reviewer
  model: opus
  run_in_background: true
  description: "Review all changes"
  prompt: "Review changes from plan execution. git diff main..HEAD"
```
Wait for results:
```
TaskOutput:
  task_id: <code-reviewer-task-id>
  block: true
```
Process Feedback - Use Skill("dev-workflow:receiving-code-review")
Finish Branch - Use Skill("dev-workflow:finishing-a-development-branch")
These steps are MANDATORY after swarm execution.
When user has an existing plan file (from brainstorm, previous session, or external source):
1. Read the plan file
2. Verify it has the Task format: "### Task N: [Name]"
3. Check for TDD instructions in each task
4. Use AskUserQuestion to confirm execution:
```yaml
AskUserQuestion:
  header: "Execution"
  question: "How should I execute this plan?"
  multiSelect: false
  options:
    - label: "Sequential (Recommended)"
      description: "Execute tasks one by one with full TDD cycle"
    - label: "Parallel via swarm"
      description: "Enter plan mode, adapt plan, launch swarm"
    - label: "Review first"
      description: "Let me review and suggest improvements"
```
For each task in order:
1. Mark it in_progress
2. Execute it with the full TDD cycle
3. Mark it completed, then move to the next
After all tasks, run the post-completion actions: code review, then finish branch.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.