Split an oversized task into smaller subtasks with proper dependency management.
Install via the plugin marketplace:

```
/plugin marketplace add cowwoc/claude-code-dog
/plugin install dog@claude-code-dog
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Break down a task that is too large for a single context window into smaller, manageable subtasks. This is essential for DOG's proactive context management, allowing work to continue efficiently when a task exceeds safe context bounds.
```bash
TASK_DIR=".claude/dog/tasks/${MAJOR}.${MINOR}-${TASK_NAME}"

# Read the current PLAN.md
cat "${TASK_DIR}/PLAN.md"

# Read STATE.md for progress
cat "${TASK_DIR}/STATE.md"

# If a subagent exists, check its progress
if [ -d ".worktrees/${TASK}-sub-${UUID}" ]; then
  # Review the commits it has made
  cd ".worktrees/${TASK}-sub-${UUID}"
  git log --oneline origin/HEAD..HEAD
fi
```
Analyze PLAN.md for natural boundaries.

Good split points:
- Component boundaries (e.g. lexer, AST builder, semantic analysis)
- Layer boundaries (e.g. model, service, endpoints)
- Independently implementable and testable deliverables

Poor split points:
- Arbitrary positions inside a file or function
- Tightly coupled code that must change together
- Splits so fine-grained that a subtask has nothing verifiable to deliver
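To surface candidate boundaries quickly, list the component headings in the plan. A minimal sketch (it assumes the plan marks each component with a `## ` heading, as the PLAN.md example below does; the temporary demo file stands in for a real `.claude/dog/tasks/<task>/PLAN.md`):

```shell
# Sketch: list "## " headings in PLAN.md as candidate split boundaries.
# A temporary demo plan stands in for a real task's PLAN.md.
PLAN="$(mktemp)"
cat > "$PLAN" <<'EOF'
# Implement Parser
## Lexer
## AST builder
## Semantic analysis
EOF
grep -n '^## ' "$PLAN"   # each heading is a candidate subtask boundary
```

Each heading that can be implemented and tested on its own is a reasonable subtask; headings that share mutable state usually belong together in one subtask.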
```bash
# Original task: 1.2-implement-parser
# New tasks: 1.2a, 1.2b, 1.2c (or 1.2.1, 1.2.2, 1.2.3)

# Create directories for the new tasks
mkdir -p ".claude/dog/tasks/1.2a-parser-lexer"
mkdir -p ".claude/dog/tasks/1.2b-parser-ast"
mkdir -p ".claude/dog/tasks/1.2c-parser-semantic"
```
Each new task gets its own focused PLAN.md:
```markdown
# 1.2a-parser-lexer/PLAN.md
---
task: 1.2a-parser-lexer
parent: 1.2-implement-parser
sequence: 1 of 3
---

# Implement Parser Lexer

## Objective

Implement the lexical analysis phase of the parser.

## Scope

- Token definitions
- Lexer implementation
- Lexer unit tests

## Dependencies

- None (first in sequence)

## Deliverables

- src/parser/Token.java
- src/parser/Lexer.java
- test/parser/LexerTest.java
```
```yaml
# Dependency graph
dependencies:
  1.2a-parser-lexer: []      # No dependencies
  1.2b-parser-ast:
    - 1.2a-parser-lexer      # Depends on lexer
  1.2c-parser-semantic:
    - 1.2b-parser-ast        # Depends on AST
```
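Given such a graph, a valid execution order is a topological sort. One quick way to compute and check one (not a DOG command, just coreutils `tsort`; each input line is `<prerequisite> <dependent>`):

```shell
# Derive an execution order for the subtasks above with tsort.
order=$(printf '%s\n' \
  '1.2a-parser-lexer 1.2b-parser-ast' \
  '1.2b-parser-ast 1.2c-parser-semantic' | tsort)
printf '%s\n' "$order"
```

`tsort` also exits non-zero and reports the offending tasks if the graph accidentally contains a cycle, which makes it a cheap sanity check for hand-written dependency lists.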
Original task STATE.md:
```yaml
# 1.2-implement-parser/STATE.md
status: decomposed
decomposed_at: 2026-01-10T16:00:00Z
reason: "Task exceeded context threshold (85K tokens used)"
decomposed_into:
  - 1.2a-parser-lexer
  - 1.2b-parser-ast
  - 1.2c-parser-semantic
progress_preserved:
  - Lexer implementation 80% complete in subagent work
  - Will be merged to 1.2a branch
```
New task STATE.md:
```yaml
# 1.2a-parser-lexer/STATE.md
status: ready
created_from: 1.2-implement-parser
inherits_progress: true   # Will receive merge from parent subagent
dependencies: []
```
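These STATE.md files make readiness mechanical: a subtask may start once every task it depends on reports completion. A minimal sketch (the `deps_complete` helper and the `status: complete` value are illustrative assumptions, not built-in DOG behavior; the temporary directory stands in for `.claude/dog/tasks/`):

```shell
# Hypothetical readiness check: a task is ready when every dependency's
# STATE.md reports "status: complete".
TASKS="$(mktemp -d)"
mkdir -p "$TASKS/1.2a-parser-lexer" "$TASKS/1.2b-parser-ast"
echo 'status: complete' > "$TASKS/1.2a-parser-lexer/STATE.md"
echo 'status: ready'    > "$TASKS/1.2b-parser-ast/STATE.md"

deps_complete() {   # usage: deps_complete <dep> [<dep> ...]
  for dep in "$@"; do
    grep -q '^status: complete' "$TASKS/$dep/STATE.md" || return 1
  done
}

# 1.2b depends only on 1.2a, which is complete, so it may start
deps_complete 1.2a-parser-lexer && echo "1.2b-parser-ast can start"
```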
If decomposing due to subagent context limits:
```bash
# Collect partial results from the subagent
collect-results "${SUBAGENT_ID}"

# Determine which new task inherits the work
# (usually the first or most complete component)

# Merge subagent work into the appropriate new task branch
git checkout "1.2a-parser-lexer"
git merge "${SUBAGENT_BRANCH}" -m "Inherit partial progress from decomposed parent"

# Mark the original PLAN.md as decomposed
echo "---
status: DECOMPOSED
decomposed_into: [1.2a, 1.2b, 1.2c]
---" >> "${TASK_DIR}/PLAN.md"
```
When analyzing requirements reveals a task is too large:
```yaml
# Original task seemed manageable
task: 1.5-implement-authentication

# Analysis reveals the real scope
components:
  - User model and repository
  - Password hashing service
  - JWT token generation
  - Login/logout endpoints
  - Session management
  - Password reset flow
  - Email verification

# Too many components - decompose before starting
decompose_to:
  - 1.5a-auth-user-model
  - 1.5b-auth-password-service
  - 1.5c-auth-jwt-tokens
  - 1.5d-auth-endpoints
  - 1.5e-auth-sessions
  - 1.5f-auth-password-reset
  - 1.5g-auth-email-verify
```
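A `decompose_to` list like this can be turned into task skeletons mechanically. A sketch (the loop and the minimal STATE.md fields follow the examples in this document rather than a built-in DOG command; only the first three subtasks are shown, and the temporary root stands in for `.claude/dog/tasks/`):

```shell
# Sketch: create a directory and a minimal STATE.md for each new subtask.
ROOT="$(mktemp -d)"
PARENT="1.5-implement-authentication"
for task in 1.5a-auth-user-model 1.5b-auth-password-service \
            1.5c-auth-jwt-tokens; do
  mkdir -p "$ROOT/$task"
  printf 'status: ready\ncreated_from: %s\n' "$PARENT" \
    > "$ROOT/$task/STATE.md"
done
ls "$ROOT"   # one directory per subtask, each with a STATE.md
```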
When subagent hits context limits:
```yaml
decomposition_trigger:
  task: 1.3-implement-formatter
  subagent_tokens: 85000
  compaction_events: 1
  completed_work:
    - Basic formatter structure
    - Indentation handling
  remaining_work:
    - Line wrapping
    - Comment formatting
    - Multi-line string handling

decomposition_result:
  - task: 1.3a-formatter-core
    inherits: subagent work
    status: nearly_complete
  - task: 1.3b-formatter-wrapping
    status: ready
  - task: 1.3c-formatter-comments
    status: ready
```
When subagent is stuck or confused:
```yaml
emergency_decomposition:
  trigger: "Subagent making no progress for 30+ minutes"
  analysis: |
    Task scope unclear; subagent attempting multiple
    approaches without success.
  action:
    - Collect any usable partial work
    - Re-analyze requirements
    - Create smaller, more specific tasks
    - Add explicit acceptance criteria to each
```
```yaml
# ❌ Splitting at arbitrary points
1.2a: "Lines 1-100 of Parser.java"
1.2b: "Lines 101-200 of Parser.java"

# ✅ Split at logical boundaries
1.2a: "Lexer component"
1.2b: "AST builder component"
```
```yaml
# ❌ Treating all subtasks as independent
1.2a-parser-lexer: []
1.2b-parser-ast: []        # Actually needs lexer!
1.2c-parser-semantic: []   # Actually needs AST!

# ✅ Model actual dependencies
1.2a-parser-lexer: []
1.2b-parser-ast: [1.2a]
1.2c-parser-semantic: [1.2b]
```
```bash
# ❌ Starting fresh after decomposition
decompose_task "1.2-parser"
# Subagent work discarded!

# ✅ Preserve progress
collect_results "${SUBAGENT}"
decompose_task "1.2-parser"
merge_to_appropriate_subtask "${SUBAGENT_WORK}"
```
```yaml
# ❌ Too granular
1.2a: "Define Token class"
1.2b: "Define TokenType enum"
1.2c: "Implement nextToken method"
1.2d: "Implement peek method"
# ...20 more tiny tasks

# ✅ Meaningful chunks
1.2a: "Implement Lexer (tokens, types, core methods)"
1.2b: "Implement Parser (AST, expressions, statements)"
```
```bash
# ❌ Create subtasks, forget to track them
mkdir 1.2a 1.2b 1.2c
# Parent doesn't know about them!

# ✅ Full state update
create_subtasks "1.2a" "1.2b" "1.2c"
update_parent_state "decomposed" "1.2a,1.2b,1.2c"
update_orchestration_plan
```
Related skills:

- `dog:token-report` - Triggers decomposition decisions
- `dog:collect-results` - Preserves progress before decomposition
- `dog:spawn-subagent` - Launches work on decomposed tasks
- `dog:parallel-execute` - Can run independent subtasks concurrently