# Wave Execution Skill
**Purpose:** Context-efficient parallel execution with fresh agent contexts.
Implements 4 GSD patterns:
- Context rot prevention (orchestrator ~15%, agents 100% fresh)
- Wave-based parallelism (automatic dependency grouping)
- Structured state (STATE.md <100 lines)
- Atomic commits (1 task = 1 commit)
## Quick Start

```
/wave:init           # Initialize STATE.md for current project
/wave:plan <tasks>   # Analyze tasks, group into waves
/wave:execute        # Execute waves with parallel agents
/wave:status         # Show current state
```
## 1. Context Rot Prevention

### The Problem

As context fills, Claude's output quality degrades. Long sessions = worse code.
### The Solution

```
Orchestrator: ~15% context (~30K tokens on a 200K model)
└── Spawns fresh agents per task
    └── Each agent: 100% fresh 200K context
```
### Orchestrator Rules

```
LOAD ONLY:
- STATE.md (current position, <100 lines)
- Task definitions (what to execute)
- Dependency graph (what order)

NEVER LOAD:
- Full codebase
- Historical context
- Previous execution details

DELEGATE EVERYTHING:
- Code reading → subagent
- Implementation → subagent
- Testing → subagent
- Each gets fresh context
```
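As a rough illustration of the ~15% budget arithmetic above (a hypothetical helper, not part of the skill; the 4-characters-per-token ratio is a common heuristic, not an exact tokenizer):

```javascript
// Illustrative only: crude token estimate for the orchestrator budget.
// ASSUMPTION: ~4 chars per token, a rough heuristic for English text.
const MODEL_CONTEXT = 200_000; // tokens on a 200K model
const ORCHESTRATOR_BUDGET = Math.floor(MODEL_CONTEXT * 0.15); // ≈30K tokens

function roughTokens(text) {
  return Math.ceil(text.length / 4);
}

// Check whether the texts the orchestrator plans to load fit the budget.
function withinOrchestratorBudget(loadedTexts) {
  const total = loadedTexts.reduce((sum, t) => sum + roughTokens(t), 0);
  return total <= ORCHESTRATOR_BUDGET;
}
```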
### Agent Spawn Pattern

```js
// Orchestrator spawns with MINIMAL context
Task({
  prompt: `
    <objective>Execute: ${task.name}</objective>
    <context>
    @STATE.md
    @${task.planFile}
    </context>
    <success_criteria>${task.criteria}</success_criteria>
    <on_complete>
    1. Commit changes: git commit -m "${task.commitMessage}"
    2. Return: TASK COMPLETE + files changed
    </on_complete>
  `,
  subagent_type: task.agentType || "senior-fullstack-developer"
})
```
## 2. Wave-Based Parallelism

### Concept

Group independent tasks into waves. Execute each wave in parallel; start the next wave only after the current one completes.

```
Wave 1: [Task A, Task B, Task C]   ← No dependencies, run parallel
        ↓ wait for all
Wave 2: [Task D, Task E]           ← Depend on Wave 1, run parallel
        ↓ wait for all
Wave 3: [Task F]                   ← Depends on D and E
```
### Dependency Analysis

```yaml
tasks:
  - id: setup-db
    wave: 1
    depends: []
  - id: create-api
    wave: 1
    depends: []
  - id: add-auth
    wave: 2
    depends: [setup-db, create-api]
  - id: write-tests
    wave: 3
    depends: [add-auth]
```
Parallel Execution Pattern
// Execute all Wave 1 tasks in SINGLE message with multiple Task calls
// This runs them truly parallel
for (const wave of waves) {
// Spawn ALL tasks in wave simultaneously
const tasks = wave.tasks.map(task =>
Task({
prompt: fillTemplate(task),
subagent_type: selectAgent(task)
})
);
// All tasks in same message = parallel execution
// Wait for all to complete before next wave
}
### Wave Assignment Rules
| Condition | Wave |
|---|---|
| No dependencies | Wave 1 |
| Depends on Wave N tasks | Wave N+1 |
| Depends on multiple waves | Max(dependency waves) + 1 |
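The assignment rules above can be sketched as a small helper (hypothetical; the skill's actual planner is not shown): each task's wave is the maximum wave of its dependencies plus one, with dependency-free tasks in Wave 1.

```javascript
// Hypothetical sketch of the wave-assignment rules: wave(task) =
// max(wave of each dependency) + 1, or 1 if it has no dependencies.
function assignWaves(tasks) {
  const waveOf = new Map();
  const byId = new Map(tasks.map(t => [t.id, t]));

  function wave(id) {
    if (waveOf.has(id)) return waveOf.get(id);
    const task = byId.get(id);
    const w = task.depends.length === 0
      ? 1
      : Math.max(...task.depends.map(wave)) + 1;
    waveOf.set(id, w);
    return w;
  }

  tasks.forEach(t => wave(t.id));
  return waveOf; // Map of task id → wave number
}
```

Applied to the YAML example above, this reproduces the listed wave numbers.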
## 3. Structured State (STATE.md)

### Purpose

Session continuity without context bloat. Always <100 lines.
### Template

```markdown
# Project State

## Quick Reference
**Project:** [name]
**Core goal:** [one line]
**Current:** Phase [X], Task [Y]
**Last:** [YYYY-MM-DD] - [what happened]

## Position
- Phase: [X] of [N] - [phase name]
- Task: [Y] of [M] - [task name]
- Status: [planning|executing|blocked|complete]
- Progress: [████░░░░░░] 40%

## Active Context
**Current wave:** [N]
**In progress:** [task names]
**Blocked by:** [none | blocker description]

## Decisions (recent 5)
| Date | Decision | Rationale |
|------|----------|-----------|
| ... | ... | ... |

## Deferred
- [ ] [ISS-001] [description] (from phase X)

## Resume
**Last session:** [timestamp]
**Stopped at:** [description]
**Continue with:** [next action]
```
### Size Rules
| Section | Max Lines |
|---|---|
| Quick Reference | 5 |
| Position | 6 |
| Active Context | 5 |
| Decisions | 10 (keep recent 5) |
| Deferred | 10 (reference ISSUES.md if more) |
| Resume | 5 |
| TOTAL | <50 lines ideal, <100 max |
### Update Triggers
| Event | Update |
|---|---|
| Task complete | Position, Progress |
| Decision made | Add to Decisions (trim old) |
| Issue found | Add to Deferred |
| Session end | Resume section |
| Phase complete | Clear completed, update Position |
## 4. Atomic Commits

### Rule

1 Task = 1 Commit. No exceptions.
### Commit Format

```
<type>(<scope>): <description>

- [x] <task that was completed>

Files: <list of changed files>
```
### Types
| Type | When |
|---|---|
| feat | New functionality |
| fix | Bug fix |
| refactor | Code restructure |
| test | Add/update tests |
| docs | Documentation |
| chore | Maintenance |
### Examples

```bash
# Good - atomic
git commit -m "feat(auth): add JWT token validation

- [x] Implement token verification middleware

Files: src/middleware/auth.ts, src/utils/jwt.ts"

# Bad - multiple tasks
git commit -m "add auth and fix bugs and update tests"
```
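A hedged sketch of how the commit header could be validated before committing (illustrative, not part of the skill; the type list mirrors the Types table above):

```javascript
// Hypothetical validator for the "<type>(<scope>): <description>" header.
// Types match the table above: feat, fix, refactor, test, docs, chore.
const COMMIT_HEADER = /^(feat|fix|refactor|test|docs|chore)\([\w-]+\): .+/;

function isAtomicHeader(commitMessage) {
  return COMMIT_HEADER.test(commitMessage.split("\n")[0]);
}
```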
### Agent Commit Instructions

Include in every agent prompt:

```
<on_complete>
After completing the task:
1. Stage only files for THIS task
2. Commit with format: <type>(<scope>): <task description>
3. Include task checkbox and file list in commit body
4. Return commit hash in completion message
</on_complete>
```
## Integration with Existing Setup

### With Decision Logs (GitHub Issues)

```
STATE.md     = Fast session state (local, <100 lines)
Decision Log = Permanent record (GitHub, detailed)
```

Flow:
1. Session start → Read STATE.md (instant context)
2. Session work → Update STATE.md after each task
3. Session end → Sync important decisions to GitHub log
With agent-context-config.json
{
"wave-orchestrator": {
"defaultContext": "32K", // Stays lean
"maxAutoEscalation": "64K", // Never needs more
"specialRules": {
"orchestration": "32K" // Force minimal
}
}
}
### With Specialized Agents

Wave execution uses YOUR agent types:
- `senior-fullstack-developer` for implementation
- `qa-testing-engineer` for tests
- `backend-specialist` for API work
- etc.

NOT generic "general-purpose" like GSD.
## Commands

### /wave:init

Initialize STATE.md for the current project.

### /wave:plan

```
/wave:plan "
1. Setup database schema
2. Create API endpoints
3. Add authentication (needs 1,2)
4. Write integration tests (needs 3)
"
```

Output: wave assignment + dependency graph.

### /wave:execute

Execute planned waves with parallel agents.

### /wave:status

Show STATE.md content + execution progress.
Example Workflow
# 1. Initialize state
/wave:init
# 2. Plan tasks with dependencies
/wave:plan "
- Create user table [db]
- Create product table [db]
- User API endpoints [api, needs: user table]
- Product API endpoints [api, needs: product table]
- Auth middleware [auth, needs: user api]
- Integration tests [test, needs: auth]
"
# Output:
# Wave 1: [user-table, product-table] ← parallel
# Wave 2: [user-api, product-api] ← parallel
# Wave 3: [auth-middleware]
# Wave 4: [integration-tests]
# 3. Execute
/wave:execute
# Orchestrator spawns Wave 1 tasks parallel
# Waits for completion
# Spawns Wave 2 tasks parallel
# ... continues until done
## 5. Two-Stage Review (NEW)
Every task completion goes through TWO review stages before proceeding.
### Stage 1: Spec Compliance Review
Purpose: Catch over-building and spec misses.
Questions:
- Does the code do EXACTLY what was specified?
- Is there code that wasn't asked for?
- Are there missing requirements?
Reviewer prompt:

```
You are a SKEPTICAL spec compliance reviewer.

Compare the implementation against the original task spec:
- Task: {task_description}
- Files changed: {files}

Check for:
1. MISSING: Required functionality not implemented
2. EXTRA: Code added that wasn't requested (YAGNI violation)
3. WRONG: Implementation doesn't match spec

Be STRICT. If anything is off, report it.

Respond with:
- PASS: Spec fully met, nothing extra
- FAIL: [List specific issues]
```
### Stage 2: Code Quality Review

Purpose: Ensure code is maintainable and tested.

Only runs if Stage 1 passes.

Reviewer prompt:

```
You are a SENIOR code quality reviewer.

Review for:
1. CODE CLARITY: Readable? Good naming? Comments where needed?
2. TEST COVERAGE: Are there tests? Do they test behavior, not implementation?
3. ERROR HANDLING: Graceful failures? No silent errors?
4. PERFORMANCE: Any obvious bottlenecks?
5. SECURITY: Input validation? No hardcoded secrets?

Respond with:
- PASS: Code is production-ready
- FAIL: [List specific issues with severity]
```
### Review Loop

```
Task Complete
     ↓
Stage 1: Spec Compliance
     ↓
FAIL? → Return to implementer → Fix → Re-review
     ↓
   PASS
     ↓
Stage 2: Code Quality
     ↓
FAIL? → Return to implementer → Fix → Re-review
     ↓
   PASS
     ↓
Commit & Proceed to Next Task
```
Implementation in Wave Execution
// After task agent returns
const specReview = await Task({
prompt: `Review for spec compliance: ${task.spec} vs ${result.changes}`,
subagent_type: "qa-testing-engineer"
});
if (specReview.verdict !== 'PASS') {
// Send back to implementer with feedback
return retryTask(task, specReview.feedback);
}
const qualityReview = await Task({
prompt: `Review code quality: ${result.changes}`,
subagent_type: "qa-testing-engineer"
});
if (qualityReview.verdict !== 'PASS') {
// Send back to implementer with feedback
return retryTask(task, qualityReview.feedback);
}
// Both passed - commit and continue
await commitTask(task, result);
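The `retryTask()` call above is not defined in the skill; a minimal sketch of what it could look like (hypothetical, with a retry cap so a persistently failing task cannot loop forever; `runTask` stands in for the real implement-then-review round trip):

```javascript
// Hypothetical retryTask(): re-run the implementer with reviewer feedback,
// up to maxAttempts times. `runTask(task, feedback)` is an injected stand-in
// for the actual Task() spawn + review cycle.
async function retryTask(task, feedback, runTask, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runTask(task, feedback);
    if (result.verdict === "PASS") return result;
    feedback = result.feedback; // carry reviewer feedback into the next try
  }
  throw new Error(`${task.name}: review still failing after ${maxAttempts} attempts`);
}
```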
### Why Two Stages?

Stage 1 catches:
- Feature creep
- Missing requirements
- Wrong implementations
- Scope changes

Stage 2 catches:
- Poor code quality
- Missing tests
- Security issues
- Performance problems

A single reviewer misses things. Two specialized reviewers catch more.
## 6. Quality Gates Summary
| Gate | When | What It Checks |
|---|---|---|
| TDD | Before implementation | Test exists and fails |
| Spec Review | After implementation | Matches requirements exactly |
| Quality Review | After spec approval | Code is production-ready |
| Commit | After both reviews | Atomic, conventional format |
All gates must pass. No shortcuts.
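The gate table above can be read as a short-circuiting pipeline; a hedged sketch (names hypothetical, not the skill's actual implementation):

```javascript
// Hypothetical gate pipeline: run each gate in order; the first failure
// stops the pipeline and reports which gate failed.
async function runGates(gates, task) {
  for (const [name, gate] of gates) {
    if (!(await gate(task))) return { passed: false, failedAt: name };
  }
  return { passed: true };
}
```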