Systematic task execution with checkpoint validation, progress tracking, and quality gates
/plugin marketplace add athola/claude-night-market
/plugin install attune@claude-night-market

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Execute implementation plan systematically with checkpoints, validation, and progress tracking.
With superpowers:
- Skill(superpowers:executing-plans) for systematic execution
- Skill(superpowers:systematic-debugging) for issue resolution
- Skill(superpowers:verification-before-completion) for validation
- Skill(superpowers:test-driven-development) for TDD workflow

Without superpowers:
Actions:
Validation:
For each task in dependency order:
1. PRE-TASK
- Verify dependencies complete
- Review acceptance criteria
- Create feature branch (optional)
- Set up task context
2. IMPLEMENT (TDD Cycle)
- Write failing test (RED)
- Implement minimal code (GREEN)
- Refactor for quality (REFACTOR)
- Repeat until all criteria met
3. VALIDATE
- All tests passing?
- All acceptance criteria met?
- Code quality checks pass?
- Documentation updated?
4. CHECKPOINT
- Mark task complete
- Update execution state
- Report progress
- Identify blockers
Verification: Run pytest -v to verify tests pass.
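The per-task flow above can be expressed as a small loop. The sketch below is illustrative only and not the attune plugin's actual API: the `Task` fields and the `implement`, `validate`, and `checkpoint` callables are assumptions standing in for the TDD cycle, acceptance checks, and state updates described in this section.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    depends_on: list = field(default_factory=list)

def execute_plan(tasks, implement, validate, checkpoint):
    """Run tasks in dependency order; implement/validate/checkpoint are callables."""
    done, blocked = set(), {}
    pending = list(tasks)
    while pending:
        progressed = False
        for task in list(pending):
            if any(dep not in done for dep in task.depends_on):
                continue  # dependencies not yet complete, retry on a later pass
            implement(task)               # TDD cycle: RED -> GREEN -> REFACTOR
            if validate(task):            # tests and acceptance criteria
                done.add(task.id)
            else:
                blocked[task.id] = "validation failed"
            checkpoint(done, blocked)     # persist progress after each task
            pending.remove(task)
            progressed = True
        if not progressed:                # remaining tasks have unmet dependencies
            for task in pending:
                blocked[task.id] = "unmet dependency"
            break
    return done, blocked
```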
Actions:
RED Phase:
```python
# Write a test that fails (authenticate does not exist yet)
def test_user_authentication():
    user = authenticate("user@example.com", "password")
    assert user.is_authenticated

# Run test → FAILS (feature not implemented)
```
Verification: Run pytest -v and confirm the new test fails for the expected reason.
GREEN Phase:
```python
# Implement minimal code to pass
def authenticate(email, password):
    # Simplest implementation
    user = User.find_by_email(email)
    if user and user.check_password(password):
        user.is_authenticated = True
        return user
    return None

# Run test → PASSES
```
Verification: Run pytest -v to verify tests pass.
REFACTOR Phase:
```python
# Improve code quality
from typing import Optional

def authenticate(email: str, password: str) -> Optional[User]:
    """Authenticate user with email and password."""
    user = User.find_by_email(email)
    if user is None:
        return None
    if not user.check_password(password):
        return None
    user.mark_authenticated()
    return user

# Run test → STILL PASSES
```
Verification: Run pytest -v to verify tests pass.
Quality Gates:
- [ ] All acceptance criteria met
- [ ] All tests passing (unit + integration)
- [ ] Code linted (no warnings)
- [ ] Type checking passes (if applicable)
- [ ] Documentation updated
- [ ] No regression in other components
Verification: Run pytest -v to verify tests pass.
Automated Checks:
```bash
# Run quality gates
make lint        # Linting passes
make typecheck   # Type checking passes
make test        # All tests pass
make coverage    # Coverage threshold met
```
Verification: Run pytest -v to verify tests pass.
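When the gates need to run as a single fail-fast step (for example before a checkpoint), a small wrapper can shell out to the targets above. This is a minimal sketch; it assumes the listed make targets exist in the project's Makefile.

```python
import subprocess
import sys

# Quality gates, in the order they should fail fast
GATES = ["make lint", "make typecheck", "make test", "make coverage"]

def run_quality_gates() -> None:
    for gate in GATES:
        print(f"Running: {gate}")
        result = subprocess.run(gate.split(), capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stdout)
            print(result.stderr, file=sys.stderr)
            sys.exit(f"Quality gate failed: {gate}")
    print("All quality gates passed.")

if __name__ == "__main__":
    run_quality_gates()
```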
Save to .attune/execution-state.json:
```json
{
  "plan_file": "docs/implementation-plan.md",
  "started_at": "2026-01-02T10:00:00Z",
  "last_checkpoint": "2026-01-02T14:30:22Z",
  "current_sprint": "Sprint 1",
  "current_phase": "Phase 1",
  "tasks": {
    "TASK-001": {
      "status": "complete",
      "started_at": "2026-01-02T10:05:00Z",
      "completed_at": "2026-01-02T10:50:00Z",
      "duration_minutes": 45,
      "acceptance_criteria_met": true,
      "tests_passing": true
    },
    "TASK-002": {
      "status": "in_progress",
      "started_at": "2026-01-02T14:00:00Z",
      "progress_percent": 60,
      "blocker": null
    }
  },
  "metrics": {
    "tasks_complete": 15,
    "tasks_total": 40,
    "completion_percent": 37.5,
    "velocity_tasks_per_day": 3.2,
    "estimated_completion_date": "2026-02-15"
  },
  "blockers": []
}
```
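A checkpoint then amounts to updating a task entry and recomputing the rollup metrics. The sketch below follows the schema of the example state file above; the path and field names are taken from that example and are otherwise assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_PATH = Path(".attune/execution-state.json")

def checkpoint_task(task_id: str, status: str = "complete") -> dict:
    """Mark a task in the state file and refresh the rollup metrics."""
    state = json.loads(STATE_PATH.read_text())
    now = datetime.now(timezone.utc).isoformat()

    task = state["tasks"].setdefault(task_id, {})
    task["status"] = status
    task["completed_at"] = now

    # Recompute rollup metrics from task statuses
    complete = sum(1 for t in state["tasks"].values() if t.get("status") == "complete")
    state["metrics"]["tasks_complete"] = complete
    state["metrics"]["completion_percent"] = round(
        100 * complete / state["metrics"]["tasks_total"], 1
    )
    state["last_checkpoint"] = now

    STATE_PATH.write_text(json.dumps(state, indent=2))
    return state
```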
Daily Standup:
```markdown
# Daily Standup - [Date]

## Yesterday
- ✅ [Task] ([duration])
- ✅ [Task] ([duration])

## Today
- 🔄 [Task] ([progress]%)
- 📋 [Task] (planned)

## Blockers
- [Blocker] or None

## Metrics
- Sprint progress: [X/Y] tasks ([%]%)
- [Status message]
```
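The standup can be generated directly from the execution state rather than written by hand. The sketch below reads the state file shown earlier; field names follow that example, and listing all completed tasks under "Yesterday" is a simplification.

```python
import json
from pathlib import Path

def daily_standup(state_path: str = ".attune/execution-state.json") -> str:
    """Render the standup template from the current execution state."""
    state = json.loads(Path(state_path).read_text())
    tasks = state["tasks"]
    done = [tid for tid, t in tasks.items() if t.get("status") == "complete"]
    active = [(tid, t.get("progress_percent", 0))
              for tid, t in tasks.items() if t.get("status") == "in_progress"]
    metrics = state["metrics"]

    lines = ["## Yesterday"]                     # simplification: all completed tasks
    lines += [f"- ✅ {tid}" for tid in done] or ["- None"]
    lines += ["## Today"]
    lines += [f"- 🔄 {tid} ({pct}%)" for tid, pct in active] or ["- None"]
    lines += ["## Blockers"]
    lines += [f"- {b}" for b in state["blockers"]] or ["- None"]
    lines += ["## Metrics",
              f"- Sprint progress: {metrics['tasks_complete']}/{metrics['tasks_total']} "
              f"tasks ({metrics['completion_percent']}%)"]
    return "\n".join(lines)
```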
Sprint Report:
```markdown
# Sprint [N] Progress Report

**Dates**: [Start] - [End]
**Goal**: [Sprint objective]

## Completed ([X] tasks)
- [Task list]

## In Progress ([Y] tasks)
- [Task] ([progress]%)

## Blocked ([Z] tasks)
- [Task]: [Blocker description]

## Burndown
- Day 1: [N] tasks remaining
- Day 5: [M] tasks remaining ([status])
- Estimated completion: [Date] ([delta])

## Risks
- [Risk] or None identified
```
Common Blockers:
When blocked, apply debugging framework:
When to escalate:
Escalation format:
```markdown
## Blocker: [TASK-XXX] - [Issue]

**Symptom**: [What's happening]
**Impact**: [Which tasks/timeline affected]

**Attempted Solutions**:
1. [Solution 1] - [Result]
2. [Solution 2] - [Result]

**Recommendation**: [Proposed path forward]
**Decision Needed**: [What needs to be decided]
```
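To keep blockers machine-readable alongside the escalation write-up, they can also be appended to the state file's `blockers` list so they surface in standups. The entry fields below are assumptions for illustration, not a fixed schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_blocker(task_id: str, issue: str, impact: str,
                   state_path: str = ".attune/execution-state.json") -> None:
    """Append a blocker entry to the execution state file."""
    path = Path(state_path)
    state = json.loads(path.read_text())
    state["blockers"].append({
        "task_id": task_id,
        "issue": issue,
        "impact": impact,
        "raised_at": datetime.now(timezone.utc).isoformat(),
    })
    path.write_text(json.dumps(state, indent=2))
```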
Task is complete when:
Test Pyramid:
```
        /\
       /E2E\          Few, slow, expensive
      /------\
     /  INT   \       Some, moderate speed
    /----------\
   /    UNIT    \     Many, fast, cheap
```
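One common convention for expressing the pyramid in pytest (not mandated by this skill) is to tag non-unit tests with markers and select layers at run time. The snippet below is an illustrative sketch with trivial functions under test; the marker names are assumptions.

```python
import pytest

def add(a, b):
    return a + b

def test_add_unit():                 # UNIT: many, fast, no external dependencies
    assert add(2, 3) == 5

@pytest.mark.integration
def test_add_integration(tmp_path):  # INT: some, touch the filesystem or a database
    result_file = tmp_path / "result.txt"
    result_file.write_text(str(add(2, 3)))
    assert result_file.read_text() == "5"

@pytest.mark.e2e
def test_add_e2e():                  # E2E: few, exercise the full workflow
    assert add(add(1, 1), 3) == 5

# Select a layer when running the suite, for example:
#   pytest -m "not integration and not e2e"   # unit tests only
#   pytest -m integration
```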
Per Task:
Track daily:
Formulas:
```
Velocity = tasks completed / days elapsed
Days remaining = tasks remaining / velocity
Estimated completion date = today + days remaining
On track? = estimated completion date <= sprint end date
```
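The same formulas in code, as a minimal sketch: calendar days and working days are not distinguished, and the function names are for illustration only.

```python
from datetime import date, timedelta
from typing import Optional

def projection(tasks_complete: int, tasks_total: int, days_elapsed: int,
               sprint_end: date, today: Optional[date] = None) -> dict:
    """Apply the velocity formulas to produce a simple schedule projection."""
    today = today or date.today()
    velocity = tasks_complete / days_elapsed            # tasks per day
    remaining = tasks_total - tasks_complete
    if velocity == 0:
        return {"velocity_tasks_per_day": 0.0,
                "estimated_completion_date": None,
                "on_track": False}
    estimated = today + timedelta(days=round(remaining / velocity))
    return {"velocity_tasks_per_day": round(velocity, 1),
            "estimated_completion_date": estimated.isoformat(),
            "on_track": estimated <= sprint_end}

# Example: 15 of 40 tasks complete after 5 days, sprint ends 2026-02-15
# projection(15, 40, 5, date(2026, 2, 15))
```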
If ahead of schedule:
If behind schedule:
- Skill(superpowers:executing-plans) - Execution framework (if available)
- Skill(superpowers:systematic-debugging) - Debugging (if available)
- Skill(superpowers:test-driven-development) - TDD (if available)
- Skill(superpowers:verification-before-completion) - Validation (if available)
- Agent(attune:project-implementer) - Task execution agent
- /attune:execute - Invoke this skill
- /attune:execute --task [ID] - Execute a specific task
- /attune:execute --resume - Resume from checkpoint

See the /attune:execute command documentation for complete examples.
- **Command not found**: Ensure all dependencies are installed and in PATH.
- **Permission errors**: Check file permissions and run with appropriate privileges.
- **Unexpected behavior**: Enable verbose logging with the --verbose flag.