From wrangler
Orchestrates specification implementation through planning, execution, verification, and PR publication phases with session recovery. Use when implementing specifications requiring phased workflow and resumable progress tracking.
npx claudepluginhub bacchus-labs/wrangler --plugin wrangler

This skill uses the workspace's default tool permissions.
Orchestrates specification implementation by invoking existing wrangler skills (writing-plans, implementing-issue) rather than reimplementing their logic. Focuses on workflow coordination, subagent dispatch, and compliance verification.
Files:

- README.md
- __tests__/scripts.bats
- examples/complex-feature.md
- examples/recovery.md
- examples/simple-feature.md
- package-lock.json
- package.json
- references/detailed-guide.md
- scripts/generate-pr-description.sh
- scripts/update-pr-description.sh
- scripts/utils/github.sh
- templates/complete.md
- templates/execution.md
- templates/planning.md
- templates/verification.md
Requires:

- .wrangler/specifications/
- gh CLI installed and authenticated
- jq command-line JSON processor

This skill orchestrates existing wrangler skills rather than reimplementing their logic:
- Phase 1 (INIT): Uses session_start MCP tool (session tools)
- Phase 2 (PLAN): Invokes writing-plans skill to create MCP issues
- Phase 3 (REVIEW): Validates planning coverage before execution
- Phase 4 (EXECUTE): Dispatches subagents per issue following the implementing-issue skill
- Phase 5 (VERIFY): LLM-based compliance audit
- Phase 6 (PUBLISH): GitHub PR finalization
Benefits of this approach:
INIT → PLAN → REVIEW → EXECUTE → VERIFY → PUBLISH → COMPLETE
Initialize session, create worktree, and establish context.
Create isolated worktree using MCP session tools and verify environment.
Start session - Call session_start MCP tool
session_start(specFile: "{SPEC_FILE}")
Capture response:
SESSION_ID = response.sessionId
WORKTREE_ABSOLUTE = response.worktreePath
BRANCH_NAME = response.branchName
AUDIT_PATH = response.auditPath

Verify worktree - Ensure on correct branch
cd {WORKTREE_ABSOLUTE} && \
echo "=== WORKTREE VERIFICATION ===" && \
echo "Directory: $(pwd)" && \
echo "Git root: $(git rev-parse --show-toplevel)" && \
echo "Branch: $(git branch --show-current)" && \
test "$(git branch --show-current)" = "{BRANCH_NAME}" && echo "VERIFIED" || echo "FAILED"
If verification fails, STOP and report error.
Log phase complete
session_phase(
sessionId: SESSION_ID,
phase: "init",
status: "complete"
)
Worktree must exist and be on correct branch.
Create implementation plan using writing-plans skill.
Break specification into implementable tasks tracked as MCP issues.
Log phase start
session_phase(sessionId: SESSION_ID, phase: "plan", status: "started")
Invoke writing-plans skill on worktree
Use Task tool to dispatch planning subagent:
Tool: Task
Description: "Create implementation plan for {SPEC_FILE}"
Prompt: |
You are creating an implementation plan for a specification.
## Working Directory Context
**Working directory:** {WORKTREE_ABSOLUTE}
**Branch:** {BRANCH_NAME}
**Session ID:** {SESSION_ID}
## Specification
Read and analyze: {SPEC_FILE}
## Your Job
1. Read the specification file completely
2. Invoke the writing-plans skill to break it down into tasks
3. The writing-plans skill will:
- Create MCP issues for each task (issues_create)
- Optionally create plan file if architecture context needed
- Return list of issue IDs created
4. Return to me:
- Total task count
- List of issue IDs created (e.g., ["ISS-000042", "ISS-000043"])
- Any blockers or clarification needed
IMPORTANT: Let writing-plans handle all planning logic. Your job
is to invoke it and report back the results.
Capture issue IDs
Parse planning subagent response for:
ISSUE_IDS = list of created issue IDs
TASK_COUNT = number of tasks created

Create GitHub PR with overview (not full plan)
cd "{WORKTREE_ABSOLUTE}" && \
gh pr create \
--title "feat: {SPEC_TITLE}" \
--body "Implements {SPEC_FILE}. See .wrangler/issues/ for task details." \
--draft \
--base main \
--head "{BRANCH_NAME}"
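gh pr create prints the new PR's URL on stdout, and the PR number can be derived from that URL. A minimal sketch, using an illustrative URL rather than real command output:

```shell
# Illustrative value standing in for the URL printed by `gh pr create`
PR_URL="https://github.com/bacchus-labs/example/pull/123"

# The PR number is the last path segment of the URL
PR_NUMBER="${PR_URL##*/}"
echo "PR_NUMBER=${PR_NUMBER}"
```

Alternatively, `gh pr view --json number,url` can recover both values after creation.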
Capture:
PR_URL = PR URL from output
PR_NUMBER = PR number from output

Log phase complete
session_phase(
sessionId: SESSION_ID,
phase: "plan",
status: "complete",
metadata: {
issues_created: ISSUE_IDS,
total_tasks: TASK_COUNT,
pr_url: PR_URL,
pr_number: PR_NUMBER
}
)
Save checkpoint
session_checkpoint(
sessionId: SESSION_ID,
tasksCompleted: [],
tasksPending: ISSUE_IDS,
lastAction: "Created implementation plan with {TASK_COUNT} tasks",
resumeInstructions: "Continue with execute phase, issues: {ISSUE_IDS}"
)
Planning succeeds and issues are created. If planning fails or returns blockers, ESCALATE to user.
Why GitHub PR shows overview, not full plan:
Validate planning completeness BEFORE execution starts.
Catch planning gaps early before wasting time on execution.
Log phase start
session_phase(sessionId: SESSION_ID, phase: "review", status: "started")
Read plan file coverage analysis (if plan file was created)
Read .wrangler/plans/YYYY-MM-DD-PLAN_<spec>.md and extract:
Validate coverage
| Coverage | Status |
|---|---|
| >= 95% | ✅ PASS (automatic approval) |
| < 95% | ⚠️ NEEDS ATTENTION (user decision) |
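Assuming issues can be exported as JSON carrying a satisfiesAcceptanceCriteria field, the uncovered-AC check might be sketched with jq as below. The data shapes here are illustrative, not the actual MCP tool output:

```shell
# Illustrative data shapes -- not the real MCP issue export format
acs='["AC-001","AC-002","AC-003"]'
issues='[{"id":"ISS-000042","satisfiesAcceptanceCriteria":["AC-001","AC-002"]}]'

# Collect every AC covered by at least one issue
covered=$(echo "$issues" | jq -c '[.[].satisfiesAcceptanceCriteria[]] | unique')

# jq array subtraction yields the ACs no issue satisfies
missing=$(jq -nc --argjson acs "$acs" --argjson cov "$covered" '$acs - $cov')
echo "missing=$missing"
```

An empty `missing` array corresponds to 100% coverage; a non-empty one feeds the escalation report below.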
If coverage < 95%:
Present coverage report to user:
REVIEW Phase: Planning Coverage Gap Detected
Coverage: X% (Y/Z acceptance criteria covered)
Missing AC:
- AC-XXX: [description]
- AC-YYY: [description]
...
Options:
a) Auto-create missing tasks (Recommended)
b) Proceed anyway (risks VERIFY failure)
c) Abort and replan from scratch
Your decision?
If auto-create chosen:
For each uncovered AC:
satisfiesAcceptanceCriteria: ["AC-XXX"] metadata

Update plan file with new tasks and re-calculate coverage.
Update session checkpoint
session_checkpoint(
sessionId: SESSION_ID,
tasksCompleted: [],
tasksPending: ISSUE_IDS, // updated list
lastAction: "REVIEW phase complete, coverage: X%",
resumeInstructions: "Continue with execute phase"
)
Log phase complete
session_phase(
sessionId: SESSION_ID,
phase: "review",
status: "complete",
metadata: {
coverage_percentage: X,
supplemental_tasks_created: N
}
)
Advisory gate - offers to fix gaps, doesn't block arbitrarily:
If spec has no explicit acceptance criteria section:
Implement all tasks by dispatching subagents for each issue.
Execute all implementation tasks with TDD and code review, coordinated directly by this orchestrator.
Log phase start
session_phase(sessionId: SESSION_ID, phase: "execute", status: "started")
Execution loop - For each issue in ISSUE_IDS:
a) Mark issue in progress
issues_update(id: ISSUE_ID, status: "in_progress")
b) Dispatch implementation subagent
Use Task tool to dispatch a subagent that follows the implementing-issue skill:
Tool: Task
Description: "Implement {ISSUE_ID}: {issue_title}"
Prompt: |
You are implementing a single issue.
## Working Directory Context
**Working directory:** {WORKTREE_ABSOLUTE}
**Branch:** {BRANCH_NAME}
### MANDATORY: Verify Location First
Before any work, run:
```bash
cd {WORKTREE_ABSOLUTE} && \
echo "Directory: $(pwd)" && \
echo "Branch: $(git branch --show-current)" && \
test "$(pwd)" = "{WORKTREE_ABSOLUTE}" && echo "VERIFIED" || echo "FAILED - STOP"
```
ALL bash commands MUST use: cd {WORKTREE_ABSOLUTE} && [command]
{ISSUE_ID}: {issue_title}
Read the full issue with: issues_get(id: "{ISSUE_ID}")
Follow the implementing-issue skill:
If requirements are unclear, STOP and report the blocker.
**c) Verify subagent response**
Check for:
- Location verification passed (VERIFIED)
- Implementation summary provided
- All tests passing
- TDD Compliance Certification
- Work committed
**d) Handle blockers**
If subagent reports a blocker:
- Unclear requirements: ESCALATE to user immediately
- Failed after 2 fix attempts: ESCALATE to user
- Git conflicts: ESCALATE to user
- Missing dependencies: Try to resolve, escalate if cannot
**e) Save checkpoint**
session_checkpoint(
sessionId: SESSION_ID,
tasksCompleted: [...completed_ids],
tasksPending: [...remaining_ids],
lastAction: "Completed {ISSUE_ID}: {issue_title}",
resumeInstructions: "Continue with next issue or proceed to VERIFY if all done"
)
**f) Mark issue complete**
issues_mark_complete(id: ISSUE_ID)
Log phase complete
session_phase(
sessionId: SESSION_ID,
phase: "execute",
status: "complete",
metadata: {
tasks_completed: TASK_COUNT,
total_commits: N
}
)
All issues complete. If any issue cannot be completed, escalate to user with blocker details and pause session.
Run compliance audit using LLM-based verification.
Verify all acceptance criteria met using intelligent extraction (not brittle scripts).
Log phase start
session_phase(sessionId: SESSION_ID, phase: "verify", status: "started")
Extract acceptance criteria from spec (LLM-based)
Read specification and use LLM to extract:
Why LLM not script:
Verify each criterion has evidence
For each acceptance criterion:
Run fresh test suite
cd "{WORKTREE_ABSOLUTE}" && npm test 2>&1 | tee ".wrangler/sessions/{SESSION_ID}/final-test-output.txt"
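One subtlety in this step: when piping through tee, a plain `$?` reports tee's exit status, not the test runner's, so `set -o pipefail` (or `PIPESTATUS`) is needed to capture the real exit code. A sketch using `false` as a stand-in for a failing `npm test`:

```shell
set -o pipefail

# `false` stands in for a failing `npm test` invocation
false 2>&1 | tee /tmp/final-test-output.txt

# With pipefail set, $? is the test command's exit code, not tee's
TEST_EXIT_CODE=$?
echo "TEST_EXIT_CODE=${TEST_EXIT_CODE}"
```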
Capture:
TEST_EXIT_CODE = exit code
TESTS_TOTAL = total test count
TESTS_PASSED = passing test count

Check git status
cd "{WORKTREE_ABSOLUTE}" && git status --short
Capture:
GIT_CLEAN = true if output is empty

Calculate compliance percentage
COMPLIANCE = (criteria_met / total_criteria) * 100
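In shell, integer arithmetic is enough for the gate comparisons that follow. The counts here are illustrative:

```shell
criteria_met=19     # illustrative count of criteria with evidence
total_criteria=20   # illustrative total

# Integer percentage; fractional precision is not needed for the gate
COMPLIANCE=$(( criteria_met * 100 / total_criteria ))
echo "COMPLIANCE=${COMPLIANCE}"
```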
Log phase complete
session_phase(
sessionId: SESSION_ID,
phase: "verify",
status: "complete",
metadata: {
compliance_percentage: COMPLIANCE,
tests_exit_code: TEST_EXIT_CODE,
tests_total: TESTS_TOTAL,
tests_passed: TESTS_PASSED,
git_clean: GIT_CLEAN
}
)
Goal: Achieve >= 95% compliance through autonomous remediation.
Self-Healing Workflow:
Initial Compliance Audit
Gap Categorization
For each unmet acceptance criterion, classify:
Autonomous Remediation (max 3 iterations)
For AUTO_FIX_* gaps:
Log each remediation iteration:
session_phase(
sessionId: SESSION_ID,
phase: "verify-remediation",
status: "started",
metadata: {
iteration: N,
gaps_to_fix: GAP_COUNT,
gap_types: ["AUTO_FIX_TEST", ...]
}
)
Quality Gate Decision
| Compliance | Behavior |
|---|---|
| >= 95% | ✅ PASS → Document any gaps fixed in PR, proceed to PUBLISH |
| 90-94% | ⚠️ PASS WITH WARNINGS → Document minor gaps in PR, proceed to PUBLISH |
| < 90% | ❌ FAIL → Escalate to user with detailed gap analysis |
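The gate decision, including the hard blocks on failing tests and a dirty tree noted later in this phase, might be sketched as follows. The input values are illustrative:

```shell
# Illustrative inputs gathered during VERIFY
COMPLIANCE=92
TEST_EXIT_CODE=0
GIT_CLEAN=true

if [ "$TEST_EXIT_CODE" -ne 0 ] || [ "$GIT_CLEAN" != "true" ]; then
  GATE="BLOCKED"              # failing tests or uncommitted changes: cannot proceed
elif [ "$COMPLIANCE" -ge 95 ]; then
  GATE="PASS"
elif [ "$COMPLIANCE" -ge 90 ]; then
  GATE="PASS_WITH_WARNINGS"   # proceed, but document minor gaps in the PR
else
  GATE="FAIL"                 # escalate to user with gap analysis
fi
echo "GATE=${GATE}"
```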
Escalation Format (if < 90%)
VERIFY Phase: Compliance Below Threshold
Current compliance: X%
Quality gate: 90% required
Self-healing attempted:
- Created and executed Y supplemental tasks
- Fixed Z auto-fixable gaps
- Remaining gaps: W
Gap Analysis:
- FUNDAMENTAL_GAP: [list core requirements not implemented]
- SEARCH_RETRY failures: [list evidence not found after retries]
Options:
1. Review and approve current implementation (partial delivery)
2. Let me create additional tasks for remaining gaps
3. Abort session and revisit planning
Your decision?
Document self-healing in PR (if any remediation occurred)
Append to PR body:
### Compliance Self-Healing
During verification, the following gaps were auto-fixed:
- Created ISS-XXXXX: Added missing test for feature X
- Created ISS-XXXXX: Added documentation for API Y
- Created ISS-XXXXX: Added edge case handling for Z
Final compliance: 96%
Critical Rules:
Also block if:
TEST_EXIT_CODE != 0 - tests failing (cannot proceed)
GIT_CLEAN == false - uncommitted changes (cannot proceed)

Finalize PR and mark ready for review.
Update PR with final summary and mark ready for merge.
Log phase start
session_phase(sessionId: SESSION_ID, phase: "publish", status: "started")
Generate final PR description
Create comprehensive PR body from audit data:
## Summary
For detailed information, see:
references/detailed-guide.md - Complete workflow details, examples, and troubleshooting
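The PUBLISH steps might be sketched as below. `gh pr edit --body-file` and `gh pr ready` are real gh subcommands, but the body template and variable values here are illustrative, not the output of the repo's generate-pr-description.sh script:

```shell
# Illustrative values
PR_NUMBER=123
COMPLIANCE=96

# Build the final PR body in a temp file
body_file=$(mktemp)
cat > "$body_file" <<EOF
## Summary
Implements the specification; see .wrangler/issues/ for per-task details.

Final compliance: ${COMPLIANCE}%
EOF
cat "$body_file"

# Then, in the worktree (not executed in this sketch):
#   gh pr edit "$PR_NUMBER" --body-file "$body_file"
#   gh pr ready "$PR_NUMBER"
```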