An auto-loop execution workflow with quality gates for Claude Code.
Give Claude a task. Autoworker decomposes it, implements code, runs tests, and iterates through quality gates — autonomously looping until the job is done right.
The problem: Claude often claims "done" before truly verifying. It skips tests, forgets edge cases, and moves on. Autoworker fixes this by enforcing a state machine that won't let Claude say "done" until it actually passes quality checks.
Without Autoworker, Claude Code loses its progress on `/clear` with no recovery. With Autoworker, `/clear` won't erase progress.

```
# Add the marketplace
/plugin marketplace add phj128/autoworker

# Install the plugin
/plugin install autoworker@autoworker
```
```
You: I need to add authentication to my Express app
Claude: [enters Plan Mode]
  → /autoworker triggers autoworker:deep-plan
  → 5-phase structured discussion (motivation, assumptions, design, acceptance criteria, plan)
  → Produces a plan file

You: /clear
Claude: [sees plan, enters execution]
  → autoworker:subtask-init → autoworker:subtask-plan → autoworker:dispatch
  → Autonomous loop until gate-check PASS

You: /autoworker
     Add a retry mechanism to the API client with exponential backoff
Claude: → Creates subtask → Builds verification plan → Implements → Tests → Gates → Done
```
Autoworker runs as a state machine. Each skill reads the current state, does its job, and hands off to the next. No step can be skipped.
```
subtask-init ──→ subtask-plan ──→ dispatch ──→ code ──→ checkpoint ──┐
      ↑                                                              │
      │              ┌───────────────────────────────────────────────┘
      │              ↓
      ├────────── dispatch ──→ test ──→ checkpoint
      │                                     ↓
      │                                gate-check
      │                          ↓          ↓
      │                        FAIL       PASS ──→ done ✅
      │                          │
      └───── subtask-update ←────┘
```
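The loop can be sketched as a plain state machine. This is an illustrative model, not the plugin's actual code: `run_test` and the field names (`criteria`, `results`, `coded`) are hypothetical stand-ins for the subtask document's checkboxes.

```python
# Sketch of the Autoworker loop (hypothetical, mirrors the diagram above).
# Each "skill" reads the subtask state and hands off to the next state;
# only `dispatch` routes, so no step can be skipped.
def autoworker_loop(subtask, max_steps=50):
    state = "subtask-init"
    for _ in range(max_steps):
        if state == "subtask-init":
            # create the subtask document: goals + acceptance criteria
            subtask.setdefault("criteria", ["builds", "e2e passes"])
            state = "subtask-plan"
        elif state == "subtask-plan":
            # verification design: one result slot per acceptance criterion
            subtask["results"] = {c: None for c in subtask["criteria"]}
            state = "dispatch"
        elif state == "dispatch":
            # the only routing point: implement first, then verify
            state = "code" if not subtask.get("coded") else "test"
        elif state == "code":
            subtask["coded"] = True               # implement one phase
            state = "checkpoint"
        elif state == "test":
            for c in subtask["results"]:          # record actual outcomes
                subtask["results"][c] = subtask["run_test"](c)
            state = "checkpoint"
        elif state == "checkpoint":
            # route to the gate only once every result is recorded
            recorded = all(r is not None for r in subtask["results"].values())
            state = "gate-check" if recorded else "dispatch"
        elif state == "gate-check":
            if all(subtask["results"].values()):
                return "done"
            state = "subtask-update"
        elif state == "subtask-update":
            # remediation: reopen the phase and clear stale results
            subtask["coded"] = False
            subtask["results"] = {c: None for c in subtask["criteria"]}
            state = "dispatch"
    return "gave-up"
```

Feeding it a test stub that fails on the first round and passes after rework exercises the FAIL → subtask-update → dispatch path from the diagram.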
| Skill | Role | What happens |
|---|---|---|
| deep-plan | Planning | 5-phase structured discussion: motivation → assumptions → design → acceptance criteria → plan output |
| subtask-init | Setup | Creates subtask document with goals, assumptions (verified by running commands), and acceptance criteria |
| subtask-plan | Verification design | Builds L1-L4 test plan, traces each acceptance criterion to a test, checks coverage |
| dispatch | Router | Reads subtask checkboxes, routes to the right next step. The only routing point — prevents skipping |
| code | Implementation | Implements one phase of code, following the subtask plan step by step |
| test | Verification | Executes one test layer (L1/L2/L3/L4), records actual output vs expected |
| checkpoint | Record keeping | Checks off completed phases, writes test results to subtask |
| gate-check | Quality gate | Traces every acceptance criterion to test results, assesses confidence, triggers re-work if < 95% |
| subtask-update | Fix & retry | When gate-check fails, adds remediation steps and feeds back into the loop |
| sync-docs | Documentation | Syncs progress, findings, and archives completed work |
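The gate-check row above can be made concrete with a small sketch: trace each acceptance criterion to a recorded test result and compute a confidence score against the 95% threshold. The `actual`/`expected` schema here is an assumption for illustration, not the plugin's real format.

```python
# Hypothetical sketch of the gate-check step; not the plugin's actual code.
def gate_check(criteria, results, threshold=0.95):
    # a criterion passes only if it traces to a result whose actual
    # output matches the expected output
    passed = [
        c for c in criteria
        if c in results and results[c]["actual"] == results[c]["expected"]
    ]
    confidence = len(passed) / len(criteria) if criteria else 0.0
    verdict = "PASS" if confidence >= threshold else "FAIL"
    return verdict, confidence
```

A FAIL verdict is what hands control to subtask-update for remediation rather than letting the loop declare "done".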
| Layer | Verifies | Example | Required? |
|---|---|---|---|
| L1 Build | Code compiles/parses | pnpm build, bash -n *.sh | Yes |
| L2 Unit | Individual function logic | Specific function call + expected output | Optional (with justification) |
| L3 Chain | Multi-module data flow | Feed real upstream output to downstream | Optional (with justification) |
| L4 End-to-End | Complete user path | Simulate actual user operations | Always required |
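The table reads as a coverage contract: every acceptance criterion must trace to at least one layer, L1 and L4 are mandatory, and L2/L3 may only be skipped with a justification. A minimal sketch of that check, with hypothetical field names and plan contents:

```python
# Sketch of the coverage rules implied by the layer table; the `covers`
# and `skips` fields are illustrative, not the plugin's actual schema.
def check_plan(plan, criteria, skips=None):
    skips = skips or {}
    problems = []
    for layer in ("L1", "L4"):                    # always required
        if layer not in plan:
            problems.append(f"missing required layer {layer}")
    for layer in ("L2", "L3"):                    # optional, with justification
        if layer not in plan and layer not in skips:
            problems.append(f"{layer} skipped without justification")
    # every acceptance criterion must trace to at least one test layer
    covered = {c for spec in plan.values() for c in spec["covers"]}
    problems += [f"uncovered criterion: {c}" for c in criteria if c not in covered]
    return problems
```

An empty result list is the precondition for gate-check even being meaningful: a criterion no layer covers can never be traced to a test result.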
The gate-check doesn't just ask "does it work?" — it: