From hoyeon
Drives iterative task completion: proposes verifiable Definition of Done criteria, confirms with user, then autonomously loops via stop-hook re-injection until all items verified. For multi-step coding tasks.
`npx claudepluginhub team-attention/hoyeon --plugin hoyeon`

This skill is limited to using the following tools:
Iterative task completion loop driven by a user-confirmed Definition of Done. Combines the Ralph Wiggum technique (prompt re-injection on stop) with DoD-based independent verification.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Iterative task completion loop driven by a user-confirmed Definition of Done. Combines the Ralph Wiggum technique (prompt re-injection on stop) with DoD-based independent verification.
How it works:
Build a Definition of Done through interactive confirmation before starting work.
Read the user's request carefully. Based on the task, propose 3–7 concrete, verifiable DoD criteria.
Good criteria (binary, independently verifiable):
- "`npm test` exits 0"
- "`parseConfig()` exists in `src/config.ts`"
- "`tsc --noEmit` exits 0"
- "`eslint .` exits 0"

Bad criteria (vague, subjective):
- "Code is clean"
- "Works correctly"

Present as a numbered markdown checklist:
Based on your request, here's my proposed Definition of Done:
1. [concrete criterion 1]
2. [concrete criterion 2]
3. [concrete criterion 3]
...
Each item will be independently verified before the task is considered complete.
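Because each criterion is binary, verifying one reduces to running a command and checking its exit status. A minimal illustrative sketch — the `check` helper below is hypothetical, not part of the plugin, and `true`/`false` stand in for real commands like `npm test` or `eslint .`:

```shell
# Illustrative helper (not part of the plugin): run a criterion's command
# and report PASS or FAIL based on its exit status.
check() {
  desc="$1"
  shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "test suite exits 0" true    # stand-in for e.g.: npm test
check "lint exits 0" false         # stand-in for e.g.: eslint .
```

Anything that cannot be phrased as such a command (or a file/content check) is a sign the criterion is too vague to verify.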
Use AskUserQuestion to confirm:
"Here are the proposed DoD criteria. You can:
- Accept as-is
- Add criteria (tell me what to add)
- Remove criteria (tell me which to remove)
- Modify criteria (tell me what to change)
Also, set max iterations (default: 10) if you want to limit the loop."
Loop until the user accepts. Parse their response for criteria to add, remove, or modify, and for a custom max-iterations value.
After user confirms, initialize the loop state and write the DoD file.
Write DoD file — create the checklist as a markdown file:
Bash: SESSION_ID="[session ID from hook]" && mkdir -p "$HOME/.hoyeon/$SESSION_ID/files" && cat > "$HOME/.hoyeon/$SESSION_ID/files/ralph-dod.md" << 'DODEOF'
# Definition of Done
- [ ] [criterion 1]
- [ ] [criterion 2]
- [ ] [criterion 3]
...
DODEOF
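For a concrete task, the filled-in command might look like the following — the session ID and criteria here are illustrative (the real values come from the hook and the user-confirmed DoD):

```shell
# Illustrative filled-in DoD write for a hypothetical session.
SESSION_ID="demo-session"
DOD_FILE="$HOME/.hoyeon/$SESSION_ID/files/ralph-dod.md"
mkdir -p "$(dirname "$DOD_FILE")"
cat > "$DOD_FILE" << 'DODEOF'
# Definition of Done
- [ ] npm test exits 0
- [ ] parseConfig() exists in src/config.ts
- [ ] tsc --noEmit exits 0
DODEOF
grep -c '^- \[ \]' "$DOD_FILE"   # prints 3: all items start unchecked
```

Every item starts as `- [ ]`; only verified items are later flipped to `- [x]`.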
Write state — store the original prompt and configuration for the Stop hook:
Bash: SESSION_ID="[session ID from hook]" && PROMPT=$(cat << 'PROMPTEOF'
[The user's ORIGINAL request/prompt — exactly as they typed it, before any processing]
PROMPTEOF
) && hoyeon-cli session set --sid "$SESSION_ID" --json "$(jq -n \
--arg prompt "$PROMPT" \
--arg dod_file "$HOME/.hoyeon/$SESSION_ID/files/ralph-dod.md" \
--arg created_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
'{ralph: {prompt: $prompt, iteration: 0, max_iterations: 10, dod_file: $dod_file, created_at: $created_at}}')"
Replace max_iterations: 10 with the user's chosen value if they specified one.
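After this step, the state record read back by the Stop hook has the shape produced by the `jq` program above. The field values below are illustrative; the exact storage format depends on `hoyeon-cli`:

```json
{
  "ralph": {
    "prompt": "Add config parsing with tests",
    "iteration": 0,
    "max_iterations": 10,
    "dod_file": "/home/user/.hoyeon/demo-session/files/ralph-dod.md",
    "created_at": "2025-01-01T00:00:00Z"
  }
}
```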
Display confirmation:
## Ralph Loop Initialized
**Task**: [summary of what you'll do]
**DoD**: [N] criteria
**Max iterations**: [max_iterations]
Starting work. The loop will verify each DoD item independently before allowing completion.
Now do the actual work. Focus on completing the task to satisfy all DoD criteria.
Rules during work:
- Do NOT edit the DoD file (`ralph-dod.md`) — it's guarded by the system.

When you believe the work is complete, simply finish your response normally. The Stop hook will:
- Check the DoD file for unchecked items.
- If any remain, block the stop with a reason and list the remaining items in its systemMessage.

On re-entry (after the Stop hook blocks):
- The systemMessage will instruct you to spawn a ralph-verifier agent.

Verification via separate agent (context isolation):
- Spawn the ralph-verifier agent with `subagent_type="ralph-verifier"` in FOREGROUND (do NOT use `run_in_background=true`).
- PASS items → change `- [ ]` to `- [x]` in the DoD file.
- FAIL items → fix the underlying issue in this iteration.

Why a separate agent? The agent that wrote the code should NOT verify its own work. The verifier agent starts clean, reads the actual files and tests, and judges objectively.