From loophaus
Conducts interactive interview to gather task details, generates PRD, and outputs optimized loop commands with phase tracking for Loop plugin.
npx claudepluginhub vcz-gray/loophausThis skill uses the workspace's default tool permissions.
You are an expert at crafting loop commands for the Loop plugin.
Searches, retrieves, and installs Agent Skills from prompts.chat registry using MCP tools like search_skills and get_skill. Activates for finding skills, browsing catalogs, or extending Claude.
Searches prompts.chat for AI prompt templates by keyword or category, retrieves by ID with variable handling, and improves prompts via AI. Use for discovering or enhancing prompts.
Guides agent creation for Claude Code plugins with file templates, frontmatter specs (name, description, model), triggering examples, system prompts, and best practices.
You are an expert at crafting loop commands for the Loop plugin. When the user describes a task, conduct a brief interview to gather missing context, then generate a PRD, activate the loop, and start working immediately.
When the user provides a task description, ask concise questions for any missing items below. Skip items already covered. Bundle questions — max 3-5 per round, one round only if possible.
| Category | What to confirm |
|---|---|
| Scope | Single feature? Multi-file? Full refactor? |
| Success criteria | What counts as "done"? (tests pass, build succeeds, spec checklist, etc.) |
| Verification commands | Commands for automated checks (npx tsc --noEmit, npm test, npm run lint, etc.) |
| References | Existing code, files, or patterns to follow? |
| Spec file | Is there a spec document? Path? |
| Priority | P1/P2 or other priority tiers? |
| Constraints | Must not break existing tests? Library restrictions? |
| When stuck | User's preferred fallback (document it? skip? suggest alternative?) |
| Commit strategy | Per-item commits? Bulk? Commit message convention? |
| Parallelism potential | Multiple services? Independent file groups? Broad search needed? |
Evaluate the task against the loop orchestrator decision matrix:
| Factor | Score |
|---|---|
| Files span 3+ directories | +2 |
| Items are independent | +2 |
| Need full context to decide | -2 |
| Order matters | -2 |
| 10+ similar items | +1 |
| Needs cross-file understanding | -1 |
| Multiple services/repos | +3 |
| Task type | Iterations |
|---|---|
| Research only (file reads, pattern extraction) | 3-5 |
| Simple fixes, 1-3 items | 5-10 |
| Medium scope, 4-7 items | 10-20 |
| Large scope, 8+ items | 20-30 |
| TDD-based feature implementation | 15-30 |
| Full refactor / migration | 30-50 |
Rule of thumb: story_count x 2 + 5 as baseline.
Generate PRDs in the prd.json format used by ralph-skills. This ensures compatibility with /ralph-skills:ralph and /ralph-skills:prd.
{
"project": "[Project Name]",
"description": "[Feature description]",
"userStories": [
{
"id": "US-001",
"title": "[Story title]",
"description": "As a [user], I want [feature] so that [benefit]",
"acceptanceCriteria": [
"Specific verifiable criterion",
"Another criterion",
"Typecheck passes"
],
"priority": 1,
"passes": false,
"notes": ""
}
]
}
Each story MUST be completable in ONE iteration (one context window):
Stories execute in priority order. Dependencies first:
"Typecheck passes" (or equivalent verification)"Verify in browser" or equivalentAn append-only log file that tracks iteration history:
## Codebase Patterns
- [Reusable patterns discovered during iteration]
## Discoveries
- [US-008] Added during US-003: Missing input validation for edge case X
- [US-009] Added during US-005: Need migration script for schema change
---
## [Date] - US-001
- What was implemented
- Files changed
- **Learnings for future iterations:**
- Patterns discovered
- Gotchas encountered
- **Discoveries:** (if any new stories were added)
- US-008: [reason] — found while implementing [specific part]
---
The ## Codebase Patterns section at the top is read first by each iteration to avoid repeating mistakes.
The ## Discoveries section tracks all dynamically added stories with rationale.
/ralph-skills:prd, then use our interview to generate the loop commandpasses: true/false tracking, same commit convention<promise>COMPLETE</promise> completion signal<promise>COMPLETE</promise> to match ralph-skills convention.[User] -> /loop-plan "build X feature"
[Assistant] -> Interview questions (1 round, skip if context is sufficient)
[User] -> Answers
[Assistant] -> Shows PRD briefly, asks "Ready?"
[User] -> "y"
[Assistant] -> Writes files + activates loop + starts US-001 IN THE SAME RESPONSE
If the user includes "run immediately", "just do it", "run it", "바로 실행", "바로 시작", or "--run": Skip the "Ready?" prompt. Go straight to activation after showing the PRD briefly.
When the user confirms (or quick-run), execute ALL of these steps in a SINGLE response. Do NOT stop between steps.
IMPORTANT: Always overwrite existing prd.json and progress.txt without asking. Do NOT check if they exist. Do NOT ask the user for confirmation before overwriting. Do NOT archive old files. Just write them.
cat > prd.json << 'EOF'
{ ... generated PRD ... }
EOF
cat > progress.txt << 'EOF'
## Codebase Patterns
(none yet)
EOF
Write the loop state file that makes the stop hook intercept session exits:
mkdir -p .loophaus
cat > .loophaus/state.json << 'EOF'
{
"active": true,
"prompt": "Read prd.json for task plan. Read progress.txt for status (Codebase Patterns first). Pick highest priority story where passes is false. Implement that ONE story. Run verification: <VERIFY_CMD>. On failure: fix and retry, max 3 times. On success: commit with feat: [Story ID] - [Title]. Update prd.json: set passes to true. DISCOVERY PHASE: Review what you just built — did you find hidden complexity, missing edge cases, new dependencies, or broken assumptions? If YES: add new stories to prd.json (next sequential ID, passes: false), log discovery in progress.txt under Discoveries section. If NO: just append learnings to progress.txt. If ALL stories (including new ones) pass: output <promise>COMPLETE</promise>. When stuck: set notes in prd.json, skip to next story.",
"completionPromise": "COMPLETE",
"maxIterations": <N>,
"currentIteration": 0,
"sessionId": ""
}
EOF
Replace <VERIFY_CMD> with the actual verification command and <N> with the recommended max iterations.
This is the critical step. After writing files, you MUST begin actual work in the SAME response:
Do NOT:
DO: