Generates structured PROMPT.md files for autonomous AI workflows (Ralph Loops) from rough descriptions or PRDs, with phases, validation criteria, completion promises, and blind review.
```
npx claudepluginhub dazuck/operator-skills --plugin operator-skills
```

This skill uses the workspace's default tool permissions.
Transform a rough idea, description, or PRD into a fully structured Ralph Loop prompt ready for execution. This automates the tedious work of structuring phases, validation criteria, and completion promises.
Based on @GeoffreyHuntley's Ralph Wiggum technique and @callam53's workflow guide.
Use AFTER you have clarity on what to build:
```
Rough idea → /interview → /ralph-loop-creator → /ralph-loop
                 ↑               ↑
          (if spec has      (THIS SKILL)
           ambiguities)          │
                                 ├── Generate PROMPT.md
                                 ├── Blind Review (subagent)
                                 ├── Fix issues if confidence < 7
                                 └── Output ready-to-run command
```
When invoked with `/ralph-loop-creator <input> [output-path]`:

1. **Parse input** - accept a rough idea, description, or path to a PRD.
2. **Assess completeness** - if the spec has significant ambiguities, suggest running `/interview` first.
3. **Gather context** - read relevant existing code, patterns, and infrastructure.
4. **Generate PROMPT.md** - write the structured prompt to the output path.
5. **Blind Review** (critical step) - have a fresh subagent pressure-test the prompt.
6. **Output** - the `/ralph-loop` command to run.

Generate prompts following this structure:
````markdown
# [Feature Name] - Ralph Loop Prompt

## Mission

[1-2 sentence description of what we're building]

**Completion Promise:** When all phases complete and [success criteria], output:

```
<promise>[FEATURE_NAME] COMPLETE</promise>
```

---

## Context

### What Already Exists

[Relevant existing infrastructure, patterns, files]

| Component | Location | Purpose        |
| --------- | -------- | -------------- |
| [Name]    | [Path]   | [What it does] |

### Architecture Overview

```
[ASCII diagram if helpful]
```

---

## Scope

### In Scope

- [ ] [Feature/requirement 1]
- [ ] [Feature/requirement 2]

### Out of Scope (Future)

- [Thing explicitly deferred]

---

## Phases

Work through these sequentially. Each phase should result in working, tested code.

### Phase 1: [Foundation/Setup]

**Goal:** [What this phase achieves]

**Tasks:**
1. [Specific task]
2. [Specific task]

**Validation:**
- [ ] [Testable criterion]
- [ ] [Testable criterion]

**Reference:** [Docs/links if relevant]

---

### Phase 2: [Core Feature]

[Same structure]

---

### Phase N: Testing & Documentation

**Goal:** Production-ready with tests and docs.

**Tasks:**
1. Unit tests for [components]
2. Integration tests for [flows]
3. Documentation at [location]

**Validation:**
- [ ] Tests passing
- [ ] Coverage > [X]%
- [ ] Docs complete

---

## Key Files to Create

```
[File tree of expected output]
```

---

## Environment Variables

```bash
[Required env vars with descriptions]
```

---

## Decision Log

| Decision | Choice   | Rationale |
| -------- | -------- | --------- |
| [What]   | [Choice] | [Why]     |

---

## Success Metrics

**Functional:**
- [Measurable criterion]

**Quality:**
- [Quality gate]

---

## Notes for Implementation

- [Important gotcha]
- [Pattern to follow]
- [Thing to avoid]

---

## When Complete

Output the completion promise:

```
<promise>[FEATURE_NAME] COMPLETE</promise>
```
````
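The completion promise gives the loop runner a deterministic stop signal. As a rough sketch of how a runner might detect it (a hypothetical checker, not the actual /ralph-loop implementation), a regex match on the agent's output is enough:

```python
import re

def promise_emitted(output: str, promise: str) -> bool:
    """Return True if the agent output contains the exact completion promise."""
    match = re.search(r"<promise>(.*?)</promise>", output, re.DOTALL)
    return match is not None and match.group(1).strip() == promise
```

Matching the exact promise string (rather than any `<promise>` tag) prevents the loop from stopping early if the agent echoes the tag while discussing the prompt.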
Each phase needs validation checkboxes that are specific, testable, and binary:
Good:
- [ ] `npm test` passes with 0 failures
- [ ] Bot joins test meeting within 10 seconds
- [ ] Response latency < 2s measured in logs
Bad:
- [ ] Code is clean
- [ ] Feature works well
- [ ] Good performance
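What makes the "good" criteria good is that each maps to a command with a checkable result. A hedged sketch (`criterion_passes` is a hypothetical helper, not part of the skill):

```python
import subprocess

def criterion_passes(cmd: list[str]) -> bool:
    """Run a validation command; exit code 0 means the checkbox can be ticked."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0
```

For example, the first good criterion above reduces to `criterion_passes(["npm", "test"])`, while "code is clean" has no such mapping.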
Suggest max iterations based on complexity:
| Phases | Complexity | Suggested Iterations |
|---|---|---|
| 2-3 | Simple | 15-20 |
| 4-5 | Medium | 30-40 |
| 6-8 | Complex | 50-75 |
| 8+ | Large | 100+ (consider splitting) |
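The table above can be expressed as a small helper (a sketch mirroring the table; the boundaries are this document's suggestions, not hard rules, and 8 phases is treated as "complex" here):

```python
def suggest_iterations(phases: int) -> str:
    """Map a phase count to a suggested --max-iterations range, per the table."""
    if phases <= 3:
        return "15-20"
    if phases <= 5:
        return "30-40"
    if phases <= 8:
        return "50-75"
    return "100+ (consider splitting)"
```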
After generating PROMPT.md and running blind review, provide:
## Ralph Loop Ready
Created: `[path/to/PROMPT.md]`
### Blind Review Results
**Confidence:** [X]/10
**Key findings:** [1-2 sentence summary of what reviewer caught]
**Changes made:** [What you fixed based on review, or "None needed"]
### To Run
```
/ralph-loop [path/to/PROMPT.md] --max-iterations [N] --completion-promise "[PROMISE]"
```
**Estimated phases:** [N]
**Suggested iterations:** [N]
**Completion promise:** [PROMISE]
Include these patterns in generated prompts:
- **Commit after each step** - add to phase tasks: "Commit changes with descriptive message"
- **Code review as phase** - include review validation:

  **Validation:**
  - [ ] Code review agent finds no critical issues
- **Kanban mindset** - treat each phase like a ticket with acceptance criteria
- **Brick by brick** - phases should be small enough to complete in 2-5 iterations each
- **Testing loop** - every phase should have runnable validation
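For the commit-after-each-step pattern, a phase task can simply shell out to git. A minimal sketch (hypothetical helper name; a real prompt would just phrase this as a task for the agent):

```python
import subprocess

def commit_phase(message: str) -> None:
    """Stage all changes and commit with the phase's descriptive message."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```

Committing per step gives the loop a recovery point: if a later iteration goes wrong, the work of earlier phases survives.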
**Purpose:** Pressure-test the PROMPT.md before autonomous execution. A fresh perspective catches ambiguities that the author is blind to.
Spawn a subagent using the Task tool with this prompt template:
```
You are a senior technical reviewer specializing in PRDs and implementation plans
for autonomous AI execution. You have NO prior context about this project -
you're coming in completely blind.

Your task: Review the PRD at `[PATH_TO_PROMPT.md]` and provide a critical assessment.

## Review Instructions

1. **Read the PRD thoroughly** - understand what's being built
2. **Read relevant existing code** mentioned in the PRD to understand patterns
3. **Produce a review with these sections:**

### A. What I Expect to Happen
Describe concretely what an AI agent executing this plan would build.
Be specific about deliverables.

### B. Implicit Intent I'm Inferring
What is NOT explicitly stated but seems to be the underlying goal?
What assumptions is this plan making?

### C. Pre-Mortem: What Could Go Wrong
Imagine this ralph loop fails. Most likely causes:
- Ambiguous requirements that could be misinterpreted
- Missing context an AI agent would need
- Technical pitfalls or edge cases not addressed
- Validation criteria too vague or too strict
- Dependencies or blockers not mentioned

### D. Confidence Assessment
On a scale of 1-10, how confident are you that an AI agent could execute
this plan successfully without human intervention? Explain your rating.

### E. Recommended Changes (if any)
Specific, actionable changes to improve the PRD before execution.

Be brutally honest. Find problems BEFORE execution, not after.
```
Map the reviewer's confidence score to an action:

| Confidence | Action |
|---|---|
| 8-10 | Ready to execute. Minor polish optional. |
| 6-7 | Address critical issues, re-review optional. |
| 4-5 | Significant gaps. Must fix and re-review. |
| 1-3 | Fundamental problems. Consider /interview again. |
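Gating on the score can be automated with a tiny dispatcher mirroring the table above (a sketch; the thresholds are the table's, the function name is hypothetical):

```python
def review_action(confidence: int) -> str:
    """Map a blind-review confidence score (1-10) to the recommended action."""
    if confidence >= 8:
        return "Ready to execute. Minor polish optional."
    if confidence >= 6:
        return "Address critical issues, re-review optional."
    if confidence >= 4:
        return "Significant gaps. Must fix and re-review."
    return "Fundamental problems. Consider /interview again."
```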
**Default:** Always run blind review for prompts with 4+ phases or novel implementations.
This skill generates prompts compatible with your /ralph-loop plugin:
```
/ralph-loop-creator "Build a CLI tool that converts markdown to PDF"

→ Creates: ./docs/md-to-pdf/PROMPT.md
→ Runs blind review (confidence: 8/10)
→ Suggests: /ralph-loop "$(cat ./docs/md-to-pdf/PROMPT.md)" --max-iterations 30 --completion-promise "MD_TO_PDF_COMPLETE"
```
For complex features, use the full workflow:
```bash
/interview docs/X/prd.md             # Flesh out requirements
/ralph-loop-creator docs/X/prd.md    # Generate structured prompt + blind review
/ralph-loop ...                      # Execute
```