Assess quality of PRPs and work-orders using systematic confidence scoring. Use when evaluating readiness for execution or subagent delegation.
Install:

```
/plugin marketplace add laurigates/claude-plugins
/plugin install blueprint-plugin@lgates-claude-plugins
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill provides systematic evaluation of PRPs (Product Requirement Prompts) and work-orders to determine their readiness for execution or delegation.
Activate this skill when:
- Evaluating a PRP created by `/prp:create`
- Evaluating a work-order created by `/blueprint:work-order`

### Context Completeness

Evaluates whether all necessary context is explicitly provided.
| Score | Criteria |
|---|---|
| 10 | All file paths explicit with line numbers, all code snippets included, library versions specified, integration points documented |
| 8-9 | Most context provided, minor gaps that can be inferred from codebase |
| 6-7 | Key context present but some discovery required |
| 4-5 | Significant context missing, will need exploration |
| 1-3 | Minimal context, extensive discovery needed |
Checklist:
- [ ] File paths are explicit with line numbers (e.g., `src/auth.py:45-60`)
- [ ] Code snippets are included for key integration points
- [ ] Library versions are specified

### Implementation Clarity

Evaluates how clear the implementation approach is.
| Score | Criteria |
|---|---|
| 10 | Pseudocode covers all cases, step-by-step clear, edge cases addressed |
| 8-9 | Main path clear, most edge cases covered |
| 6-7 | Implementation approach clear, some details need discovery |
| 4-5 | High-level only, significant ambiguity |
| 1-3 | Vague requirements, unclear approach |
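As an illustration of what "pseudocode covers all cases" means, a high-scoring artifact spells the implementation out step by step with edge cases called out inline. The token-refresh logic below is purely hypothetical, a sketch rather than a prescribed design:

```python
def issue_new_token(old: str) -> str:
    # Stub standing in for the real issuing logic.
    return old + ".refreshed"

def refresh_token(token: str, now: float, expiry: float, grace: float = 30.0) -> str:
    """Hypothetical step-by-step spec: each branch maps to a documented case."""
    # Case 1: token already expired -> caller must re-authenticate
    if now >= expiry:
        raise ValueError("token expired; re-authenticate")
    # Case 2: token still well within its lifetime -> no-op
    if expiry - now > grace:
        return token
    # Case 3: inside the grace window -> issue a replacement
    return issue_new_token(token)
```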
Checklist:
- [ ] Pseudocode covers the main path and edge cases
- [ ] Steps are ordered and unambiguous

### Gotchas Documented

Evaluates whether known pitfalls are documented with mitigations.
| Score | Criteria |
|---|---|
| 10 | All known pitfalls documented, each has mitigation, library-specific issues covered |
| 8-9 | Major gotchas covered, mitigations clear |
| 6-7 | Some gotchas documented, may discover more |
| 4-5 | Few gotchas mentioned, incomplete coverage |
| 1-3 | No gotchas documented |
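To make the rubric concrete, a well-documented gotcha pairs the pitfall with its mitigation. The sketch below assumes the `redis` Python client and is illustrative, not taken from any particular PRP:

```python
import redis

# GOTCHA: constructing a new Redis client per request can exhaust
# connections under load.
# MITIGATION: create one connection pool at module scope and share it.
POOL = redis.ConnectionPool(host="localhost", port=6379, db=0)

def get_redis() -> redis.Redis:
    """Return a client backed by the shared pool."""
    return redis.Redis(connection_pool=POOL)
```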
Checklist:
- [ ] Known pitfalls are listed
- [ ] Each gotcha has a mitigation
- [ ] Library-specific issues are covered

### Validation Coverage

Evaluates whether executable validation commands are provided.
| Score | Criteria |
|---|---|
| 10 | All quality gates have executable commands, expected outcomes specified |
| 8-9 | Main validation commands present, most outcomes specified |
| 6-7 | Some validation commands, gaps in coverage |
| 4-5 | Minimal validation commands |
| 1-3 | No executable validation |
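A 10-scoring validation section pairs each executable command with its expected outcome. A minimal sketch of that pairing; the commands (`ruff`, `mypy`, `pytest`) are illustrative stand-ins for whatever gates the project actually uses:

```python
import subprocess

# Each gate: (command, expected outcome). Commands here are assumptions.
GATES = [
    (["ruff", "check", "."], "exit 0, no lint errors"),
    (["mypy", "src/"], "exit 0, no type errors"),
    (["pytest", "-q"], "exit 0, all tests pass"),
]

for cmd, expected in GATES:
    result = subprocess.run(cmd, capture_output=True, text=True)
    status = "PASS" if result.returncode == 0 else "FAIL"
    print(f"{status}: {' '.join(cmd)} (expected: {expected})")
```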
Checklist:
- [ ] Every quality gate has an executable command
- [ ] Expected outcomes are specified

### Test Coverage

Evaluates whether test cases are specified.
| Score | Criteria |
|---|---|
| 10 | All test cases specified with assertions, edge cases covered |
| 8-9 | Main test cases specified, most assertions included |
| 6-7 | Key test cases present, some gaps |
| 4-5 | Few test cases, minimal detail |
| 1-3 | No test cases specified |
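For contrast with a bare "add tests" instruction, this is what "test cases specified with assertions, edge cases covered" looks like; the function under test is a toy stand-in, not part of the skill:

```python
def clamp_score(value: float) -> float:
    """Toy function under test: clamp a raw score into the 1-10 rubric range."""
    return max(1.0, min(10.0, value))

def test_in_range_value_passes_through():
    assert clamp_score(7.5) == 7.5

def test_values_above_range_clamp_to_ten():
    assert clamp_score(42) == 10.0

def test_negative_values_clamp_to_floor():  # edge case called out explicitly
    assert clamp_score(-3) == 1.0
```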
Checklist:
- [ ] Test cases are specified with assertions
- [ ] Edge cases are covered

### Overall Score

For PRPs:

Overall = (Context + Implementation + Gotchas + Validation) / 4

For work-orders:

Overall = (Context + Gotchas + TestCoverage + Validation) / 4
| Score | Readiness | Recommendation |
|---|---|---|
| 9-10 | Excellent | Ready for autonomous subagent execution |
| 7-8 | Good | Ready for execution with some discovery |
| 5-6 | Fair | Needs refinement before execution |
| 3-4 | Poor | Significant gaps, recommend research phase |
| 1-2 | Inadequate | Restart with proper research |
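A minimal sketch of the arithmetic and banding, assuming the four dimension scores are already assigned. How fractional scores map onto the bands is an assumption here, since the table lists whole-number ranges:

```python
def overall_score(context: float, second: float, gotchas: float, validation: float) -> float:
    """Average the four dimensions. For a PRP, `second` is Implementation
    Clarity; for a work-order, it is Test Coverage."""
    return (context + second + gotchas + validation) / 4

def readiness(score: float) -> str:
    """Map an overall score to a readiness band (boundaries assumed)."""
    if score >= 9:
        return "Excellent: ready for autonomous subagent execution"
    if score >= 7:
        return "Good: ready for execution with some discovery"
    if score >= 5:
        return "Fair: needs refinement before execution"
    if score >= 3:
        return "Poor: significant gaps, recommend research phase"
    return "Inadequate: restart with proper research"

print(overall_score(9, 8, 8, 9))  # 8.5, as in the worked example below
```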
Report results using this template when the artifact is ready:

## Confidence Score: X.X/10
| Dimension | Score | Notes |
|-----------|-------|-------|
| Context Completeness | X/10 | [specific observation] |
| Implementation Clarity | X/10 | [specific observation] |
| Gotchas Documented | X/10 | [specific observation] |
| Validation Coverage | X/10 | [specific observation] |
| **Overall** | **X.X/10** | |
**Assessment:** Ready for execution
**Strengths:**
- [Key strength 1]
- [Key strength 2]
**Recommendations (optional):**
- [Minor improvement 1]
When the artifact needs work, use this variant:

## Confidence Score: X.X/10
| Dimension | Score | Notes |
|-----------|-------|-------|
| Context Completeness | X/10 | [specific gap] |
| Implementation Clarity | X/10 | [specific gap] |
| Gotchas Documented | X/10 | [specific gap] |
| Validation Coverage | X/10 | [specific gap] |
| **Overall** | **X.X/10** | |
**Assessment:** Needs refinement before execution
**Gaps to Address:**
- [ ] [Gap 1 with suggested action]
- [ ] [Gap 2 with suggested action]
- [ ] [Gap 3 with suggested action]
**Next Steps:**
1. [Specific research action]
2. [Specific documentation action]
3. [Specific validation action]
Example assessment of a well-prepared PRP:

## Confidence Score: 8.5/10
| Dimension | Score | Notes |
|-----------|-------|-------|
| Context Completeness | 9/10 | All files explicit, code snippets with line refs |
| Implementation Clarity | 8/10 | Pseudocode covers main path, one edge case unclear |
| Gotchas Documented | 8/10 | Redis connection pool, JWT format issues covered |
| Validation Coverage | 9/10 | All gates have commands, outcomes specified |
| **Overall** | **8.5/10** | |
**Assessment:** Ready for execution
**Strengths:**
- Comprehensive codebase intelligence with actual code snippets
- Validation gates are copy-pasteable
- Known library gotchas well-documented
**Recommendations:**
- Consider documenting concurrent token refresh edge case
Example assessment of an underspecified PRP:

## Confidence Score: 5.0/10
| Dimension | Score | Notes |
|-----------|-------|-------|
| Context Completeness | 4/10 | File paths vague ("somewhere in auth/") |
| Implementation Clarity | 6/10 | High-level approach clear, no pseudocode |
| Gotchas Documented | 3/10 | No library-specific gotchas |
| Validation Coverage | 7/10 | Test command present, missing lint/type check |
| **Overall** | **5.0/10** | |
**Assessment:** Needs refinement before execution
**Gaps to Address:**
- [ ] Add explicit file paths (use `grep` to find them)
- [ ] Add pseudocode for token generation logic
- [ ] Research jsonwebtoken gotchas (check GitHub issues)
- [ ] Add linting and type checking commands
**Next Steps:**
1. Run `/prp:curate-docs jsonwebtoken` to create ai_docs entry
2. Use Explore agent to find exact file locations
3. Add validation gate commands from project's package.json
This skill is automatically applied when:
- `/prp:create` generates a new PRP
- `/blueprint:work-order` generates a work-order

The confidence score determines whether the artifact is ready for execution or needs refinement first. When context scores are low, use `grep` to find exact file locations before re-scoring.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.