This skill enriches vague prompts with targeted research and clarification before execution. It is invoked when a prompt is determined to be vague and requires systematic research, question generation, and execution guidance.
Transforms vague prompts into actionable requests through systematic codebase research and targeted clarification questions. Automatically invoked when prompts lack specificity, context, or clear execution targets.
/plugin marketplace add rafaelcalleja/claude-market-place
/plugin install prompt-improver@claude-market-place

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- references/examples.md
- references/question-patterns.md
- references/research-strategies.md

Transform vague, ambiguous prompts into actionable, well-defined requests through systematic research and targeted clarification. This skill is invoked when the hook has already determined a prompt needs enrichment.
Automatic invocation:
Manual invocation:
Assumptions:
This skill follows a 4-phase approach to prompt enrichment:
Create a dynamic research plan using TodoWrite before asking questions.
Research Plan Template:
Critical Rules:
For detailed research strategies, patterns, and examples, see references/research-strategies.md.
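The research-plan phase above can be sketched in code. The snippet below is a minimal illustration of turning a vague prompt into TodoWrite-style plan items; the field names (`content`, `status`) and the helper itself are assumptions for illustration, not the tool's actual schema.

```python
def build_research_plan(prompt: str) -> list[dict]:
    """Derive research todos from a vague prompt (illustrative sketch).

    The dict shape ("content"/"status") is assumed, not the real
    TodoWrite schema.
    """
    steps = [
        f"Search the codebase for terms related to: {prompt}",
        "Inspect recent changes for likely targets",
        "List candidate files or functions the prompt could refer to",
        "Note ambiguities that require a clarification question",
    ]
    # Every step starts pending; items are marked complete as research runs.
    return [{"content": step, "status": "pending"} for step in steps]

plan = build_research_plan("fix the bug")
```

Keeping the plan as structured items (rather than free text) makes it easy to track which research steps have run before any questions are asked.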
Based on research findings, formulate 1-6 questions that will clarify the ambiguity.
Question Guidelines:
Number of Questions:
For question templates, effective patterns, and examples, see references/question-patterns.md.
Use the AskUserQuestion tool to present your research-grounded questions.
AskUserQuestion Format:
- question: Clear, specific question ending with ?
- header: Short label (max 12 chars) for UI display
- multiSelect: false (unless choices aren't mutually exclusive)
- options: Array of 2-4 specific choices from research
  - label: Concise choice text (1-5 words)
  - description: Context about this option (trade-offs, implications)
Important: Always include multiSelect field (true/false). User can always select "Other" for custom input.
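A payload following the field rules above might look like the sketch below. The exact AskUserQuestion schema may differ; this only mirrors the constraints listed in this document, and the example bug options are hypothetical.

```python
# Hypothetical question payload shaped by the rules above.
question = {
    "question": "Which bug should be fixed?",
    "header": "Bug target",           # short label, max 12 chars
    "multiSelect": False,             # always included explicitly
    "options": [                      # 2-4 research-grounded choices
        {
            "label": "Login failure",
            "description": "Error handling in auth.py:145 raises on bad tokens",
        },
        {
            "label": "Slow search",
            "description": "Search query lacks an index; latency spikes",
        },
    ],
}

def validate_question(q: dict) -> bool:
    """Check a payload against the constraints described in this section."""
    return (
        q["question"].endswith("?")
        and len(q["header"]) <= 12
        and isinstance(q["multiSelect"], bool)
        and 2 <= len(q["options"]) <= 4
        and all(1 <= len(o["label"].split()) <= 5 for o in q["options"])
    )
```

Validating the payload before presenting it catches formatting slips (an over-long header, a missing multiSelect) before they reach the user.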
Proceed with the original user request using:
Execute the request as if it had been clear from the start.
Hook evaluation: Determined prompt is vague
Original prompt: "fix the bug"
Skill invoked: Yes (prompt lacks target and context)
Research plan:
Research findings:
Questions generated:
User answer: Login authentication failure
Execution: Fix the error handling in auth.py:145 that's causing login failures
Original prompt: "Refactor the getUserById function in src/api/users.ts to use async/await instead of promises"
Hook evaluation: Passes all checks
Skill invoked: No (prompt is clear, proceeds immediately without skill invocation)
For comprehensive examples showing various prompt types and transformations, see references/examples.md.
This SKILL.md contains the core workflow and essentials. For deeper guidance:
Load these references only when detailed guidance is needed on specific aspects of prompt improvement.