Reviews prompt/instruction files against Anthropic prompt engineering best practices for clarity, structure, role prompting, and more. Use when evaluating skill files, agent definitions, or instruction chunks.
Install with:
npx claudepluginhub unsupervisedcom/deepwork --plugin learning-agents
Review a prompt or instruction file against Anthropic's prompt engineering best practices and provide structured, actionable feedback.
$ARGUMENTS is the path to the file to review. If not provided, ask the user which file to review.
Fetch the latest Anthropic prompt engineering guidance to ground your review in current recommendations:
WebFetch https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
Use the fetched content as your primary reference for evaluation criteria. If the fetch fails, proceed using your built-in knowledge of Anthropic prompt engineering best practices.
Read the file at the path specified in $ARGUMENTS. If the path is relative, resolve it from the current working directory.
If the file does not exist, inform the user and stop.
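For instance, this pre-flight check is equivalent to the following shell sketch (assuming a POSIX shell and that the path has been passed as the first argument; neither detail is part of this skill's contract):

```sh
# Hypothetical illustration: take the argument and stop early if the file is missing.
FILE="$1"                        # the path supplied via $ARGUMENTS
if [ ! -f "$FILE" ]; then
  echo "File not found: $FILE" >&2
  exit 1
fi
# Relative paths resolve from the current working directory, as above.
```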
Before evaluating, identify how this prompt will be used: as a standalone system prompt, a reusable instruction chunk, or a template. (Examples include slash-command files with !command dynamic context injection, Claude Code skill files, and agent core-knowledge files.) This context affects how you evaluate the prompt. Instruction chunks, for example, must work well when composed with other instructions and should avoid conflicting with likely surrounding context.
Evaluate the file against each of the following criteria. For each criterion, assess whether the prompt follows it well, partially, or poorly.
Clarity and Specificity
Structure and Formatting
Does the prompt use XML tags to delineate its sections where that would help (e.g., <instructions>, <example>, <context>)? A sketch follows this list.
Role and Identity Prompting
Examples and Demonstrations
Handling Ambiguity and Edge Cases
Output Format Specification
Composability (for instruction chunks)
Conciseness and Signal-to-Noise Ratio
Variable and Placeholder Usage
Task Decomposition
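As an illustration of the Structure and Formatting criterion, a prompt that follows Anthropic's XML-tag guidance might separate its sections like this (the tag names match the ones listed above; the content is invented purely for illustration):

```xml
<instructions>
Summarize the attached report in three bullet points, one sentence each.
</instructions>

<context>
{report text}
</context>

<example>
- Revenue grew 12% year over year, driven by the subscriptions segment.
</example>
```

A prompt structured this way is easier to review against the criteria above, because the task, its grounding context, and the demonstration are each delimited rather than interleaved.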
Output the review in the following format. Be direct and specific. Every recommendation must point to a concrete line or section in the file and explain exactly what to change.
<output_format>
{filename}
Prompt type: {standalone system prompt | instruction chunk | template}
Overall grade: {A | B | C | D | F}
One-sentence summary of the prompt's overall quality.
For each issue, add one row to the prioritized recommendations table:
| Priority | Recommendation | Effort |
|---|---|---|
| {1, 2, ...} | {Brief description} | {Small / Medium / Large} |
</output_format>
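A filled-in example of this format, with a hypothetical file and invented findings purely for illustration, might read:

```
skills/summarize/SKILL.md
Prompt type: instruction chunk
Overall grade: B

Clear task definition, but the output format and edge-case handling are underspecified.

| Priority | Recommendation | Effort |
|---|---|---|
| 1 | Specify the expected output format in its own section | Small |
| 2 | Add one worked example of a good summary | Medium |
```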