From enhance
Analyzes prompts in files for clarity, structure, examples, constraints, and reliability; reports issues and applies auto-fixes.
npx claudepluginhub agent-sh/enhance --plugin enhanceThis skill uses the workspace's default tool permissions.
Analyze prompts for clarity, structure, examples, and output reliability.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Analyze prompts for clarity, structure, examples, and output reliability.
const args = '$ARGUMENTS'.split(' ').filter(Boolean);
const targetPath = args.find(a => !a.startsWith('--')) || '.';
const fix = args.includes('--fix');
| Skill | Focus | Use When |
|---|---|---|
enhance-prompts | Prompt quality (clarity, structure, examples) | General prompts, system prompts, templates |
enhance-agent-prompts | Agent config (frontmatter, tools, model) | Agent files with YAML frontmatter |
Run Analyzer - Execute the JavaScript analyzer to get findings:
node -e "const a = require('./lib/enhance/prompt-analyzer.js'); console.log(JSON.stringify(a.analyzeAllPrompts('.'), null, 2));"
For a specific path: a.analyzeAllPrompts('./plugins/enhance')
For a single file: a.analyzePrompt('./path/to/file.md')
Parse Results - The analyzer returns JSON with summary and findings
Filter - Apply certainty filtering based on --verbose flag
Report - Format findings as markdown output
Fix - If --fix flag, apply auto-fixes from findings
The JavaScript analyzer (lib/enhance/prompt-analyzer.js) implements all detection patterns including AST-based code validation. The patterns below are reference documentation.
Effective system prompts include: Role/Identity, Capabilities & Constraints, Instruction Priority, Output Format, Behavioral Directives, Examples, Error Handling.
Minimal Template:
<system>
You are [ROLE]. [PURPOSE].
Key constraints: [CONSTRAINTS]
Output format: [FORMAT]
When uncertain: [HANDLING]
</system>
Claude is fine-tuned for XML tags. Use: <role>, <constraints>, <output_format>, <examples>, <instructions>, <context>
<constraints>
- Maximum response length: 500 words
- Use only Python 3.10+ syntax
</constraints>
| Use CoT | Don't Use CoT |
|---|---|
| Complex multi-step reasoning | Simple factual questions |
| Math and logic problems | Classification tasks |
| Code debugging | When model has built-in reasoning |
Key: Modern models (Claude 4.x, o1/o3) perform CoT internally. "Think step by step" is redundant.
Helps: Creative tasks, tone/style, roleplay Doesn't help: Accuracy tasks, factual retrieval, complex reasoning
Better: "Approach systematically, showing work" vs "You are an expert"
Priority: System > Developer > User > Retrieved Content
Include explicit priority in prompts with multiple constraint sources.
Positive alternatives are more effective than negatives:
| Less Effective | More Effective |
|---|---|
| "Don't use markdown" | "Use prose paragraphs" |
| "Don't be vague" | "Use specific language" |
Lost-in-the-Middle: Models weigh beginning and end more heavily.
Place critical constraints at start, examples in middle, error handling at end.
High-level instructions ("Think deeply") outperform step-by-step guidance. "Think step-by-step" is redundant with modern models.
| Anti-Pattern | Problem | Fix |
|---|---|---|
| Vague references | "The above code" loses context | Quote specifically |
| Negative-only | "Don't do X" without alternative | State what TO do |
| Aggressive emphasis | "CRITICAL: MUST" | Use normal language |
| Redundant CoT | Wastes tokens | Let model manage |
| Critical info buried | Lost-in-the-middle | Place at start/end |
Vague Instructions: "usually", "sometimes", "try to", "if possible", "might", "could"
Negative-Only Constraints: "don't", "never", "avoid" without stating what TO do
Aggressive Emphasis: Excessive CAPS (CRITICAL, IMPORTANT), multiple !!
Missing XML Structure: Complex prompts (>800 tokens) without XML tags
Inconsistent Sections: Mixed heading styles, skipped levels (H1→H3)
Critical Info Buried: Important instructions in middle 40%, constraints after examples
Missing Examples: Complex tasks without few-shot, format requests without example
Suboptimal Count: Only 1 example (optimal: 2-5), more than 7 (bloat)
Missing Contrast: No good/bad labeling, no edge cases
Missing WHY: Rules without explanation
Missing Priority: Multiple constraint sections without conflict resolution
Missing Format: Substantial prompts without format specification
JSON Without Schema: Requests JSON but no example structure
Redundant CoT (HIGH): "Think step by step" with modern models
Overly Prescriptive (MEDIUM): 10+ numbered steps, micro-managing reasoning
Prompt Bloat (LOW): Over 2500 tokens, redundant instructions
Vague References (HIGH): "The above code", "as mentioned"
Replace CRITICAL→critical, !!→!, remove excessive caps
Suggest positive alternatives for "don't" statements
## Prompt Analysis: {prompt-name}
**File**: {path}
**Type**: {system|agent|skill|template}
**Token Count**: ~{tokens}
### Summary
- HIGH: {count} issues
- MEDIUM: {count} issues
### Clarity Issues ({n})
| Issue | Location | Fix | Certainty |
### Structure Issues ({n})
| Issue | Location | Fix | Certainty |
### Example Issues ({n})
| Issue | Location | Fix | Certainty |
| Category | Patterns | Auto-Fixable |
|---|---|---|
| Clarity | 4 | 1 |
| Structure | 4 | 0 |
| Examples | 4 | 0 |
| Context | 2 | 0 |
| Output Format | 3 | 0 |
| Anti-Pattern | 4 | 0 |
| Total | 21 | 1 |
<bad_example>
You should usually follow best practices when possible.
Why it's bad: Vague qualifiers reduce determinism. </bad_example>
<good_example>
Follow these practices:
1. Validate input before processing
2. Handle null/undefined explicitly
Why it's good: Specific, actionable instructions. </good_example>
<bad_example>
- Don't use vague language
- Never skip validation
Why it's bad: Only states what NOT to do. </bad_example>
<good_example>
- Use specific, deterministic language
- Always validate input; return structured errors
Why it's good: Each constraint includes positive action. </good_example>
<bad_example>
Think through this step by step:
1. First, analyze the input
2. Then, identify the key elements
Why it's bad: Modern models do this internally. Wastes tokens. </bad_example>
<good_example>
Analyze the input carefully before responding.
Why it's good: High-level guidance without micro-managing. </good_example>
<bad_example>
Respond with a JSON object containing the analysis results.
Why it's bad: No schema or example. </bad_example>
<good_example>
## Output Format
{"status": "success|error", "findings": [{"severity": "HIGH"}]}
Why it's good: Concrete schema shows exact structure. </good_example>
<bad_example>
# Task
[task]
## Background
[500 words...]
## Important Constraints <- buried at end
Why it's bad: Lost-in-the-middle effect. </bad_example>
<good_example>
# Task
## Critical Constraints <- at start
[constraints]
## Background
Why it's good: Critical info at start where attention is highest. </good_example>