# enhance

Prompt enhancement skill for Claude Code.

**Analyze → Discover → Rewrite → Execute → Verify.**

Takes your vague task description, identifies what's missing, rewrites it into a structured prompt using proven techniques, generates testable success criteria, and executes with built-in verification.
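As a caricature, the first three stages could look like this. This is purely illustrative: the skill itself is a prompt, not Python code, and every function name below is hypothetical.

```python
# Hypothetical sketch of Analyze -> Discover -> Rewrite.
# The real skill is a Claude Code prompt; nothing here is its implementation.

def analyze(task: str) -> list[str]:
    """Flag underspecified aspects of a raw task description."""
    checks = {
        "metrics": "metric",
        "data source": "data",
        "output format": "format",
    }
    # A gap is any aspect the task never mentions.
    return [gap for gap, keyword in checks.items() if keyword not in task.lower()]

def rewrite(task: str, gaps: list[str]) -> str:
    """Restructure the task with an explicit requirement per gap."""
    reqs = "\n".join(f"  - specify {g}" for g in gaps)
    return f"<task>{task}</task>\n<requirements>\n{reqs}\n</requirements>"

gaps = analyze("build me a sales dashboard")
prompt = rewrite("build me a sales dashboard", gaps)
```

The real skill discovers gaps with the model's own reasoning rather than keyword checks; the sketch only shows the shape of the data flow.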
## The Problem
Research shows most users write vague, underspecified prompts. "Build me a dashboard" has no metrics, no design spec, no data source, no output format. The model guesses — and you get something, but not what you needed.
Benchmarks don't catch this because they test well-defined tasks. Real usage is vague.
## What /enhance Does
```
You: /enhance build me a sales dashboard

Enhance:

Issues found:
1. No metrics specified (revenue? orders? conversion?)
2. No design constraints (colors? responsive? typography?)
3. No data source (sample data? CSV? API?)
4. No output format (file path? single file?)
5. No interactivity spec (static? charts? filters?)
6. No quality target ("professional" is not measurable)

Enhanced task:

<role>Senior frontend developer</role>
<task>Build a single-file HTML sales dashboard</task>
<requirements>
1. 4 KPI cards (revenue, orders, AOV, conversion rate)
2. 3 interactive Chart.js charts (line, doughnut, bar)
3. Responsive grid with mobile breakpoint at 768px
4. Consistent 5-color palette, 16px+ body text
5. Embedded realistic sample data
</requirements>

Verification criteria:
1. File opens in browser without errors
2. Contains 4 KPI cards with labeled metrics
3. Contains 3 distinct charts
4. Has @media queries for responsive layout
5. Uses consistent color palette (max 5 colors)

Proceed? [Y/n]
```
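Criteria like these lend themselves to mechanical checks. A minimal sketch in Python (illustrative only; the skill's actual verification is performed by the model, and the markup patterns matched below are assumptions about what the generated file might contain):

```python
import re

# Illustrative checks for the verification criteria above.
# The class names and Chart.js call pattern are assumptions, not the
# skill's actual implementation.
def verify_dashboard(html: str) -> dict[str, bool]:
    return {
        # Criterion 2: at least 4 elements styled as KPI cards
        "has_4_kpi_cards": len(re.findall(r'class="[^"]*\bkpi\b', html)) >= 4,
        # Criterion 3: at least 3 Chart.js instantiations
        "has_3_charts": html.count("new Chart(") >= 3,
        # Criterion 4: responsive layout via media queries
        "responsive": "@media" in html,
        # Criterion 5: no more than 5 distinct hex colors
        "palette_ok": len(set(re.findall(r"#[0-9a-fA-F]{6}\b", html))) <= 5,
    }

# Tiny fabricated snippet just to exercise the checks:
sample = """
<div class="kpi">Revenue</div><div class="kpi">Orders</div>
<div class="kpi">AOV</div><div class="kpi">Conversion</div>
<style>@media (max-width: 768px) { .grid { display: block; } }
body { color: #111111; background: #FAFAFA; }</style>
<script>new Chart(a); new Chart(b); new Chart(c);</script>
"""
results = verify_dashboard(sample)
```

Criterion 1 (opens without errors) needs a real browser or headless runner and is omitted here.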
## Install
Add the marketplace and install:
```
/plugin marketplace add jatinmayekar/claude-code-enhance
/plugin install enhance@claude-code-enhance
```
Or use directly with `--plugin-dir`:
```bash
git clone https://github.com/jatinmayekar/claude-code-enhance.git
claude --plugin-dir ./claude-code-enhance
```
Works on macOS, Windows, and Linux. Requires the Claude Code CLI.
## Tested Results
3-way comparison: `/enhance` vs raw (no skill) vs `/iterate` (self-correction loop):
| Metric | /enhance | Raw | /iterate |
|---|---|---|---|
| Dashboard | 452 lines, 3 charts, responsive | 608 lines, 2 charts, responsive | 357 lines, 2 charts, 1 @media |
| API | 119 lines, werkzeug hashing, SQLite | 187 lines, bcrypt, SQLAlchemy | 151 lines, minimal hashing |
| Report | HTML + Chart.js (382 lines) | Python script (163 lines) | Python script (150 lines) |
**Key finding:** `/enhance` consistently beats `/iterate`. Fixing the input produces better results than fixing the output. Full results
## Principles
Every technique in /enhance is sourced from published research or Anthropic's official documentation:
| Technique | Source | Reference |
|---|---|---|
| Prompt rewriting with specificity | Anthropic Prompt Improver | Console docs |
| XML tags for structure | Anthropic Best Practices | Tutorial Ch. 4 |
| Chain-of-thought reasoning | Anthropic "Precognition" | Tutorial Ch. 6 |
| Role prompting | Anthropic "Assigning Roles" | Tutorial Ch. 3 |
| Output format specification | Anthropic "Formatting Output" | Tutorial Ch. 5 |
| Anti-hallucination via web search | Anthropic "Avoiding Hallucinations" | Tutorial Ch. 8 |
| Requirements discovery | "What Prompts Don't Say" (Yang et al., 2025) | arXiv:2505.13360 |
| Verification with external feedback | "When Can LLMs Actually Correct?" (MIT Press, 2025) | TACL |
| Vague-to-specific decomposition | DSPy (Stanford) | dspy.ai |
| Self-correction on failure | CorrectBench (2025) | arXiv:2510.16062 |
/enhance is not invented from scratch — it's a curated integration of proven techniques into a single Claude Code skill.
## How It Works