From prompt-optimizer
Analyze and rewrite prompts to maximize effectiveness with Claude Code. Use this skill when the user asks to "optimize this prompt", "improve my prompt", "make this prompt better", "rewrite this for Claude", "help me write a better prompt", or says things like "I want Claude to do X but I'm not sure how to ask". Also trigger when the user shares a rough idea or draft prompt and asks for help turning it into something actionable. This skill covers prompt improvement, prompt structuring, prompt review, and prompt rewriting for any Claude Code task — coding, refactoring, debugging, planning, or creative work.
npx claudepluginhub ats-kinoshita-iso/agent-workshop --plugin prompt-optimizer

This skill uses the workspace's default tool permissions.
Take a user's raw idea, rough draft, or unstructured prompt and transform it into a well-structured, optimized prompt that gets the best results from Claude Code. The optimization is grounded in Anthropic's prompt engineering best practices and tailored to the user's current project context.
This skill focuses on how you ask — complementing the planning plugin
which focuses on how you plan.
Ask the user for the prompt they want to optimize, or extract it from the conversation context if they've already shared it. If the user describes an idea rather than a prompt, treat the description as the raw input.
Record the raw prompt verbatim — you'll need it for the side-by-side comparison at the end.
Before optimizing, gather project context that can strengthen the prompt: relevant files, existing patterns, and project conventions.
Note what you find — the optimized prompt should reference specific files, patterns, and conventions from the project rather than speaking generically.
Keep this scan focused and fast (under 30 seconds). You're looking for context to weave into the prompt, not doing a full research phase.
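A minimal sketch of such a scan, assuming a typical repository layout (none of these files are guaranteed to exist in any given project):

```shell
# Hypothetical quick scan; every path here is a common convention, not a given.
head -n 20 CLAUDE.md 2>/dev/null                    # project conventions, if present
ls package.json pyproject.toml go.mod 2>/dev/null   # detect the toolchain
git ls-files '*test*' 2>/dev/null | head -n 5       # sample the test layout
echo "scan complete"
```

Each command degrades gracefully when the file is absent, so the scan stays fast and never blocks on a missing convention.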
Evaluate the raw prompt against each dimension in references/OPTIMIZATION-RUBRIC.md. For each dimension, note whether the prompt is strong, weak, or missing it entirely.
The rubric covers: goal clarity, scope boundaries, success criteria, context references, output format, complexity calibration, and Claude Code-specific patterns.
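The per-dimension notes can stay terse. For instance, a hypothetical evaluation of a vague prompt might read:

```text
goal clarity:      weak — "improve performance" names no metric or target
scope boundaries:  missing
success criteria:  missing
context references: weak — "the API code" instead of a file path
output format:     strong — asks for a diff
```

The example dimensions and verdicts above are invented for illustration; the authoritative list lives in references/OPTIMIZATION-RUBRIC.md.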
Apply the rubric findings to produce an optimized prompt. Follow these principles:
Use tags like <context>, <constraints>, <output-format>, and <examples>
when the prompt has multiple distinct sections. For simple prompts, plain
language is fine — don't add structure for structure's sake.
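For instance, a multi-section prompt might be sketched like this (the task and details are invented for illustration):

```text
Add rate limiting to the public API.

<context>
Routes live in src/api/routes.ts; middleware follows the pattern in
src/auth/middleware.ts.
</context>

<constraints>
Only touch the API layer. Do not modify the auth middleware itself.
</constraints>

<output-format>
A short summary of the change, then the diff.
</output-format>
```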
The first sentence should state what "done" looks like. Move background and context after the goal.
State what's in scope and what's explicitly out of scope. This prevents Claude from over-engineering or touching unrelated code.
Define how to verify the result — specific tests to pass, linting commands, behavioral expectations, or output format requirements.
Replace generic references with specific ones discovered in Step 2:
- `src/auth/middleware.ts` instead of "the auth code"
- a concrete pattern file such as `src/api/routes.ts`
- "`bun test` passes" instead of "make sure tests pass"

Assess whether the task is simple, moderate, or complex. For complex tasks, recommend `/research-plan-implement` from the planning plugin instead of trying to capture everything in a single prompt. Explain why decomposition will get better results. If the user still wants a single prompt, structure it with explicit phases.

The optimized prompt should feel like a better version of what the user wanted to say, not a corporate template. Don't add formality the user didn't use. Don't change the task — clarify it.
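Putting the goal-first, scope, and success-criteria principles together, a plain-language prompt opening might look like this (the task details are invented for illustration):

```text
Done means: bun test passes and the /users endpoint rejects unauthenticated
requests with a 401.

In scope: src/api/routes.ts and the new rate-limit middleware file.
Out of scope: the existing auth middleware and unrelated routes.

Background: we currently return 500 for missing tokens, which confuses clients.
```

Note the ordering: the "done" statement leads, scope fences the work, and background comes last.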
Show the output in this format:
Display the rewritten prompt in a fenced code block so the user can copy it directly.
Present a comparison showing each significant change and the reasoning behind it. Format as a list:
For example:

- `src/api/routes.ts` pattern: gives Claude a concrete example to follow instead of inventing a new pattern.
- `bun test` as success criterion: gives Claude a verifiable completion signal.

If the task is complex, include a note:
This task involves [architectural decisions / multiple unknowns / cross-cutting changes]. Consider using `/research-plan-implement` to break it into a research phase, proposal, and phased plan before implementing.
Reference Claude Code features like `@file` syntax, slash commands, skills, and CLAUDE.md conventions where relevant. This isn't a generic prompt optimizer.

See references/EXAMPLES.md for before/after examples across different complexity levels.