Generate optimized LLM prompts using the bundled prompt engineering guide and optional research. Delegates to the llm-prompt-engineer agent. Use when the user wants to create a new prompt, system instruction, or agent definition from scratch. Triggers on "craft a prompt", "write a prompt", "generate a system prompt", "create an agent prompt", "prompt for", or any request to produce a new LLM prompt.
From `mjmorales/claude-prove` (plugin: `prove`). This skill uses the workspace's default tool permissions.
Delegate to the llm-prompt-engineer agent to generate an optimized prompt from a user's requirements.
Gather these inputs from the user (ask if missing): the intent of the prompt and any constraints on it. Do NOT delegate until intent and constraints are clear.
Check if the user passed --research or explicitly asked for live research.
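That flag check could be sketched as follows. This is a hypothetical illustration, not part of the skill itself; the helper name and argument format are assumptions:

```python
def wants_live_research(args: list, user_request: str = "") -> bool:
    # True if the invocation carried --research, or the user's own
    # wording explicitly asked for live research.
    return "--research" in args or "live research" in user_request.lower()
```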
Based on that check, choose a research mode:

- Default (no flag): Tell the agent to rely on references/prompt-engineering-guide.md and any cached research. Do NOT use WebSearch/WebFetch.
- --research flag or explicit user request: Tell the agent to research using WebSearch/WebFetch, then cache the results for future use.

Pass the gathered requirements to llm-prompt-engineer with these directives:

- Apply the bundled guide (references/prompt-engineering-guide.md).
- Check the research caches: plugin-local (cache/prompting/ in the plugin dir), global (~/.claude/cache/prompting/), and project (.prove/cache/prompting/).
- With --research: perform live web research and cache the results.
- Annotate key choices with comments (e.g., <!-- Primacy effect: critical constraint placed first -->).

Ask the user if they want to iterate, for example by rerunning with live research (--research). If the user provides a file path, write the final prompt there. Otherwise, output directly.
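The three cache locations above imply a lookup order. A minimal sketch of that lookup, assuming the directories listed in the skill text; the function name and parameters are hypothetical, and the real skill performs this check via the agent rather than code:

```python
from pathlib import Path
from typing import Optional

def find_cached_research(plugin_dir: str, home: str, project_dir: str) -> Optional[str]:
    # Check the three cache locations in the order they are listed:
    # plugin-local, then global, then project.
    candidates = [
        Path(plugin_dir) / "cache" / "prompting",              # plugin-local
        Path(home) / ".claude" / "cache" / "prompting",        # global
        Path(project_dir) / ".prove" / "cache" / "prompting",  # project
    ]
    for cache in candidates:
        if cache.is_dir():
            return str(cache)
    return None  # no cached research; fall back to the bundled guide
```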