```shell
npx claudepluginhub delexw/claude-code-misc
```

This skill is limited to using the following tools:
Prompt evaluation and optimization using the `meta-prompter-mcp` CLI. Returns <OPTIMIZED_PROMPT> for the caller to execute.
Iteratively optimizes prompts using prompt-reviewer for issue detection, user input for ambiguities, and prompt-engineering for fixes until no issues remain. Use on prompt files needing refinement.
Optimizes agent, system, and developer prompts and templates. Ports prompts between OpenAI, Claude, and Gemini; builds evals. Use for improving reliability, tuning wording, or debugging failures.
Generates meta-prompts for Claude-to-Claude multi-stage workflows (research, plan, execute, refine). Organizes outputs in .prompts/ folders with XML artifacts and SUMMARY.md summaries.
Share bugs, ideas, or general feedback.
Raw arguments: $ARGUMENTS
Infer from the arguments:
- OUT_DIR: defaults to `.implement-assets/meta-prompter` if not provided.
- Session ID: generate `sess-YYYYMMDD-HHMMSS` and use it for all files.

See references/rules.md for CLI usage, environment variables (model configuration), and error handling.
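A minimal sketch of this setup, assuming a POSIX shell; the variable names `OUT_DIR` and the `sess-YYYYMMDD-HHMMSS` format come from this document, but the exact way arguments are parsed is an assumption:

```shell
# Default output directory when none is provided (per the source).
OUT_DIR="${OUT_DIR:-.implement-assets/meta-prompter}"
# Session ID in the sess-YYYYMMDD-HHMMSS format; reused for all files.
SESSION_ID="sess-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$OUT_DIR"
echo "$SESSION_ID"
```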
Execute all steps A through E:
- Run `npx meta-prompter-mcp "CONTEXT"` via Bash, with the `--model` flag per references/rules.md model resolution.
- Use the AskUserQuestion tool to ask each question, collecting the answers as:

```json
{ "answers": [
  {"q": "<question1>", "answer": "<user answer>"},
  {"q": "<question2>", "answer": "<user answer>"},
  {"q": "<question3>", "answer": "<user answer>"}
] }
```
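As a concrete illustration, the answers payload can be written to a file and sanity-checked before it is fed into the rewrite step; the file name and the sample questions here are hypothetical, not specified by the source:

```shell
# Write a sample answers payload (file name and contents are hypothetical).
cat > answers.json <<'EOF'
{ "answers": [
  {"q": "Who is the target audience?", "answer": "backend developers"},
  {"q": "What output format?", "answer": "markdown"}
] }
EOF
# Cheap sanity check: count the answer entries.
grep -c '"answer"' answers.json
```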
- Build the rewritten prompt from the evaluation result (include "original_prompt" and, if present, "contextual_prompt"), then run `npx meta-prompter-mcp "<built PROMPT>"` via Bash (with the `--model` flag per references/rules.md model resolution).
- If `global >= 8`, proceed using the <OPTIMIZED_PROMPT>.
- If `global < 8`, retry; if `global < 8` still holds after 3 attempts, use the best-scoring prompt as <OPTIMIZED_PROMPT> and note the score.
- Run `mkdir -p OUT_DIR` via Bash and write `OUT_DIR/soul.md`.
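The scoring loop above can be sketched in shell. The actual CLI call is stubbed out because this document does not show the CLI's output format; the stub values only illustrate the early-exit and best-of-3-attempts rules:

```shell
best_score=0
best_prompt=""
for attempt in 1 2 3; do
  # Real step would be something like:
  #   result=$(npx meta-prompter-mcp "$PROMPT")   # then extract the global score
  # Stub values so the control flow is runnable for illustration:
  score=$attempt
  prompt="candidate-$attempt"
  # Track the best-scoring prompt seen so far.
  if [ "$score" -gt "$best_score" ]; then
    best_score=$score
    best_prompt=$prompt
  fi
  # Stop early once the global score reaches the threshold of 8.
  if [ "$score" -ge 8 ]; then
    break
  fi
done
echo "best=$best_prompt score=$best_score"
```

With the stub scores 1, 2, 3, no attempt reaches 8, so the loop falls back to the best-scoring candidate, exactly as the rules above require.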