From prompt-optimization
Detects 8 research-based prompt issues (BP-001 to BP-008), such as negative instructions or missing structure, and applies a 3-step optimization flow to produce improved versions across LLMs.
`npx claudepluginhub sniper-fly/souma-recette --plugin prompt-optimization`

This skill uses the workspace's default tool permissions.
1. **Model-Agnostic**: Patterns effective across GPT, Claude, Gemini, etc.
Evaluates prompt quality, optimizes using 58 techniques like CoT, few-shot learning, role-play. Useful for improving clarity, specificity, structure, or generating variations.
Optimizes LLM prompts using constitutional AI, chain-of-thought reasoning, and model-specific techniques. Transforms basic instructions into production-ready prompts to improve accuracy, reduce hallucinations, and cut costs, and provides guidance, best practices, and checklists for prompt engineering workflows.
High-confidence research evidence of negative impact.
| ID | Pattern | Research Basis |
|---|---|---|
| BP-001 | Negative Instructions | Attention mechanism structural issue. 75% failure rate in ArXiv studies |
| BP-002 | Vague Instructions | Primary failure cause. 40% of performance variance |
| BP-003 | Missing Output Format | Directly linked to hallucination reduction |
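The critical patterns above lend themselves to simple lexical checks. Below is a minimal sketch of such a detector; the regexes, word lists, and helper name are illustrative assumptions, not the skill's actual implementation:

```python
import re

# Hypothetical detector sketch for two of the critical patterns above.
# Pattern IDs (BP-001, BP-002) come from the table; the regexes here
# are illustrative assumptions, not the skill's actual rules.
NEGATIVE_INSTRUCTION = re.compile(r"\b(don't|do not|never|avoid)\b", re.IGNORECASE)
VAGUE_TERMS = re.compile(r"\b(good|nice|better|properly|appropriately)\b", re.IGNORECASE)

def detect_critical_patterns(prompt: str) -> list[str]:
    """Return the IDs of critical anti-patterns found in the prompt."""
    found = []
    if NEGATIVE_INSTRUCTION.search(prompt):
        found.append("BP-001")  # negative instruction: say what TO do instead
    if VAGUE_TERMS.search(prompt):
        found.append("BP-002")  # vague instruction: replace with measurable criteria
    return found

print(detect_critical_patterns("Don't be verbose; write good code."))
# → ['BP-001', 'BP-002']
```

A real detector would also need structural checks (e.g. for BP-003, whether an output format is specified), which plain regexes cannot express.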
Consistent improvement when addressed.
| ID | Pattern | Research Basis |
|---|---|---|
| BP-004 | Unstructured Prompt | "Structure > Length" confirmed |
| BP-005 | Missing Context | "More context = higher accuracy" confirmed |
| BP-006 | Complex Task Without Decomposition | ICLR 2023: 28% error reduction with decomposition |
Incremental improvements in specific contexts.
| ID | Pattern | Research Basis |
|---|---|---|
| BP-007 | Biased Examples | 40% of few-shot effectiveness depends on exemplar selection |
| BP-008 | No Uncertainty Permission | Allowing "I don't know" reduces hallucination |
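What granting an uncertainty permission (BP-008) looks like in practice can be sketched with a before/after pair. The permission wording and the helper below are hypothetical examples, not the skill's template:

```python
# Illustrative before/after for BP-008 (no uncertainty permission).
# The wording of the permission clause is an example, not the skill's template.
before = "Answer the user's question about our API."

after = (
    "Answer the user's question about our API. "
    "If the answer is not in the provided documentation, "
    "say \"I don't know\" instead of guessing."
)

def has_uncertainty_permission(prompt: str) -> bool:
    # Naive check: the prompt explicitly allows an "I don't know" response.
    return "I don't know" in prompt

print(has_uncertainty_permission(before))  # → False
print(has_uncertainty_permission(after))   # → True
```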
Input: Target prompt
Process: Detect patterns (BP-001 through BP-008)
Output: .claude/.rashomon/step1-analysis.md
Contents:
Input: Step 1 analysis
Process:
Output: `.claude/.rashomon/step2-optimized.md`
Contents:
Input: Step 2 output
Process:
Reference: `references/execution-quality.yaml`

CRITICAL: Clean up temporary files after completion.
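The three steps above, including the mandated cleanup, could be wired together roughly as follows. The analyze/optimize bodies are placeholders; only the file paths come from this document:

```python
from pathlib import Path

# Sketch of the 3-step flow, assuming the file locations named in this
# document. The per-step logic is a placeholder, not the skill's code.
WORKDIR = Path(".claude/.rashomon")

def run_optimization(prompt: str) -> str:
    WORKDIR.mkdir(parents=True, exist_ok=True)
    try:
        # Step 1: detect BP-001..BP-008 and record the findings.
        analysis = f"# Step 1 analysis\n\nTarget prompt:\n{prompt}\n"
        (WORKDIR / "step1-analysis.md").write_text(analysis)

        # Step 2: produce the optimized prompt from the analysis.
        optimized = prompt  # placeholder for the real rewrite
        (WORKDIR / "step2-optimized.md").write_text(optimized)

        # Step 3: evaluate against references/execution-quality.yaml (not shown).
        return optimized
    finally:
        # CRITICAL per the doc: clean up temporary files after completion.
        for f in WORKDIR.glob("step*-*.md"):
            f.unlink()

result = run_optimization("Summarize this document.")
```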
Apply 4-block pattern IF:
Skip when:
Decompose IF:
Key Insight: Goal is EVALUABLE GRANULARITY with QUALITY CHECKPOINTS, not decomposition itself.
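One way to encode that insight is to represent each subtask together with the checkpoint that makes it evaluable. The `Subtask` structure and the example plan below are hypothetical, not part of the skill:

```python
from dataclasses import dataclass

# Hypothetical data shape for the insight above: decomposition is only
# useful if each subtask carries a checkpoint that makes its output evaluable.
@dataclass
class Subtask:
    instruction: str
    checkpoint: str  # how to verify this step before moving on

plan = [
    Subtask("Extract every public function signature from the module.",
            "Output is a list; each entry parses as a valid signature."),
    Subtask("Write one docstring per extracted signature.",
            "Docstring count equals signature count."),
]

# A decomposition without checkpoints would fail this evaluability test:
assert all(s.checkpoint for s in plan)
```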
| Classification | Definition | Interpretation |
|---|---|---|
| Structural | Prompt structure, clarity, specificity improvements | Prompt writing technique |
| Context Addition | Project-specific information added from codebase investigation | Information advantage |
| Expressive | Different phrasing, equivalent substance | Neutral |
| Variance | Within LLM probabilistic variance | Original prompt sufficient |
Principle: Distinguish between prompt writing improvements (Structural) and information additions (Context Addition).
Reference: references/execution-quality.yaml for detailed criteria.
`references/patterns.yaml` - Detailed pattern definitions
`references/execution-quality.yaml` - Quality evaluation criteria