Optimizes and engineers prompts for any AI engine with intent analysis and rewrites. Matches: "optimize this prompt", "improve my prompt", "make this prompt better", "rewrite this prompt for [engine]", "help me write a prompt", "prompt engineering", "fix my prompt", "build a prompt for", "craft a prompt", "prompt template for", "analyze my prompt", "better prompt for", "prompt optimization". Do NOT use for: brainstorm or ideation sessions (use brainstorm) — e.g. "brainstorm ideas", "explore options", "think through alternatives", creating visualizations or diagrams (use visualize) — e.g. "draw a flowchart", "visualize this", "create a diagram", general writing improvement without prompt focus (just answer directly) — e.g. "improve this email", "rewrite this paragraph", meeting preparation or debrief (use meeting-prep / meeting-debrief).
From the tandem marketplace: npx claudepluginhub binatrixai/tandem-marketplace --plugin tandem
Bundled files: evals/evals.json, references/anti-patterns.md, references/engine-profiles.md, references/templates.md, template.md
Analyze, optimize, and engineer prompts for any AI engine. Operates in two modes: Analyze + Rewrite (user provides a prompt) or Builder (construct from scratch).
Output follows ${CLAUDE_SKILL_DIR}/template.md and saves to ~/Tandem/creative/prompts/.
For engine-specific rules, load ${CLAUDE_SKILL_DIR}/references/engine-profiles.md.
For anti-pattern detection, load ${CLAUDE_SKILL_DIR}/references/anti-patterns.md.
For template routing, load ${CLAUDE_SKILL_DIR}/references/templates.md.
See the language mirror rule in METHODOLOGY.md: reply in the user's language.
Determine the mode from the user's input: if the user provides an existing prompt, use Analyze + Rewrite; if they want a prompt created from scratch, use Builder.
Auto-detect the target AI engine from the user's message and context.
| Signal | Engine Profile |
|---|---|
| "Claude", "Anthropic", no engine mentioned | Claude |
| "GPT", "ChatGPT", "OpenAI" | GPT-4 |
| "Gemini", "Google AI", "AI Studio" | Gemini |
| "Midjourney", "MJ", "/imagine" | Midjourney |
| "Nano Banana", "image generation", "generate image" | Nano Banana |
Load the matching profile from ${CLAUDE_SKILL_DIR}/references/engine-profiles.md.
If auto-detection is ambiguous or fails, fall back to asking:
AskUserQuestion: "Which AI tool will you use this prompt with?"
Options: ["Claude", "ChatGPT/GPT-4", "Gemini", "Midjourney", "Nano Banana", "Other"]
If "Other" selected, use the Universal Fingerprint from engine-profiles.md.
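The detection-plus-fallback logic above can be sketched as a simple keyword match. This is illustrative only: the signal lists mirror the table, but the function name and the naive substring matching are assumptions, not part of the skill's files.

```python
ENGINE_SIGNALS = {
    "Claude": ["claude", "anthropic"],
    "GPT-4": ["gpt", "chatgpt", "openai"],
    "Gemini": ["gemini", "google ai", "ai studio"],
    "Midjourney": ["midjourney", "mj", "/imagine"],
    "Nano Banana": ["nano banana", "image generation", "generate image"],
}

def detect_engine(message: str) -> str:
    """Return the first engine whose signal appears in the message.

    Falls back to Claude when no engine is mentioned, per the table.
    """
    text = message.lower()
    for engine, signals in ENGINE_SIGNALS.items():
        if any(signal in text for signal in signals):
            return engine
    return "Claude"
```

In practice the model judges context rather than doing literal substring matching; the sketch only shows the priority order and the default.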
Extract 9 dimensions from the user's prompt:
| # | Dimension | Description |
|---|---|---|
| 1 | Task definition | What the prompt asks the AI to do |
| 2 | Input specification | What data/context is provided |
| 3 | Desired output format | What shape the result should take |
| 4 | Operational constraints | Boundaries, limits, requirements |
| 5 | Background context | Domain, project, prior decisions |
| 6 | Intended audience | Who will consume the output |
| 7 | Session memory needs | What the AI should remember |
| 8 | Completion criteria | How to know the task is done |
| 9 | Reference examples | Any examples of desired output |
For each dimension, mark its status as Extracted (explicitly stated in the prompt), Inferred (derivable from context), or Missing.
Missing dimensions with HIGH impact on output quality trigger clarifying questions in Step 3.
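Assuming each dimension carries a status and an impact rating, the trigger rule above can be sketched as follows (the dict shape and function name are illustrative assumptions):

```python
DIMENSIONS = [
    "Task definition", "Input specification", "Desired output format",
    "Operational constraints", "Background context", "Intended audience",
    "Session memory needs", "Completion criteria", "Reference examples",
]

def needs_clarification(analysis: dict) -> list:
    """Return the HIGH-impact dimensions marked Missing.

    These are the only dimensions that should trigger
    clarifying questions in Step 3.
    """
    return [
        d for d in DIMENSIONS
        if analysis.get(d, {}).get("status") == "Missing"
        and analysis.get(d, {}).get("impact") == "HIGH"
    ]
```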
Load ${CLAUDE_SKILL_DIR}/references/anti-patterns.md.
Scan the user's prompt against the top 15 patterns. For each detected pattern: name it, explain why it hurts, suggest a concrete fix. Present all detected anti-patterns as a brief report BEFORE the rewrite.
Clarifying questions: If critical information is missing (from Step 2), ask at most three clarifying questions via AskUserQuestion (one at a time). Only ask if the missing information would materially change the output.
AskUserQuestion: "[Specific question about missing critical dimension]"
Options: ["[Contextual option 1]", "[Contextual option 2]", "Skip -- use your best judgment"]
Load ${CLAUDE_SKILL_DIR}/references/templates.md. Auto-select the best template:
| Task Type | Template |
|---|---|
| Simple one-shot task | RTF |
| Professional/business writing | CO-STAR |
| Multi-step sequential workflow | RISEN |
| Brand voice, creative writing | CRISPE |
| Logic, math, debugging, analysis | Chain of Thought |
| Format consistency via examples | Few-Shot |
| Code editing in IDE tools | File-Scope |
| Autonomous agent tasks | ReAct + Stop Conditions |
| Image generation (general) | Visual Descriptor |
| Image editing/modification | Reference Image Editing |
| ComfyUI/Stable Diffusion pipelines | ComfyUI |
| Analyzing existing prompts | Prompt Decompiler |
Route silently -- do NOT present template choice to user unless they ask.
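The silent routing can be sketched as a plain lookup. Classifying the task type is itself a judgment call left to the model; this sketch only shows the mapping and the fallback, and the RTF default for unrecognized task types is an assumption.

```python
TEMPLATE_ROUTES = {
    "simple one-shot task": "RTF",
    "professional/business writing": "CO-STAR",
    "multi-step sequential workflow": "RISEN",
    "brand voice, creative writing": "CRISPE",
    "logic, math, debugging, analysis": "Chain of Thought",
    "format consistency via examples": "Few-Shot",
    "code editing in IDE tools": "File-Scope",
    "autonomous agent tasks": "ReAct + Stop Conditions",
    "image generation (general)": "Visual Descriptor",
    "image editing/modification": "Reference Image Editing",
    "comfyui/stable diffusion pipelines": "ComfyUI",
    "analyzing existing prompts": "Prompt Decompiler",
}

def route_template(task_type: str) -> str:
    """Map a classified task type to a template; default to RTF."""
    return TEMPLATE_ROUTES.get(task_type.lower(), "RTF")
```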
Apply only bounded-effect techniques to the rewrite:
1. Role assignment -- Expert identity calibration. Assign a specific expert persona that matches the task domain.
2. Few-shot examples -- 2-5 examples for format consistency. Include when the desired output format is non-obvious. Skip for simple tasks.
3. XML structural tags -- For Claude: use <tags>. For GPT/Gemini: use markdown headers (##). Match the engine's preferred syntax from the profile.
4. Grounding anchors -- Anti-hallucination rules for factual tasks. Add "Only use information from the provided context" or similar constraints when the task involves facts, data, or references.
5. Chain of Thought -- Step-by-step reasoning for complex tasks. ONLY for non-reasoning models. SKIP CoT for o1, o3, DeepSeek-R1 (these have built-in reasoning; adding CoT degrades their performance).
Explicitly EXCLUDED techniques: Tree of Thought, Graph of Thought, Universal Self-Consistency, prompt chaining, multi-step chains.
Compress the rewrite without losing meaning.
Present the output in this order:
1. Anti-patterns detected (from Step 3): Brief list with fixes. If none detected, state "No anti-patterns detected."
2. 9-dimension analysis summary:
| Dimension | Status | Notes |
|---|---|---|
| Task definition | Extracted/Inferred/Missing | [brief note] |
| ... | ... | ... |
3. Key changes explained: What was improved and why (2-5 bullet points). Educational -- help the user understand prompt engineering principles.
4. Optimized prompt: Single fenced code block, clearly demarcated, ready to copy-paste.
[The complete rewritten prompt here]
5. Strategy note: One line: which template and techniques were applied. Example: "Used RTF template with role assignment and grounding anchors for Claude."
6. Save offer:
For text engine prompts:
AskUserQuestion: "What would you like to do with this prompt?"
Options: ["Save as-is", "Refine further", "Try a different engine", "Done -- don't save"]
For Nano Banana / image engine prompts:
AskUserQuestion: "What would you like to do with this prompt?"
Options: ["Generate this image with Nano Banana", "Save as-is", "Refine further", "Try a different engine", "Done -- don't save"]
When the detected engine is Nano Banana (or any image generation engine) and the user selects "Generate this image with Nano Banana": pass the optimized prompt to the nano-banana skill using its direct mode format ("generate: [optimized prompt]"). The nano-banana skill will handle generation, file save, and auto-open.
1. Save prompt file:
Write to ~/Tandem/creative/prompts/prompt-YYYY-MM-DD-slug.md using
${CLAUDE_SKILL_DIR}/template.md. The slug is: lowercase task summary,
spaces to hyphens, strip special characters, truncate to 40 characters.
Create the directory if it does not exist.
2. Log to stats.json:
Read ~/Tandem/stats.json. If it does not exist, create it as [].
Parse the JSON array, append a new entry, and write it back:
{
"type": "prompt-master",
"action": "optimized",
"count": 1,
"timeSavedMinutes": 10,
"description": "Prompt: [slug] for [engine]",
"timestamp": "<current ISO 8601 UTC>"
}
3. Run /sync:
After appending to stats.json, follow the /sync workflow from
tandem-skills/core/sync/SKILL.md to rebuild ~/Tandem/dashboard.html with
updated statistics. If /sync fails, continue -- the prompt file is the
primary deliverable.
4. Confirm to user:
Report the saved file path: "Prompt saved to ~/Tandem/creative/prompts/prompt-YYYY-MM-DD-slug.md"
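Steps 1 and 2 of the save workflow can be sketched as follows. This is a minimal illustration of the stated rules, not the skill's actual implementation: the function names are assumptions, and collapsing repeated hyphens in the slug is a hygiene choice the spec does not mention.

```python
import json
import re
from datetime import datetime, timezone
from pathlib import Path

def slugify(task_summary: str) -> str:
    """Lowercase, spaces to hyphens, strip special characters, truncate to 40."""
    slug = task_summary.lower().replace(" ", "-")
    slug = re.sub(r"[^a-z0-9-]", "", slug)
    slug = re.sub(r"-+", "-", slug)  # collapse hyphen runs (assumption)
    return slug[:40]

def prompt_filename(task_summary: str) -> str:
    date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return f"prompt-{date}-{slugify(task_summary)}.md"

def log_prompt_stat(stats_path: Path, slug: str, engine: str) -> None:
    """Append one entry to stats.json, creating the file as [] if absent."""
    entries = json.loads(stats_path.read_text()) if stats_path.exists() else []
    entries.append({
        "type": "prompt-master",
        "action": "optimized",
        "count": 1,
        "timeSavedMinutes": 10,
        "description": f"Prompt: {slug} for {engine}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    stats_path.write_text(json.dumps(entries, indent=2))
```

Reading the whole array and writing it back keeps stats.json valid JSON rather than appending raw text to the file.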
When the user has no starting prompt, construct one from scratch via guided Q&A.
Q1: Task definition
AskUserQuestion: "What do you want the AI to do?"
Free text response. Capture the core task.
Q2: Target engine
AskUserQuestion: "Which AI tool will you use?"
Options: ["Claude", "ChatGPT/GPT-4", "Gemini", "Midjourney", "Nano Banana", "Other"]
Load the selected engine profile.
Q3: Output format
For text engines:
AskUserQuestion: "What output format do you need?"
Options: ["Paragraph", "Code", "List/Steps", "Table", "JSON/Structured"]
For image engines:
AskUserQuestion: "What type of image do you need?"
Options: ["Photo-realistic image", "Illustration/Art", "Logo/Icon", "Edit existing image"]
After collecting answers, construct a draft prompt from the responses and proceed to Step 4 (Framework Routing) through Step 7 (Delivery). Skip Steps 1-3 since the information was gathered via Q&A.
Memory is user-triggered only. Offer to remember: