Comprehensive framework for analyzing, creating, and refining prompts for AI systems (v2.0 adds Phase 0 expertise loading and quality scoring). Use when creating prompts for Claude, ChatGPT, or other language models, improving existing prompts, or applying evidence-based prompt engineering techniques. Integrates with recursive improvement loop as Phase 2 of 5-phase workflow. Distinct from prompt-forge (which improves system prompts).
/plugin marketplace add DNYoussef/context-cascade
/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files:

- CHANGELOG.md
- COGNITIVE-ARCHITECTURE-ADDENDUM.md
- RECURSIVE-IMPROVEMENT-ADDENDUM.md
- SKILL.md.pre-sidecar-backup
- examples/chain-of-thought-example.md
- examples/example-1-basic.md
- examples/few-shot-optimization-example.md
- examples/prompt-engineering-complete-guide.md
- graphviz/prompt-architect-process.dot
- metadata.json
- prompt-architect-process.dot
- references/VERILINGUA_VCL_VERIX_Guide_v3_Synthesized.md.pdf
- references/anti-patterns.md
- references/meta-principles.md
- references/readme.md
- references/verification-synthesis.md
- resources/optimization-config.json
- resources/optimization-engine.js
- resources/pattern-detector.sh
- resources/pattern-library.yaml

A comprehensive framework for creating, analyzing, and refining prompts for AI language models using evidence-based techniques, structural optimization principles, and systematic anti-pattern detection.
Before writing ANY code, you MUST check:
- .claude/library/catalog.json
- .claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md
- D:\Projects\*

| Match | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern exists | FOLLOW pattern |
| In project | EXTRACT |
| No match | BUILD (add to library after) |
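The decision table can be applied mechanically. Below is a minimal Python sketch, assuming a hypothetical catalog.json schema with an `entries` list whose items carry a `description` field; the token-overlap scorer is a crude stand-in for whatever matcher you actually use.

```python
import json

def similarity(task: str, entry: dict) -> float:
    """Crude token-overlap score; a real matcher would use embeddings."""
    task_tokens = set(task.lower().split())
    entry_tokens = set(entry.get("description", "").lower().split())
    return len(task_tokens & entry_tokens) / max(len(task_tokens), 1)

def reuse_decision(catalog_path: str, task: str) -> str:
    """Map the best catalog match to an action per the table above."""
    with open(catalog_path) as f:
        catalog = json.load(f)
    # "entries" is an assumed schema key, not a documented catalog format.
    best = max((similarity(task, e) for e in catalog.get("entries", [])), default=0.0)
    if best > 0.90:
        return "REUSE directly"
    if best >= 0.70:
        return "ADAPT minimally"
    return "BUILD, then add the result to the library"
```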
Prompt Architect provides a systematic approach to prompt engineering that combines research-backed techniques with practical experience. Whether crafting prompts for Claude, ChatGPT, Gemini, or other systems, this skill applies proven patterns that consistently produce high-quality responses.
This skill is particularly valuable for developing prompts used repeatedly, troubleshooting prompts that aren't performing well, building prompt templates for teams, or optimizing high-stakes tasks where prompt quality significantly impacts outcomes.
Apply Prompt Architect in any of those situations. This skill treats prompts as engineered artifacts rather than casual conversational queries: the assumption is that you're creating prompts that provide compounding value through repeated or systematic use.
This skill operates using Claude Code's built-in tools only. No additional MCP servers required.
No MCPs are needed because prompt analysis and refinement are pure text operations that Claude Code's built-in tools already cover.
Before analyzing or creating prompts, check for domain expertise.
Check for Domain Expertise:
# Detect domain from prompt topic (detect_domain_from_prompt is a placeholder helper)
DOMAIN=$(detect_domain_from_prompt)

# Check whether expertise exists for that domain
[ -f ".claude/expertise/${DOMAIN}.yaml" ] && echo "Expertise found for ${DOMAIN}"
Load If Available:
if expertise_exists:
actions:
- Run: /expertise-validate {domain}
- Load: patterns, conventions, known_issues
- Apply: Use expertise to inform prompt design
benefits:
- Apply proven patterns (documented in expertise)
- Avoid known issues (prevent common failures)
- Match conventions (consistent with codebase)
else:
actions:
- Flag: Discovery mode
- Plan: Generate expertise learnings after prompt work
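A minimal Python sketch of Phase 0, assuming the file layout implied above (one YAML file per domain carrying `patterns`, `conventions`, and `known_issues` keys); the function and the example domain name are illustrative, not part of the skill's API.

```python
from pathlib import Path
import yaml  # PyYAML (pip install pyyaml)

def load_expertise(domain: str) -> dict | None:
    """Return patterns/conventions/known_issues if an expertise file exists."""
    path = Path(f".claude/expertise/{domain}.yaml")
    if not path.exists():
        return None  # discovery mode: plan to generate learnings afterwards
    data = yaml.safe_load(path.read_text()) or {}
    return {key: data.get(key, []) for key in ("patterns", "conventions", "known_issues")}

expertise = load_expertise("api-design")  # "api-design" is an illustrative domain
mode = "expertise-informed" if expertise else "discovery"
```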
When analyzing existing prompts, apply systematic evaluation across these dimensions:
Evaluate whether the prompt clearly communicates its core objective. Ask:
Strong prompts leave minimal room for misinterpretation of their central purpose.
Evaluate how the prompt is organized:
Effective structure guides the AI naturally through the task.
Determine whether adequate context is provided:
Strong prompts make required context explicit rather than assuming shared understanding.
Assess whether appropriate evidence-based techniques are employed:
Different task categories benefit from different prompting patterns.
Examine for common anti-patterns:
Identify what could go wrong and whether guardrails exist.
Evaluate presentation quality:
Good formatting enhances both machine and human comprehension.
When improving prompts, follow this systematic approach:
Begin by ensuring the central task is crystal clear:
A refined prompt should leave no doubt about its fundamental purpose.
Apply structural optimization:
Each section should build naturally on previous ones.
Enrich prompts with previously implicit or missing context:
Make assumptions explicit rather than hidden.
Incorporate research-validated patterns:
Match techniques to task requirements.
Add self-checking and validation:
Quality mechanisms increase reliability and reduce errors.
Anticipate and handle potential problems:
Proactive edge case handling prevents common failures.
Be explicit about desired output format:
Clear output specification prevents format ambiguity.
For tasks requiring factual accuracy or analytical rigor, instruct the AI to:
Example addition to prompt: "After reaching your conclusion, validate it by considering alternative interpretations of the evidence. Flag any areas where uncertainty exists."
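A minimal sketch of the self-consistency technique: sample several independent completions and keep the majority answer. `generate` is a placeholder for whatever model client you use; nothing here assumes a specific API.

```python
from collections import Counter

def self_consistent_answer(generate, prompt: str, samples: int = 5) -> str:
    """Sample independent answers and keep the majority vote."""
    answers = [generate(prompt) for _ in range(samples)]
    answer, count = Counter(answers).most_common(1)[0]
    if count <= samples // 2:
        # No strict majority: surface the disagreement instead of guessing.
        return f"UNCERTAIN (no majority): candidates={set(answers)}"
    return answer
```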
For mathematical, logical, or step-by-step problem-solving tasks:
Example structure: "Solve this problem step by step. For each step, explain your reasoning before moving to the next step. Show all intermediate calculations."
For complex multi-stage workflows:
Example structure: "First, create a detailed plan for how you'll approach this task. Then execute the plan systematically. Finally, verify your results against the original requirements."
For tasks with specific desired patterns:
Example pattern:
Here are examples of the desired format:
Input: [example 1 input]
Output: [example 1 output]
Input: [example 2 input]
Output: [example 2 output]
Now process: [actual input]
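A small helper can assemble this few-shot pattern from (input, output) pairs; this is a sketch of the pattern above, not a fixed API.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], actual_input: str) -> str:
    """Assemble the few-shot pattern above from (input, output) pairs."""
    lines = ["Here are examples of the desired format:", ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    lines.append(f"Now process: {actual_input}")
    return "\n".join(lines)
```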
For complex reasoning tasks:
Example addition: "Think through this step by step, explaining your reasoning at each stage. After reaching your conclusion, reflect on whether your reasoning was sound."
Critical information receives more attention when placed strategically:
This leverages how attention is distributed across prompts.
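A minimal sketch of this positioning principle, assuming the common primacy/recency heuristic: state the critical requirement first and restate it last.

```python
def position_critical_info(critical: str, body: str) -> str:
    """Place the critical instruction at the start and restate it at the end."""
    return f"{critical}\n\n{body}\n\nReminder of the key requirement: {critical}"
```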
For complex prompts, use clear hierarchy:
Hierarchy prevents information overload and aids navigation.
Use clear delimiters to separate different types of content:
For example, wrap supplied data in explicit markers such as `<data> ... </data>` so the model can tell it apart from the instructions. Delimiters prevent ambiguity about where instructions end and data begins.
Balance comprehensiveness with parsability:
Longer isn't always better—optimize for clarity and necessity.
Problem: Instructions that allow excessive interpretation
Solution: Use specific action verbs and concrete objectives
Problem: Instructions that conflict with each other
Solution: Prioritize requirements explicitly
Problem: Prompts so intricate they confuse rather than clarify
Solution: Simplify structure, use examples instead of complex rules
Problem: Assuming shared understanding that doesn't exist
Solution: Make context explicit
Problem: Not specifying handling for boundary conditions
Solution: Explicitly address likely edge cases
Problem: Unintentionally biased instructions
Solution: Use neutral language
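Several of these anti-patterns lend themselves to cheap lexical checks. The sketch below is a toy stand-in for the bundled resources/pattern-detector.sh; the regexes are illustrative, not exhaustive.

```python
import re

# Illustrative patterns only; a real detector would be far more complete.
ANTI_PATTERNS = {
    "vague_instruction": re.compile(r"\b(handle|deal with|improve|look at)\b", re.I),
    "leading_question": re.compile(r"\b(isn't it|don't you think|surely)\b", re.I),
    "binary_framing": re.compile(r"\b(yes or no|good or bad)\b", re.I),
}

def detect_anti_patterns(prompt: str) -> list[str]:
    """Return the names of any anti-patterns whose trigger phrases appear."""
    return [name for name, pattern in ANTI_PATTERNS.items() if pattern.search(prompt)]
```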
For creative tasks, optimize for:
Avoid: Over-constraining the creative process
For analytical tasks, optimize for:
Avoid: Allowing confirmation bias through leading questions
For code-generation tasks, optimize for:
Avoid: Vague requirements that lead to non-functional code
For transformation tasks, optimize for:
Avoid: Assuming obvious transformation patterns
For decision-making tasks, optimize for:
Avoid: Binary framing that prevents nuanced responses
While these principles apply broadly, adapt for specific models when possible:
When creating or refining a prompt:
1. Understand the Task: What are you actually trying to accomplish? What would success look like?
2. Draft Initial Prompt: Get something down quickly without over-optimizing.
3. Test and Observe: Try the prompt and note what works and what doesn't.
4. Apply Analysis Framework: Use the evaluation dimensions to identify issues.
5. Refine Systematically: Address issues using the refinement methodology.
6. Add Appropriate Techniques: Incorporate evidence-based patterns that fit the task.
7. Optimize Structure: Apply structural principles for clarity and attention.
8. Test Edge Cases: Try variations and boundary conditions.
9. Iterate: Refine based on actual performance.
10. Document: Record what worked for future reference.
When helping others improve their prompts:
- Explain Your Reasoning: Connect changes to underlying principles so they can generalize.
- Highlight Patterns: Point out recurring patterns across different prompts.
- Encourage Experimentation: Guide toward empirical testing rather than pure theory.
- Build Mental Models: Help them understand how language models process prompts.
- Promote Best Practices: Encourage documentation, version control, and systematic approaches.
The goal is building sustainable prompt engineering capabilities, not just fixing individual prompts.
Prompt Architect works with:
See: .claude/skills/META-SKILLS-COORDINATION.md for full coordination matrix.
Create prompt-architect-process.dot to visualize the workflow:
digraph PromptArchitect {
rankdir=TB;
compound=true;
node [shape=box, style=filled, fontname="Arial"];
start [shape=ellipse, label="Start:\nPrompt to Analyze", fillcolor=lightgreen];
end [shape=ellipse, label="Complete:\nOptimized Prompt", fillcolor=green, fontcolor=white];
subgraph cluster_phase0 {
label="Phase 0: Expertise Loading";
fillcolor=lightyellow;
style=filled;
p0 [label="Load Domain\nExpertise"];
}
subgraph cluster_analysis {
label="Analysis Phase";
fillcolor=lightblue;
style=filled;
a1 [label="Intent &\nClarity"];
a2 [label="Structure\nAnalysis"];
a3 [label="Context\nSufficiency"];
a1 -> a2 -> a3;
}
subgraph cluster_refinement {
label="Refinement Phase";
fillcolor=lightcoral;
style=filled;
r1 [label="Apply\nTechniques"];
r2 [label="Optimize\nStructure"];
r1 -> r2;
}
scoring [shape=diamond, label="Quality\nScore >= 0.7?", fillcolor=yellow];
start -> p0;
p0 -> a1;
a3 -> r1;
r2 -> scoring;
scoring -> end [label="yes", color=green];
scoring -> a1 [label="no", color=red, style=dashed];
labelloc="t";
label="Prompt Architect: Analysis & Refinement Workflow (v2.0)";
fontsize=16;
}
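Once saved, the diagram can be rendered with Graphviz, e.g. `dot -Tpng prompt-architect-process.dot -o prompt-architect-process.png`; any output format Graphviz supports, such as `-Tsvg`, works the same way.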
Effective prompt engineering combines art and science. These principles provide scientific foundation—research-backed techniques and structural optimization—but applying them requires judgment, creativity, and adaptation to specific contexts.
Master these fundamentals, then develop your own expertise through practice and systematic reflection on results. The most effective prompt engineers combine principled approaches with creative experimentation and continuous learning from actual outcomes.
Prompt Architect is part of the recursive self-improvement loop:
Prompt Architect (PHASE 2 SKILL)
|
+--> Optimizes USER prompts (Phase 2 of 5-phase workflow)
+--> Distinct from prompt-forge (which improves SYSTEM prompts)
+--> Can be improved BY prompt-forge
input_contract:
required:
- prompt_to_analyze: string # The prompt to improve
optional:
- context: string # What the prompt is for
- constraints: list # Specific requirements
- examples: list # Good/bad output examples
- expertise_file: path # Pre-loaded domain expertise
output_contract:
required:
- improved_prompt: string # The optimized prompt
- analysis_report: object # Scoring across dimensions
- changes_made: list # What was changed and why
optional:
- techniques_applied: list # Which evidence-based techniques
- confidence_score: float # How confident in improvement
- expertise_delta: object # Learnings for expertise update
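These contracts map naturally onto a typed structure. A minimal Python sketch mirroring the input contract's keys; the class and validator names are hypothetical, not part of the skill's API.

```python
from dataclasses import dataclass, field

@dataclass
class PromptArchitectInput:
    """Mirror of the input contract above; field names match the YAML keys."""
    prompt_to_analyze: str                       # required
    context: str | None = None                   # optional
    constraints: list[str] = field(default_factory=list)
    examples: list[str] = field(default_factory=list)
    expertise_file: str | None = None

def validate(payload: dict) -> PromptArchitectInput:
    """Reject payloads missing the one required field; pass the rest through."""
    if not payload.get("prompt_to_analyze"):
        raise ValueError("prompt_to_analyze is required")
    return PromptArchitectInput(**payload)
```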
scoring_dimensions:
clarity:
score: 0.0-1.0
weight: 0.25
checks:
- "Single clear action per instruction"
- "No ambiguous terms"
- "Explicit success criteria"
completeness:
score: 0.0-1.0
weight: 0.25
checks:
- "All inputs specified"
- "All outputs defined"
- "Edge cases addressed"
precision:
score: 0.0-1.0
weight: 0.25
checks:
- "Quantifiable where possible"
- "Constraints explicitly stated"
- "Trade-offs documented"
technique_coverage:
score: 0.0-1.0
weight: 0.25
checks:
- "Appropriate techniques applied"
- "Self-consistency for factual tasks"
- "Plan-and-solve for workflows"
overall_score: weighted_average
minimum_passing: 0.7
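The overall score is the weighted average of the four dimensions with a 0.7 pass gate. A minimal sketch, using the equal 0.25 weights declared above:

```python
WEIGHTS = {"clarity": 0.25, "completeness": 0.25, "precision": 0.25, "technique_coverage": 0.25}
MINIMUM_PASSING = 0.7

def overall_score(scores: dict[str, float]) -> tuple[float, bool]:
    """Weighted average across the four dimensions, with the 0.7 pass gate."""
    total = sum(scores[dim] * weight for dim, weight in WEIGHTS.items())
    return total, total >= MINIMUM_PASSING

# Using the scores from the sample analysis report later in this document:
score, passed = overall_score(
    {"clarity": 0.85, "completeness": 0.78, "precision": 0.82, "technique_coverage": 0.75}
)
# score == 0.80, passed == True
```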
Prompt improvements are tested against:
benchmark: prompt-generation-benchmark-v1
tests:
- pg-001: Simple Task Prompt
- pg-002: Complex Workflow Prompt
- pg-003: Analytical Task Prompt
minimum_scores:
clarity: 0.7
completeness: 0.7
precision: 0.7
regression: prompt-architect-regression-v1
tests:
- par-001: Clarity improvement preserved (must_pass)
- par-002: Evidence-based techniques applied (must_pass)
- par-003: Uncertainty handling present (must_pass)
namespaces:
- prompt-architect/analyses/{id}: Prompt analyses
- prompt-architect/improvements/{id}: Applied improvements
- prompt-architect/metrics: Performance tracking
- improvement/audits/prompt-architect: Audits of this skill
When prompt intent is unclear:
confidence_check:
if confidence >= 0.8:
- Proceed with optimization
- Document assumptions
if 0.5 <= confidence < 0.8:
- Present 2-3 interpretation options
- Ask user to confirm intent
- Document uncertainty areas
if confidence < 0.5:
- DO NOT proceed with optimization
- List what is unclear about the prompt
- Ask specific clarifying questions
- NEVER guess at intent
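The same gate as a minimal Python sketch; the returned strings just summarize the actions listed above.

```python
def confidence_gate(confidence: float) -> str:
    """Route per the thresholds above; never guess when confidence is low."""
    if confidence >= 0.8:
        return "proceed: optimize and document assumptions"
    if confidence >= 0.5:
        return "confirm: present 2-3 interpretations and ask the user to pick"
    return "stop: list what is unclear and ask specific clarifying questions"
```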
prompt_analysis_output:
prompt_id: "analysis-{timestamp}"
original_prompt: "..."
improved_prompt: "..."
scores:
clarity: 0.85
completeness: 0.78
precision: 0.82
technique_coverage: 0.75
overall: 0.80
changes:
- location: "Opening instruction"
before: "Analyze the data"
after: "Analyze this dataset to identify trends in user engagement"
rationale: "Replaced vague verb with specific action"
technique: "clarity_enhancement"
techniques_applied:
- self_consistency: true
- plan_and_solve: false
- program_of_thought: false
recommendation: "IMPROVED"
confidence: 0.85
After invoking this skill, you MUST complete ALL items below before proceeding:
[Single Message - ALL in parallel]:
Task("Agent 1", "Task description", "agent-type")
Task("Agent 2", "Task description", "agent-type")
TodoWrite({ todos: [5-10 items] })
Remember: Skill() -> Task() -> TodoWrite() - ALWAYS