Prompt design, optimization, few-shot learning, and chain-of-thought techniques for LLM applications.
Provides prompt engineering techniques like few-shot learning, chain-of-thought, and ReAct patterns. Use when you need to optimize LLM outputs for complex reasoning tasks or structured responses.
```
/plugin marketplace add pluginagentmarketplace/custom-plugin-ai-engineer
/plugin install pluginagentmarketplace-ai-engineer-plugin@pluginagentmarketplace/custom-plugin-ai-engineer
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:

- assets/prompt_templates.yaml
- references/PROMPTING_TECHNIQUES.md
- scripts/prompt_optimizer.py

Master the art of crafting effective prompts for LLMs.
```python
# Simple completion prompt
prompt = """
You are a helpful assistant specialized in {domain}.

Task: {task_description}

Context: {relevant_context}

Instructions:
1. {instruction_1}
2. {instruction_2}

Output format: {desired_format}
"""
```
```python
# Using OpenAI
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
```
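The generated text is then available on the first choice:

```python
# The assistant's reply lives on the first choice's message
print(response.choices[0].message.content)
```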
```python
few_shot_prompt = """
Classify the sentiment of the following reviews.

Examples:

Review: "This product exceeded my expectations!"
Sentiment: Positive

Review: "Terrible quality, broke after one day."
Sentiment: Negative

Review: "It's okay, nothing special."
Sentiment: Neutral

Now classify:

Review: "{user_review}"
Sentiment:
"""
```
```python
cot_prompt = """
Solve this step by step:

Problem: {problem}

Let's think through this carefully:
1. First, identify what we know...
2. Then, determine what we need to find...
3. Apply the relevant formula/logic...
4. Calculate the result...

Final Answer:
"""
```
```python
# Self-consistency: sample multiple reasoning paths at a nonzero
# temperature, then keep the answer most paths agree on
responses = []
for _ in range(5):
    response = generate_with_cot(prompt, temperature=0.7)
    responses.append(response)

# Take majority vote
final_answer = majority_vote(responses)
```
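The helpers `generate_with_cot` and `majority_vote` are assumed above; a minimal sketch of the vote, assuming each reasoning path ends with a `Final Answer:` line:

```python
from collections import Counter

def majority_vote(responses: list[str]) -> str:
    # Keep only the text after the last "Final Answer:" in each response
    answers = [r.rsplit("Final Answer:", 1)[-1].strip() for r in responses]
    return Counter(answers).most_common(1)[0][0]
```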
```python
react_prompt = """
Answer the following question using the ReAct framework.

Question: {question}

Use this format:

Thought: [Your reasoning about what to do next]
Action: [The action to take: Search, Calculate, or Lookup]
Observation: [The result of the action]
... (repeat Thought/Action/Observation as needed)
Thought: I now have enough information to answer.
Final Answer: [Your answer]
"""
```
Expert Advisor:
"You are an expert {role} with 20+ years of experience.
Provide detailed, accurate, and actionable advice.
Always cite sources when making claims.
Ask clarifying questions if the request is ambiguous."
Code Assistant:
"You are a senior software engineer specializing in {language}.
Write clean, efficient, and well-documented code.
Follow {style_guide} conventions.
Include error handling and edge cases."
Data Analyst:
"You are a data analyst helping interpret {data_type}.
Explain insights in simple terms for non-technical audiences.
Highlight key findings and recommendations.
Note any limitations or caveats in the analysis."
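Any of these personas drops straight into the system role of the chat call shown earlier; a sketch using the Code Assistant persona with illustrative values (`Python`, `PEP 8`):

```python
# Persona text becomes the system message; values are illustrative
system_prompt = (
    "You are a senior software engineer specializing in Python. "
    "Write clean, efficient, and well-documented code. "
    "Follow PEP 8 conventions. "
    "Include error handling and edge cases."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a retry decorator."},
    ],
)
```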
```python
json_format_prompt = """
Extract the following information and return as JSON:

Text: {text}

Required fields:
- name (string)
- date (ISO 8601 format)
- amount (number)
- category (one of: income, expense, transfer)

Return ONLY valid JSON, no explanation.
"""
```
```python
structured_output = """
Analyze the text and respond in this exact format:

## Summary
[2-3 sentence summary]

## Key Points
- Point 1
- Point 2
- Point 3

## Recommendations
1. [First recommendation]
2. [Second recommendation]
"""
```
```python
def analyze_document(document):
    # Step 1: Extract key information
    extraction_prompt = f"Extract key entities from: {document}"
    entities = llm(extraction_prompt)

    # Step 2: Analyze relationships
    analysis_prompt = f"Analyze relationships between: {entities}"
    relationships = llm(analysis_prompt)

    # Step 3: Generate summary
    summary_prompt = f"Summarize findings: {relationships}"
    summary = llm(summary_prompt)

    return summary
```
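The `llm` helper in these snippets is left abstract; a minimal sketch, assuming the OpenAI `client` created earlier:

```python
def llm(prompt: str, model: str = "gpt-4") -> str:
    # Thin wrapper: one user message in, text of the first choice out
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```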
```python
def build_prompt(task_type, context, examples=None):
    base_prompt = PROMPT_TEMPLATES[task_type]
    if examples:
        example_str = format_examples(examples)
        base_prompt = f"{example_str}\n\n{base_prompt}"
    if context:
        base_prompt = base_prompt.replace("{context}", context)
    return base_prompt
```
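`PROMPT_TEMPLATES` and `format_examples` are assumed rather than defined; one possible shape, purely illustrative:

```python
# Hypothetical registry keyed by task type
PROMPT_TEMPLATES = {
    "classify": "Using this context:\n{context}\n\nClassify the input.",
    "summarize": "Using this context:\n{context}\n\nSummarize the input.",
}

def format_examples(examples):
    # examples: list of (input, output) pairs rendered as demonstration text
    return "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
```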
| Issue | Cause | Solution |
|---|---|---|
| Inconsistent output | Vague instructions | Add explicit format requirements and examples |
| Hallucinations | No grounding | Provide reference context |
| Verbose responses | No length constraint | Specify max length/format |
| Wrong tone | Missing persona | Add role/style instructions |
| Off-topic answers | Unclear scope | Define boundaries explicitly |
| Refuses task | Safety trigger | Rephrase the request |
```python
def prompt_with_validation(prompt, validator, max_retries=3):
    for _ in range(max_retries):
        response = llm(prompt)
        if validator(response):
            return response
        prompt = f"Previous response was invalid. {prompt}"
    raise ValueError("Max retries exceeded")
```
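As a usage sketch, a validator that simply checks the reply parses as JSON (`document` stands in for whatever text is being extracted):

```python
import json

def is_valid_json(response: str) -> bool:
    try:
        json.loads(response)
        return True
    except json.JSONDecodeError:
        return False

# Retry the extraction until the model returns parseable JSON
result = prompt_with_validation(json_format_prompt.format(text=document), is_valid_json)
```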
```python
import json

def test_prompt_generates_json():
    # Smoke test: the extraction prompt should yield parseable JSON
    response = llm(json_format_prompt.format(text="Paid $42 to ACME on 2024-01-15"))
    data = json.loads(response)
    assert "name" in data
```