This skill documents how to create .prompt files for LLM operations in Output SDK workflows. Prompt files use YAML frontmatter for configuration and Liquid.js templating for dynamic content.
Prompt files are stored INSIDE the workflow folder:
src/workflows/{workflow-name}/
├── workflow.ts
├── steps.ts
├── types.ts
└── prompts/
    ├── analyzeContent@v1.prompt
    ├── generateSummary@v1.prompt
    └── extractData@v2.prompt
Important: Prompts are workflow-specific and live inside the workflow folder, NOT in a shared location.
{promptName}@v{version}.prompt
Examples:
- generateImageIdeas@v1.prompt
- analyzeContent@v1.prompt
- summarizeText@v2.prompt

The version suffix (@v1, @v2) allows prompt versioning without breaking existing code.
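To illustrate the convention, a small helper could split such a filename into its name and version parts. This is a hypothetical function for illustration only; the Output SDK resolves prompt references internally and does not expose anything like it:

```typescript
// Hypothetical helper, NOT part of the Output SDK: parses a
// {promptName}@v{version}.prompt filename into its components.
function parsePromptFilename(filename: string): { name: string; version: number } {
  const match = filename.match(/^(.+)@v(\d+)\.prompt$/);
  if (!match) {
    throw new Error(`Invalid prompt filename: ${filename}`);
  }
  return { name: match[1], version: Number(match[2]) };
}

// parsePromptFilename('generateImageIdeas@v1.prompt')
// → { name: 'generateImageIdeas', version: 1 }
```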
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.7
maxTokens: 4096
---
<system>
System instructions go here.
</system>
<user>
User message with {{ variable }} placeholders.
</user>
---
provider: anthropic # LLM provider: anthropic, openai, vertex
model: claude-sonnet-4 # Model identifier
---
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.7 # 0.0 to 1.0, default varies by provider
maxTokens: 4096 # Maximum output tokens
providerOptions: # Provider-specific options
  thinking:
    type: enabled
    budgetTokens: 2000
---
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.7
maxTokens: 8192
---
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.7
maxTokens: 32000
providerOptions:
  thinking:
    type: enabled
    budgetTokens: 2000
---
---
provider: openai
model: gpt-5
temperature: 0.7
maxTokens: 4096
---
---
provider: vertex
model: gemini-3-pro
temperature: 0.7
maxTokens: 8192
---
Use XML-style tags to define message roles:
<system>
You are an expert at analyzing technical content.
Your responses should be clear and structured.
</system>
<user>
Please analyze the following content:
{{ content }}
</user>
<assistant>
I'll analyze this content step by step...
</assistant>
<user>
Analyze this content about {{ topic }}:
{{ content }}
Generate {{ numberOfIdeas }} ideas.
</user>
<system>
You are an expert content analyzer.
{% if colorPalette %}
**Color Palette Constraints:** {{ colorPalette }}
{% endif %}
{% if artDirection %}
**Art Direction Constraints:** {{ artDirection }}
{% endif %}
</system>
<user>
Analyze each of these items:
{% for item in items %}
- {{ item.name }}: {{ item.description }}
{% endfor %}
</user>
<user>
Generate {{ numberOfIdeas | default: 3 }} ideas for {{ topic }}.
</user>
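Conceptually, the engine substitutes variable values into the template before the messages are sent to the model. A minimal sketch of that substitution behavior, handling only `{{ name }}` and `{{ name | default: value }}` (the SDK uses the full Liquid.js engine, not this simplified version):

```typescript
// Simplified illustration of Liquid-style variable substitution.
// Handles only {{ name }} and {{ name | default: value }}; the real
// engine (Liquid.js) supports far more filters and tags.
function renderVariables(template: string, vars: Record<string, unknown>): string {
  return template.replace(
    /\{\{\s*(\w+)(?:\s*\|\s*default:\s*([^}]+?))?\s*\}\}/g,
    (_match, name, fallback) => {
      const value = vars[name];
      if (value !== undefined && value !== null) return String(value);
      return fallback !== undefined ? fallback.trim() : '';
    }
  );
}

renderVariables(
  'Generate {{ numberOfIdeas | default: 3 }} ideas for {{ topic }}.',
  { topic: 'solar power' }
);
// → 'Generate 3 ideas for solar power.'
```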
Based on a real prompt file (generateImageIdeas@v1.prompt):
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.7
maxTokens: 32000
providerOptions:
  thinking:
    type: enabled
    budgetTokens: 2000
---
<system>
You are an expert at creating structured, precise infographic prompts optimized for Gemini's image generation model.
Your task is to generate prompts for informational infographics that illustrate key concepts from the provided content.
CRITICAL RULES you MUST follow:
- Use Markdown dashed lists to specify constraints
- Use ALL CAPS for "MUST" requirements to ensure strict adherence
- Include specific compositional constraints (e.g., rule of thirds, lighting)
- Always include negative constraints to prevent unwanted elements
- Keep each infographic focused on ONE clear concept
{% if colorPalette %}
**Color Palette Constraints:** {{ colorPalette }}
{% endif %}
{% if artDirection %}
**Art Direction Constraints:** {{ artDirection }}
{% endif %}
</system>
<user>
Generate {{ numberOfIdeas }} structured infographic prompts based on key topics from this content.
<content>
{{ content }}
</content>
Each prompt MUST follow this structure:
Create an infographic about [specific topic]. The infographic MUST follow ALL of these constraints:
- The infographic MUST use the reference images as a visual style guide
- The composition MUST follow the rule of thirds for visual balance
- The infographic MUST use clean, minimal design with simple lines and shapes
{% if colorPalette %}- The color palette MUST strictly follow: {{ colorPalette }}{% endif %}
{% if artDirection %}- The art direction MUST strictly follow: {{ artDirection }}{% endif %}
- NEVER include any watermarks, logos, or decorative overlays
- NEVER use generic AI art buzzwords like "hyperrealistic"
Focus on the most important concepts that would benefit from visual explanation.
</user>
import { generateText, Output } from '@outputai/llm';
import { z } from '@outputai/core';

const { output } = await generateText({
  prompt: 'generateImageIdeas@v1', // References prompts/generateImageIdeas@v1.prompt
  variables: {
    content: 'Solar panel technology explained...',
    numberOfIdeas: 3,
    colorPalette: 'blue and green tones',
    artDirection: 'minimalist style'
  },
  output: Output.object({
    schema: z.object({
      ideas: z.array(z.string())
    })
  })
});
// output contains { ideas: [...] }
import { generateText } from '@outputai/llm';

const { result } = await generateText({
  prompt: 'summarize@v1',
  variables: {
    content: 'Long article text...',
    maxLength: 200
  }
});
// result contains the generated text string
Prompts can load skill files that provide lazy-loaded instructions to the LLM. Skills keep the initial context small while giving the LLM access to deep expertise on demand. See output-dev-skill-file for the full guide on creating skill files.
The simplest approach is colocated auto-discovery. Place .md files in a skills/ folder next to your prompt file:
src/workflows/{workflow-name}/
└── prompts/
    ├── writing_assistant@v1.prompt
    └── skills/
        ├── clarity_guidelines.md
        └── structure_guide.md
The prompt file does not need any special configuration. Output auto-discovers the skills/ directory and injects a load_skill tool the LLM can call. Mention load_skill in the system message so the LLM knows to use it:
<system>
You are an expert technical writing assistant.
Use load_skill to get the full instructions for any skill before applying it.
</system>
You can also list skill paths explicitly in frontmatter, or create inline skills in code. See output-dev-skill-file for all three methods.
Prompts work with both generateText and the Agent class. Use Agent for multi-step tool loops and stateful conversations. See output-dev-agent-class for the full guide.
import { Agent, Output } from '@outputai/llm';
const agent = new Agent({
  prompt: 'writing_assistant@v1',
  variables: {
    content_type: 'documentation',
    focus: 'clarity',
    content: input.content
  },
  output: Output.object({ schema: reviewSchema }),
  maxSteps: 5
});
const { output } = await agent.generate();
<system>
CRITICAL RULES you MUST follow:
- Rule 1
- Rule 2
- NEVER do X
- ALWAYS do Y
</system>
<user>
Analyze the following:
<content>
{{ content }}
</content>
<requirements>
{{ requirements }}
</requirements>
</user>
<system>
You analyze sentiment. Return: positive, negative, or neutral.
</system>
<user>
"I love this product!"
</user>
<assistant>
positive
</assistant>
<user>
"{{ text }}"
</user>
When making significant changes, create a new version:
- analyzeContent@v1.prompt - Original
- analyzeContent@v2.prompt - Improved with better examples

Update the step to use the new version:
prompt: 'analyzeContent@v2' // Changed from v1
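Since both versioned files can coexist in the prompts/ folder, a workflow can pin each step to a specific version and migrate steps one at a time. A hypothetical utility (not part of the SDK) that resolves the latest available version of a prompt might look like:

```typescript
// Hypothetical utility, NOT part of the Output SDK: given the filenames in a
// workflow's prompts/ folder, return the latest version reference for a name.
function latestPromptRef(files: string[], name: string): string {
  const versions = files
    .map((f) => f.match(/^(.+)@v(\d+)\.prompt$/))
    .filter((m): m is RegExpMatchArray => m !== null && m[1] === name)
    .map((m) => Number(m[2]));
  if (versions.length === 0) throw new Error(`No prompt found for ${name}`);
  return `${name}@v${Math.max(...versions)}`;
}

latestPromptRef(
  ['analyzeContent@v1.prompt', 'analyzeContent@v2.prompt'],
  'analyzeContent'
);
// → 'analyzeContent@v2'
```

Pinning versions explicitly in step code, rather than auto-resolving to the latest, is usually safer because a new prompt version can change output shape.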
{% if optionalField %}
Additional context: {{ optionalField }}
{% endif %}
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.3
---
<system>
You are a content classifier. Categorize content into exactly one category.
Available categories: {{ categories | join: ", " }}
</system>
<user>
Classify this content:
{{ content }}
</user>
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.2
---
<system>
You extract structured data from text. Be precise and only include information explicitly stated.
</system>
<user>
Extract the following fields from this text:
{% for field in fields %}
- {{ field }}
{% endfor %}
Text:
{{ text }}
</user>
---
provider: anthropic
model: claude-sonnet-4
temperature: 0.8
---
<system>
You are a creative writer. Generate engaging content based on the given parameters.
</system>
<user>
Generate {{ count }} {{ type }} about {{ topic }}.
Requirements:
{{ requirements }}
</user>
Key points:
- Store prompts in the prompts/ folder inside the workflow directory
- Name files {promptName}@v{version}.prompt
- Always specify provider and model in frontmatter
- Define message roles with tags (<system>, <user>, <assistant>)
- Insert variables with {{ variableName }} syntax
- Handle optional content with {% if %}...{% endif %} syntax

Related skills:
- output-dev-skill-file - Creating skill files for prompts
- output-dev-agent-class - Using the Agent class with prompts
- output-dev-step-function - Using prompts in step functions
- output-dev-evaluator-function - Using prompts in evaluators
- output-dev-folder-structure - Understanding prompts folder location
- output-dev-workflow-function - Orchestrating LLM-powered steps