Converts Flow SDK inline prompts, XML prompts, and prompt arrays to Output SDK `.prompt` files with YAML frontmatter. Use when migrating prompts from Flow SDK format to Output SDK prompt files.
```
/plugin marketplace add growthxai/output-claude-plugins
/plugin install outputai-flow-migrator@growthxai/output-claude-plugins
```
This skill guides the conversion of Flow SDK inline prompts, XML prompts, and JavaScript prompt arrays to Output SDK .prompt files with YAML frontmatter.
During migration, `prompts.ts` or `prompts.xml` files become `.prompt` files.

Flow SDK uses several prompt formats that need conversion:
**Inline template-literal prompts:**

```typescript
// activities.ts
const prompt = `You are an assistant. Analyze: ${text}`;
const response = await completion( {
  messages: [ { role: 'user', content: prompt } ]
} );
```
**Prompt arrays:**

```typescript
// prompts.ts
export const analyzePrompt = [
  { role: 'system', content: 'You are an expert analyst.' },
  { role: 'user', content: 'Analyze this: {{text}}' }
];
```
**XML prompts:**

```xml
<prompt name="analyze">
  <system>You are an expert analyst.</system>
  <user>Analyze this: {{text}}</user>
</prompt>
```
The target `.prompt` file format:

```
---
provider: openai
model: gpt-4o
temperature: 0.7
---
<system>
System message here.
</system>
<user>
User message with {{ variable }} interpolation.
</user>
```
Frontmatter fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `provider` | string | Yes | `openai` or `anthropic` |
| `model` | string | Yes | Model identifier |
| `temperature` | number | No | 0-1 sampling temperature |
| `max_tokens` | number | No | Maximum output tokens |
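For example, a frontmatter block that sets every field might look like this (the specific values are illustrative, not defaults):

```yaml
---
provider: anthropic
model: claude-3-5-sonnet-20241022
temperature: 0.2
max_tokens: 2000
---
```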
Supported models:

OpenAI:
- `gpt-4o`
- `gpt-4-turbo`
- `gpt-3.5-turbo`

Anthropic:
- `claude-3-5-sonnet-20241022`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`

Find prompts in the Flow SDK workflow:
```shell
# Check for prompt files
ls src/workflows/my-workflow/prompts.*

# Check for inline prompts in activities
grep -n "role: 'system'" src/workflows/my-workflow/activities.ts
grep -n "role: 'user'" src/workflows/my-workflow/activities.ts
```
Name format: `promptName@version.prompt`

```
analyzeDocument@v1.prompt
generateSummary@v1.prompt
extractEntities@v1.prompt
```
Before (`activities.ts`):

```typescript
const systemPrompt = 'You are a document analyzer.';
const userPrompt = `Analyze this document: ${documentText}`;
```

After (`analyzeDocument@v1.prompt`):

```
---
provider: openai
model: gpt-4o
temperature: 0.3
---
<system>
You are a document analyzer.
</system>
<user>
Analyze this document: {{ documentText }}
</user>
```
Before (`prompts.ts`):

```typescript
export const summarizePrompt = [
  { role: 'system', content: 'You summarize text concisely.' },
  { role: 'user', content: 'Summarize: {{text}}\nMax length: {{maxLength}}' }
];
```

After (`summarize@v1.prompt`):

```
---
provider: openai
model: gpt-4o
temperature: 0.5
---
<system>
You summarize text concisely.
</system>
<user>
Summarize: {{ text }}
Max length: {{ maxLength }}
</user>
```
Before (`prompts.xml`):

```xml
<prompt name="extract">
  <system>You extract key entities from text.</system>
  <user>
    Extract entities from:
    {{#if includeContext}}
    Context: {{context}}
    {{/if}}
    Text: {{text}}
  </user>
</prompt>
```

After (`extract@v1.prompt`):

```
---
provider: openai
model: gpt-4o
temperature: 0.2
---
<system>
You extract key entities from text.
</system>
<user>
Extract entities from:
{% if includeContext %}
Context: {{ context }}
{% endif %}
Text: {{ text }}
</user>
```
Before (`activities.ts`):

```typescript
import { summarizePrompt } from './prompts';

export async function summarize( text: string ): Promise<string> {
  const response = await completion( {
    model: 'gpt-4',
    messages: summarizePrompt.map( m => ( {
      ...m,
      content: m.content.replace( '{{text}}', text )
    } ) )
  } );
  return response.content;
}
```
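One reason to prefer prompt references over the manual `.replace()` approach above: `String.prototype.replace` with a string pattern substitutes only the first occurrence, so a variable used twice in a template is silently left half-filled. A quick illustration:

```typescript
const template = 'Summarize {{text}}. Focus on {{text}}.';

// String patterns replace only the FIRST occurrence:
console.log( template.replace( '{{text}}', 'the report' ) );
// → Summarize the report. Focus on {{text}}.
```

The `.prompt` file's template engine renders every occurrence of a variable, avoiding this class of bug.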
After (`steps.ts`):

```typescript
import { step, z } from '@output.ai/core';
import { generateText } from '@output.ai/llm';

export const summarize = step( {
  name: 'summarize',
  inputSchema: z.object( { text: z.string() } ),
  outputSchema: z.string(),
  fn: async ( input ) => {
    const result = await generateText( {
      prompt: 'summarize@v1',
      variables: {
        text: input.text
      }
    } );
    return result;
  }
} );
```
**Important:** Convert Handlebars to Liquid.js syntax.

| Handlebars | Liquid.js |
|---|---|
| `{{variable}}` | `{{ variable }}` |
| `{{#if cond}}` | `{% if cond %}` |
| `{{/if}}` | `{% endif %}` |
| `{{#each items}}` | `{% for item in items %}` |
| `{{/each}}` | `{% endfor %}` |
| `{{else}}` | `{% else %}` |

See `flow-convert-handlebars-to-liquid` for detailed conversion rules.
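As a rough illustration of these substitutions, a minimal regex-based converter might look like the following. This helper is hypothetical — it is not part of either SDK, and it ignores nested blocks, `{{else if}}`, and `{{this}}` bindings; use the dedicated skill for real conversions.

```typescript
// Hypothetical sketch of the table above — not part of either SDK.
// Block tags are rewritten first so the final variable rule only
// sees plain {{ ... }} interpolations.
function handlebarsToLiquid( template: string ): string {
  return template
    .replace( /\{\{#if\s+([^}]+?)\s*\}\}/g, '{% if $1 %}' )
    .replace( /\{\{\/if\}\}/g, '{% endif %}' )
    .replace( /\{\{#each\s+([^}]+?)\s*\}\}/g, '{% for item in $1 %}' )
    .replace( /\{\{\/each\}\}/g, '{% endfor %}' )
    .replace( /\{\{else\}\}/g, '{% else %}' )
    .replace( /\{\{\s*([\w.]+)\s*\}\}/g, '{{ $1 }}' );
}

console.log( handlebarsToLiquid( '{{#if ok}}Hi {{name}}{{/if}}' ) );
// → {% if ok %}Hi {{ name }}{% endif %}
```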
Complete example — before (`prompts.ts`):

```typescript
export const analyzeDocumentPrompt = [
  {
    role: 'system',
    content: `You are a document analysis expert. Analyze documents for:
- Key themes
- Important entities
- Sentiment
- Action items`
  },
  {
    role: 'user',
    content: `Document Type: {{documentType}}
{{#if previousAnalysis}}
Previous Analysis:
{{previousAnalysis}}
{{/if}}
Document Content:
{{content}}
Provide a comprehensive analysis.`
  }
];
```
After (`analyzeDocument@v1.prompt`):

```
---
provider: openai
model: gpt-4o
temperature: 0.3
max_tokens: 4000
---
<system>
You are a document analysis expert. Analyze documents for:
- Key themes
- Important entities
- Sentiment
- Action items
</system>
<user>
Document Type: {{ documentType }}
{% if previousAnalysis %}
Previous Analysis:
{{ previousAnalysis }}
{% endif %}
Document Content:
{{ content }}
Provide a comprehensive analysis.
</user>
```
After (`steps.ts`):

```typescript
import { step, z } from '@output.ai/core';
import { generateObject } from '@output.ai/llm';
import { AnalysisResultSchema, AnalysisResult } from './types.js';

const AnalyzeDocumentInputSchema = z.object( {
  documentType: z.string(),
  content: z.string(),
  previousAnalysis: z.string().optional()
} );

export const analyzeDocument = step( {
  name: 'analyzeDocument',
  inputSchema: AnalyzeDocumentInputSchema,
  outputSchema: AnalysisResultSchema,
  fn: async ( input ) => {
    const result = await generateObject<AnalysisResult>( {
      prompt: 'analyzeDocument@v1',
      variables: {
        documentType: input.documentType,
        content: input.content,
        previousAnalysis: input.previousAnalysis || ''
      },
      schema: AnalysisResultSchema
    } );
    return result;
  }
} );
```
File naming convention: `{descriptiveName}@{version}.prompt`

Examples:
- `analyzeDocument@v1.prompt`
- `generateSummary@v1.prompt`
- `extractEntities@v2.prompt`
- `translateContent@v1.prompt`
Checklist:
- All prompts converted to `.prompt` files
- Liquid spacing used: `{{ var }}`, not `{{var}}`
- Steps call `generateText()` or `generateObject()` with a prompt reference

Related skills:
- `flow-convert-handlebars-to-liquid` - Template syntax conversion
- `flow-convert-activities-to-steps` - Step conversion
- `flow-analyze-prompts` - Prompt cataloging