Comprehensive guide to the Output.ai Framework for building durable, LLM-powered workflows orchestrated by Temporal. Covers project structure, workflow patterns, steps, LLM integration, HTTP clients, and CLI commands. Claude references this skill when creating or modifying workflows that require AI orchestration, automatic retries, or distributed execution patterns.
Installation:

/plugin marketplace add growthxai/output-claude-plugins
/plugin install growthxai-outputai-plugins-outputai@growthxai/output-claude-plugins
This project uses Output Framework to build durable, LLM-powered workflows orchestrated by Temporal. Output Framework provides abstractions for creating reliable AI workflows with automatic retry, tracing, and error handling. Developers use it to build workflows like fact checkers, content generators, data extractors, research assistants, and multi-step AI agents.
Each workflow lives in its own folder under src/workflows/ and follows a consistent structure. Workflows define the orchestration logic, calling steps to perform external operations like API calls, database queries, and LLM inference. The system automatically handles retries, timeouts, and distributed execution through Temporal.
Temporal provides durable execution guarantees - if a workflow fails mid-execution, it resumes from the last successful step rather than restarting. Output Framework wraps Temporal's workflow and activity primitives with higher-level abstractions (workflow, step, evaluator) that enforce best practices and provide automatic tracing.
Each workflow is self-contained in a single folder with a predictable structure: workflow.ts contains the deterministic orchestration logic, steps.ts contains I/O operations (API calls, LLM inference), evaluators.ts contains analysis logic returning confidence-scored results, and prompts/*.prompt files define LLM prompts using Liquid.js templates with YAML frontmatter for model configuration.
Key packages: @output.ai/http (traced HTTP client with automatic retry), @output.ai/llm (LLM helpers), and z (Zod) from @output.ai/core to define input/output schemas.

src/workflows/{name}/
  workflow.ts      # Orchestration logic (deterministic)
  steps.ts         # I/O operations (APIs, LLM, DB)
  evaluators.ts    # Analysis steps returning EvaluationResult
  prompts/*.prompt # LLM prompts (name@v1.prompt)
  scenarios/*.json # Test scenarios
npx output dev # Start dev (Temporal:8080, API:3001)
npx output workflow list # List workflows
# Sync execution (waits for result)
npx output workflow run <name> --input <JSON|JSON_FILE> # Execute and wait
# Async execution
npx output workflow start <name> --input <JSON|JSON_FILE> # Start workflow, returns ID
npx output workflow status <workflowId> # Check execution status
npx output workflow result <workflowId> # Get result when complete
npx output workflow stop <workflowId> # Cancel running workflow
Workflows orchestrate steps. They must be deterministic (no direct I/O).
import { workflow, z } from '@output.ai/core';
import { processData, callApi } from './steps.js';

export default workflow({
  name: 'my-workflow',
  description: 'What this workflow does',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ result: z.string() }),
  fn: async (input) => {
    const data = await processData(input);
    const result = await callApi(data);
    return { result };
  }
});
Allowed imports: steps.ts, evaluators.ts, shared_steps.ts, types.ts, consts.ts, utils.ts
Forbidden in workflows: Direct API calls, Math.random(), Date.now(), dynamic imports
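Nondeterministic values such as timestamps or random IDs must be produced inside a step so Temporal can replay the workflow identically on recovery. A minimal sketch of this pattern (the `getTimestamp` step name is illustrative, not part of the framework):

```typescript
import { step, z } from '@output.ai/core';

// Hypothetical step that isolates Date.now() away from workflow code.
// Steps run as Temporal activities, so their results are recorded once
// and replayed unchanged if the workflow resumes after a failure.
export const getTimestamp = step({
  name: 'getTimestamp',
  description: 'Return the current time (nondeterministic, so it lives in a step)',
  inputSchema: z.object({}),
  outputSchema: z.object({ now: z.number() }),
  fn: async () => ({ now: Date.now() })
});
```

The workflow then calls `await getTimestamp({})` instead of invoking `Date.now()` directly.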
Steps contain all I/O operations. They are automatically retried on failure.
import { step, z } from '@output.ai/core';
import { httpClient } from '@output.ai/http';

const client = httpClient({ prefixUrl: 'https://api.example.com' });

export const fetchData = step({
  name: 'fetchData',
  description: 'Fetch data from external API',
  inputSchema: z.object({ id: z.string() }),
  outputSchema: z.object({ data: z.any() }),
  fn: async ({ id }) => {
    const response = await client.get(`items/${id}`).json();
    return { data: response };
  },
  options: {
    retry: { maximumAttempts: 3 }
  }
});
Use @output.ai/llm for all LLM operations. Prompts are defined in .prompt files.
Prompt file (summarize@v1.prompt):
---
provider: anthropic
model: claude-sonnet
temperature: 0.7
maxTokens: 2000
---
<system>You are a helpful assistant.</system>
<user>Summarize: {{ content }}</user>
Step using prompt:
import { step, z } from '@output.ai/core';
import { generateText, generateObject } from '@output.ai/llm';

export const summarize = step({
  name: 'summarize',
  inputSchema: z.object({ content: z.string() }),
  outputSchema: z.string(),
  fn: async ({ content }) => {
    return generateText({
      prompt: 'summarize@v1',
      variables: { content }
    });
  }
});

// For structured output
export const extractInfo = step({
  name: 'extractInfo',
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.object({ title: z.string(), summary: z.string() }),
  fn: async ({ text }) => {
    return generateObject({
      prompt: 'extract@v1',
      variables: { text },
      schema: z.object({ title: z.string(), summary: z.string() })
    });
  }
});
Available functions: generateText, generateObject, generateArray, generateEnum
Providers: anthropic, openai, azure
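As a sketch of the other structured helpers, a classification step built on generateEnum might look like the following. The exact generateEnum parameters are assumed to mirror generateObject, and the `sentiment@v1` prompt file is hypothetical:

```typescript
import { step, z } from '@output.ai/core';
import { generateEnum } from '@output.ai/llm';

// Hypothetical: classify text sentiment into a fixed label set.
export const classifySentiment = step({
  name: 'classifySentiment',
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.enum(['positive', 'neutral', 'negative']),
  fn: async ({ text }) => {
    return generateEnum({
      prompt: 'sentiment@v1',                      // assumed prompt file
      variables: { text },
      values: ['positive', 'neutral', 'negative'] // assumed parameter name
    });
  }
});
```

Constraining the output to an enum keeps downstream workflow logic simple: the schema rejects any free-form answer the model might produce.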
Use @output.ai/http for traced HTTP requests with automatic retry.
import { httpClient } from '@output.ai/http';
const client = httpClient({
  prefixUrl: 'https://api.example.com',
  timeout: 30000,
  retry: { limit: 3 }
});
// In a step:
const data = await client.get('endpoint').json();
const result = await client.post('endpoint', { json: payload }).json();
Evaluators analyze data and return confidence-scored results.
import { evaluator, EvaluationStringResult, z } from '@output.ai/core';

export const judgeQuality = evaluator({
  name: 'judgeQuality',
  inputSchema: z.string(),
  fn: async (content) => {
    // Analysis logic
    return new EvaluationStringResult({
      value: 'good',
      confidence: 0.95
    });
  }
});
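Inside a workflow, an evaluator result can gate downstream steps. A hedged sketch, reusing the judgeQuality evaluator and summarize step from above (the result's `confidence` field and the 0.8 threshold are assumptions for illustration):

```typescript
import { workflow, z } from '@output.ai/core';
import { judgeQuality } from './evaluators.js';
import { summarize } from './steps.js';

// Sketch: re-run generation once when the evaluation confidence is low.
export default workflow({
  name: 'summarize-with-check',
  description: 'Summarize content, re-generating once on a low-confidence evaluation',
  inputSchema: z.object({ content: z.string() }),
  outputSchema: z.object({ summary: z.string() }),
  fn: async ({ content }) => {
    let summary = await summarize({ content });
    const verdict = await judgeQuality(summary);
    if (verdict.confidence < 0.8) { // assumed result shape and threshold
      summary = await summarize({ content });
    }
    return { summary };
  }
});
```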
import { FatalError, ValidationError } from '@output.ai/core';
// Non-retryable error (workflow fails immediately)
throw new FatalError('Critical failure - do not retry');
// Validation error (input/output schema failure)
throw new ValidationError('Invalid input format');
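A common pattern is to convert permanent client errors into FatalError inside a step, so Temporal does not retry requests that can never succeed. A sketch, assuming the httpClient error exposes a `response.status` field (that shape is an assumption, not documented framework behavior):

```typescript
import { step, z, FatalError } from '@output.ai/core';
import { httpClient } from '@output.ai/http';

const client = httpClient({ prefixUrl: 'https://api.example.com' });

export const fetchItem = step({
  name: 'fetchItem',
  inputSchema: z.object({ id: z.string() }),
  outputSchema: z.object({ data: z.any() }),
  fn: async ({ id }) => {
    try {
      const data = await client.get(`items/${id}`).json();
      return { data };
    } catch (err: any) {
      // Assumed error shape: a 404 will never succeed on retry.
      if (err?.response?.status === 404) {
        throw new FatalError(`Item ${id} not found - not retrying`);
      }
      throw err; // other failures fall through to automatic retry
    }
  }
});
```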
For workflow planning and implementation, the following agents are available:
- .claude/agents/workflow_planner.md - Workflow architecture specialist
- .claude/agents/workflow_quality.md - Workflow quality and best practices specialist
- .claude/agents/workflow_prompt_writer.md - Prompt file creation and review specialist
- .claude/agents/workflow_context_fetcher.md - Efficient context retrieval (used by other agents)
- .claude/agents/workflow_debugger.md - Workflow debugging specialist

Commands:
- .claude/commands/plan_workflow.md - Planning command
- .claude/commands/build_workflow.md - Implementation command
- .claude/commands/debug_workflow.md - Debugging command

See the .env file for required environment variables (API keys, etc.).