The open-source TypeScript framework for building AI workflows and agents. Designed for Claude Code — describe what you want, Claude builds it, with all the best practices already in place.
One framework. Prompts, evals, tracing, cost tracking, orchestration, credentials. No SaaS fragmentation. No vendor lock-in. Everything in your codebase, everything your AI coding agent can reach.
Every piece of the AI stack is becoming a separate subscription. Prompts in one tool. Traces in another. Evals in a third. Cost tracking across five dashboards. None of them talk to each other. Half of them will get acquired or shut down before your product ships.
Output brings everything together. One TypeScript framework, extracted from thousands of production AI workflows. Best practices baked in so beginners ship professional code from day one, and experienced AI engineers stop rebuilding the same infrastructure.
Output is the first framework designed for AI coding agents. The entire codebase is structured so Claude Code can scaffold, plan, generate, test, and iterate on your workflows. Every workflow is a folder — code, prompts, tests, evals, traces, all together. Your agent reads one folder and has full context.
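The folder-per-workflow layout might look like this; the exact file names are illustrative, inferred from the examples later in this README:

```
src/workflows/research/
├── workflow.ts       # orchestration: deterministic coordination logic
├── steps.ts          # I/O: API calls, LLM requests, database queries
├── analyze.prompt    # versioned prompt with YAML frontmatter
├── evals/            # evaluators and datasets
└── tests/            # workflow tests
```

Run traces land in logs/runs/, so the agent can read code, prompts, and execution history side by side.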
.prompt files with YAML frontmatter and Liquid templating. Version-controlled, reviewable in PRs, deployed with your code. Switch providers by changing one line. No subscription needed to manage your own prompts.
Every LLM call, HTTP request, and step traced automatically. Token counts, costs, latency, full prompt/response pairs. JSON in logs/runs/. Zero config. Claude Code analyzes your traces and fixes issues — because the data is in your file system.
LLM-as-judge evaluators with confidence scores. Inline evaluators for production retry loops. Offline evaluators for dataset testing. Deterministic assertions and subjective quality judges.
Anthropic, OpenAI, Azure, Vertex AI, Bedrock. One API. Structured outputs, streaming, tool calling — all work the same regardless of provider.
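The one-line provider switch happens in the .prompt frontmatter. Model names here are illustrative:

```yaml
# before
provider: anthropic
model: claude-sonnet-4-20250514

# after: only these lines change; the template body stays the same
provider: openai
model: gpt-4o
```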
Temporal under the hood. Automatic retries with exponential backoff. Workflow history. Replay on failure. Child workflows. Parallel execution with concurrency control. You don't think about Temporal until you need it — then it's already there.
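Parallel execution with a concurrency cap can be sketched as a plain helper. This is generic TypeScript illustrating the pattern, not Output's or Temporal's actual API:

```typescript
// Run fn over items with at most `limit` calls in flight at once.
// A minimal sketch of bounded fan-out, e.g. analyzing many sources in parallel.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;          // claim the next index
      results[i] = await fn(items[i]);
    }
  }
  // Spawn `limit` workers that drain the shared queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}
```

In a durable workflow the same fan-out gains retries and replay for free, since each underlying step result is cached.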
AI apps need a lot of API keys. Sharing .env files is risky, and coding agents shouldn't see your secrets. Output encrypts credentials with AES-256-GCM, scoped per environment and workflow, managed through the CLI. No external vault subscription needed.
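The AES-256-GCM pattern mentioned above looks roughly like this with Node's built-in crypto module. This illustrates the cipher scheme only, not Output's actual key-management code; the passphrase and salt are placeholders:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Encrypt a secret with AES-256-GCM: a fresh 12-byte nonce per message,
// plus a 16-byte auth tag so tampering is detected on decrypt.
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decryptSecret(blob: string, key: Buffer): string {
  const buf = Buffer.from(blob, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

// Derive a 256-bit key; passphrase and salt here are illustrative placeholders.
const key = scryptSync("local-master-passphrase", "example-salt", 32);
```

Because decryption fails if the auth tag does not verify, a corrupted or tampered credential store errors loudly instead of yielding garbage.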
npx @outputai/cli init
cd <project-name>
Add your API key to .env:
ANTHROPIC_API_KEY=sk-ant-...
npx output dev
This starts the full development environment. Run the bundled example workflow:
npx output workflow run blog_evaluator paulgraham_hwh
Inspect the execution:
npx output workflow debug <workflow-id>
For the full getting started guide, see the documentation.
Orchestration layer — deterministic coordination logic, no I/O.
// src/workflows/research/workflow.ts
workflow({
  name: 'research',
  fn: async (input) => {
    const data = await gatherSources(input);
    const analysis = await analyzeContent(data);
    const quality = await checkQuality(analysis);
    return quality.passed ? analysis : await reviseContent(analysis, quality);
  }
});
Where I/O happens — API calls, LLM requests, database queries. Each step runs once and its result is cached for replay.
// src/workflows/research/steps.ts
step({
  name: 'gatherSources',
  fn: async (input) => {
    const results = await searchApi(input.topic);
    return { sources: results };
  }
});
.prompt files with YAML frontmatter and Liquid templating.
---
provider: anthropic
model: claude-sonnet-4-20250514
temperature: 0
---
<system>You are a research analyst.</system>
<user>Analyze the following sources about {{ topic }}: {{ sources }}</user>
LLM-as-judge evaluation with confidence scores and reasoning.
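A judge verdict can be represented and validated like this. The shape is a hypothetical sketch; Output's real evaluator API may differ:

```typescript
// Hypothetical verdict shape for an LLM-as-judge evaluator.
interface JudgeVerdict {
  passed: boolean;
  confidence: number; // clamped to 0..1
  reasoning: string;
}

// Parse the judge model's JSON response and reject malformed output,
// so a retry loop can act on a trustworthy verdict.
function parseVerdict(raw: string): JudgeVerdict {
  const parsed = JSON.parse(raw);
  if (typeof parsed.passed !== "boolean" || typeof parsed.confidence !== "number") {
    throw new Error("judge returned a malformed verdict");
  }
  return {
    passed: parsed.passed,
    confidence: Math.min(1, Math.max(0, parsed.confidence)),
    reasoning: String(parsed.reasoning ?? ""),
  };
}
```

An inline evaluator in a production retry loop would call something like this after each attempt and regenerate when `passed` is false or `confidence` falls below a threshold.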