Complete skill for building AI applications with OpenAI Agents SDK (JavaScript/TypeScript), covering text agents, realtime voice agents, multi-agent workflows, and production deployment patterns.
/plugin marketplace add secondsky/claude-skills/plugin install openai-agents@claude-skillsThis skill inherits all available tools. When active, it can use any tool Claude has access to.
references/agent-patterns.mdreferences/cloudflare-integration.mdreferences/common-errors.mdreferences/official-links.mdreferences/realtime-transports.mdscripts/check-versions.shtemplates/cloudflare-workers/worker-agent-hono.tstemplates/cloudflare-workers/worker-text-agent.tstemplates/nextjs/api-agent-route.tstemplates/nextjs/api-realtime-route.tstemplates/realtime-agents/realtime-agent-basic.tstemplates/realtime-agents/realtime-handoffs.tstemplates/realtime-agents/realtime-session-browser.tsxtemplates/shared/error-handling.tstemplates/shared/package.jsontemplates/shared/tracing-setup.tstemplates/text-agents/agent-basic.tstemplates/text-agents/agent-guardrails-input.tstemplates/text-agents/agent-guardrails-output.tstemplates/text-agents/agent-handoffs.tsComplete skill for building AI applications with OpenAI Agents SDK (JavaScript/TypeScript), covering text agents, realtime voice agents, multi-agent workflows, and production deployment patterns.
bun add @openai/agents zod@3
bun add @openai/agents-realtime # For voice agents
Set environment variable:
export OPENAI_API_KEY="your-api-key"
import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';
const agent = new Agent({
name: 'Assistant',
instructions: 'You are helpful.',
tools: [tool({
name: 'get_weather',
parameters: z.object({ city: z.string() }),
execute: async ({ city }) => `Weather in ${city}: sunny`,
})],
model: 'gpt-4o-mini',
});
const result = await run(agent, 'What is the weather in SF?');
// Voice agent
const voiceAgent = new RealtimeAgent({
voice: 'alloy',
model: 'gpt-4o-realtime-preview',
});
// Browser session
const session = new RealtimeSession(voiceAgent, {
apiKey: sessionApiKey, // From backend!
transport: 'webrtc',
});
// Multi-agent handoffs
const triageAgent = Agent.create({
handoffs: [billingAgent, techAgent],
});
17 Templates: templates/ directory has production-ready examples for all patterns.
Error: Type errors with tool parameters even when structurally compatible.
Workaround: Define schemas inline.
// ❌ Can cause type errors
parameters: mySchema
// ✅ Works reliably
parameters: z.object({ field: z.string() })
Source: GitHub #188
Error: "No existing trace found" with MCP servers.
Workaround:
import { initializeTracing } from '@openai/agents/tracing';
await initializeTracing();
Source: GitHub #580
Error: Agent loops infinitely.
Solution: Increase maxTurns or improve instructions:
const result = await run(agent, input, {
maxTurns: 20,
});
// Or improve instructions
instructions: `After using tools, provide a final answer.
Do not loop endlessly.`
All 9 Errors: Load references/common-errors.md for complete error catalog with workarounds.
Load reference files when working on specific aspects of agent development:
references/agent-patterns.md)Load when:
references/common-errors.md)Load when:
references/realtime-transports.md)Load when:
references/cloudflare-integration.md)Load when:
references/official-links.md)Load when:
Agents: LLMs equipped with instructions and tools.
Tools: Functions with Zod schemas that agents can call automatically.
Handoffs: Multi-agent delegation where agents route tasks to specialists.
Guardrails: Input/output validation for safety (content filtering, PII detection).
Structured Outputs: Type-safe responses using Zod schemas.
Streaming: Real-time event streaming for progressive responses.
Human-in-the-Loop: Require approval for specific tool executions (requiresApproval: true).
For detailed examples, see templates in templates/text-agents/ and templates/realtime-agents/.
// Basic
const result = await run(agent, 'Your question');
// Streaming
const stream = await run(agent, input, { stream: true });
// Structured output
const agent = new Agent({
outputType: z.object({ sentiment: z.enum([...]), confidence: z.number() }),
});
Templates: templates/text-agents/ (8 templates)
const voiceAgent = new RealtimeAgent({
voice: 'alloy', // alloy, echo, fable, onyx, nova, shimmer
model: 'gpt-4o-realtime-preview',
});
const session = new RealtimeSession(voiceAgent, {
apiKey: sessionApiKey,
transport: 'webrtc', // or 'websocket'
});
Voice handoff constraints: Cannot change voice/model during handoff.
Templates: templates/realtime-agents/ (3 templates) | Details: references/realtime-transports.md
export default {
async fetch(request: Request, env: Env) {
const { message } = await request.json();
process.env.OPENAI_API_KEY = env.OPENAI_API_KEY;
const agent = new Agent({
name: 'Assistant',
instructions: 'Be helpful and concise',
model: 'gpt-4o-mini',
});
const result = await run(agent, message, { maxTurns: 5 });
return new Response(JSON.stringify({
response: result.finalOutput,
tokens: result.usage.totalTokens,
}));
},
};
Limitations: No realtime voice, CPU time limits (30s max), memory constraints (128MB).
Templates: templates/cloudflare-workers/ (2 templates)
Details: Load references/cloudflare-integration.md for complete Workers guide.
// app/api/agent/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { Agent, run } from '@openai/agents';
export async function POST(request: NextRequest) {
const { message } = await request.json();
const agent = new Agent({ /* ... */ });
const result = await run(agent, message);
return NextResponse.json({ response: result.finalOutput });
}
Templates: templates/nextjs/ (2 templates)
// Input/output guardrails
const agent = new Agent({
inputGuardrails: [homeworkDetectorGuardrail],
outputGuardrails: [piiFilterGuardrail],
});
// Human approval
const tool = tool({
requiresApproval: true,
execute: async ({ amount }) => `Refunded $${amount}`,
});
// Handle approval loop
while (result.interruption?.type === 'tool_approval') {
result = (await promptUser(result.interruption))
? await result.state.approve(result.interruption)
: await result.state.reject(result.interruption);
}
Templates: templates/text-agents/agent-guardrails-*.ts, agent-human-approval.ts
LLM-Based: Agent decides routing autonomously. Use for adaptive workflows.
Code-Based: Explicit control flow. Use for predictable, deterministic workflows.
Parallel: Run multiple agents concurrently. Use for independent tasks.
Agents as Tools: Wrap agents as tools for manager LLM. Use for specialist delegation.
Details: Load references/agent-patterns.md for comprehensive orchestration strategies with examples.
Template: templates/text-agents/agent-parallel.ts
process.env.DEBUG = '@openai/agents:*';
const result = await run(agent, input);
console.log('Tokens:', result.usage.totalTokens, 'Turns:', result.history.length);
Template: templates/shared/tracing-setup.ts
OPENAI_API_KEY as environment secretmaxTurns to prevent runaway costsgpt-4o-mini where possible for cost efficiency✅ Use when:
❌ Don't use when:
openai-api skill instead)Estimated Savings: ~60%
| Task | Without Skill | With Skill | Savings |
|---|---|---|---|
| Multi-agent setup | ~12k tokens | ~5k tokens | 58% |
| Voice agent | ~10k tokens | ~4k tokens | 60% |
| Error debugging | ~8k tokens | ~3k tokens | 63% |
| Average | ~10k | ~4k | ~60% |
Errors Prevented: 9 documented issues = 100% error prevention
Text Agents (8):
agent-basic.ts - Simple agent with toolsagent-handoffs.ts - Multi-agent triageagent-structured-output.ts - Zod schemasagent-streaming.ts - Real-time eventsagent-guardrails-input.ts - Input validationagent-guardrails-output.ts - Output filteringagent-human-approval.ts - HITL patternagent-parallel.ts - Concurrent executionRealtime Agents (3):
9. realtime-agent-basic.ts - Voice setup
10. realtime-session-browser.tsx - React client
11. realtime-handoffs.ts - Voice delegation
Framework Integration (4):
12. worker-text-agent.ts - Cloudflare Workers
13. worker-agent-hono.ts - Hono framework
14. api-agent-route.ts - Next.js API
15. api-realtime-route.ts - Next.js voice
Utilities (2):
16. error-handling.ts - Comprehensive errors
17. tracing-setup.ts - Debugging
agent-patterns.md - Orchestration strategies (LLM vs code, parallel, agents-as-tools)common-errors.md - All 9 errors with workarounds and sourcesrealtime-transports.md - WebRTC vs WebSocket comparison, latency, debuggingcloudflare-integration.md - Workers setup, limitations, performance, costsofficial-links.md - Documentation, GitHub, npm, community resourcesVersion: SDK v0.3.3 Last Verified: 2025-11-21 Skill Author: Claude Skills Maintainers Production Tested: Yes
Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.
Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply.
Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work to avoid copyright violations.