**Status**: Production Ready | **API Launch**: March 2025 | **SDK**: openai@5.19.1+
Invokes the OpenAI Responses API for stateful conversations with built-in tools.
Install via: npx claudepluginhub secondsky/claude-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files:
- references/built-in-tools-guide.md
- references/mcp-integration-guide.md
- references/migration-guide.md
- references/reasoning-preservation.md
- references/responses-vs-chat-completions.md
- references/setup-guide.md
- references/stateful-conversations.md
- references/top-errors.md
- scripts/check-versions.sh
- templates/background-mode.ts
- templates/basic-response.ts
- templates/cloudflare-worker.ts
- templates/code-interpreter.ts
- templates/file-search.ts
- templates/image-generation.ts
- templates/mcp-integration.ts
- templates/package.json
- templates/stateful-conversation.ts
- templates/web-search.ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'What are the 5 Ds of dodgeball?',
});

console.log(response.output_text);
const response = await fetch('https://api.openai.com/v1/responses', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5',
    input: 'Hello, world!',
  }),
});

const data = await response.json();

// output_text is an SDK convenience property; with raw fetch,
// extract the text from the output array instead
const text = data.output
  .filter((item) => item.type === 'message')
  .flatMap((item) => item.content)
  .filter((part) => part.type === 'output_text')
  .map((part) => part.text)
  .join('');
console.log(text);
Load references/setup-guide.md for complete setup with stateful conversations and built-in tools.
The Responses API (/v1/responses) is OpenAI's unified interface for agentic applications, launched in March 2025. Key innovation: reasoning state is preserved across turns (Chat Completions discards it), improving multi-turn performance by roughly 5% on TAU-bench.
Why Use Responses Over Chat Completions? Automatic state management, preserved reasoning, server-side tools, 40-80% better cache utilization, and built-in MCP support.
Load references/responses-vs-chat-completions.md for complete comparison and decision guide.
Responses return an array of output items; check each item's output.type (message, reasoning, function_call) when processing them. Load references/setup-guide.md for complete rules and best practices.
// Create a conversation, then reference it on every turn
const conv = await openai.conversations.create();

// First turn
const response1 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'My favorite color is blue.',
});

// Second turn - model remembers
const response2 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'What is my favorite color?',
});
// Output: "Your favorite color is blue."
Load: references/stateful-conversations.md + templates/stateful-conversation.ts
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Search the web for latest AI news.',
  tools: [{ type: 'web_search' }],
});
Load: references/built-in-tools-guide.md + templates/web-search.ts
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Calculate the sum of squares from 1 to 100.',
  tools: [{ type: 'code_interpreter', container: { type: 'auto' } }],
});
Load: references/built-in-tools-guide.md + templates/code-interpreter.ts
import fs from 'node:fs';

// Upload file
const file = await openai.files.create({
  file: fs.createReadStream('document.pdf'),
  purpose: 'user_data',
});

// Index it in a vector store, then search it
const vectorStore = await openai.vectorStores.create({
  file_ids: [file.id],
});

const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Summarize key points from the uploaded document.',
  tools: [
    {
      type: 'file_search',
      vector_store_ids: [vectorStore.id],
    },
  ],
});
Load: references/built-in-tools-guide.md + templates/file-search.ts
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Get weather for San Francisco.',
  tools: [
    {
      type: 'mcp',
      server_label: 'weather',
      server_url: 'https://weather-mcp.example.com',
      require_approval: 'never',
    },
  ],
});
Load: references/mcp-integration-guide.md + templates/mcp-integration.ts
All tools run server-side: Code Interpreter (Python execution), File Search (RAG over uploaded files), Web Search (real-time results), Image Generation (gpt-image-1).
Enable explicitly:
tools: [
  { type: 'code_interpreter', container: { type: 'auto' } },
  { type: 'file_search', vector_store_ids: ['vs_123'] },
  { type: 'web_search' },
  { type: 'image_generation' },
]
Load references/built-in-tools-guide.md for complete guide with examples and configuration options.
Automatic state management with conversation IDs eliminates manual message tracking, preserves reasoning, and improves cache utilization by 40-80%.
// Create conversation
const conv = await openai.conversations.create();

const response1 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'Remember: my name is Alice.',
});

// Continue conversation
const response2 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'What is my name?',
});
Load references/stateful-conversations.md for persistence patterns (Node.js/Redis/KV) and lifecycle management.
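The persistence pattern above can be sketched with an in-memory store. Note that `rememberConversation` and `getConversation` are hypothetical helper names, not SDK functions; in production the Map would be swapped for Redis or Cloudflare KV as the guide suggests.

```typescript
// Minimal sketch of per-user conversation tracking (assumed helper names).
// In production, replace the Map with Redis or Cloudflare KV.
const conversations = new Map<string, string>();

function rememberConversation(userId: string, conversationId: string): void {
  conversations.set(userId, conversationId);
}

// Returns the stored conversation ID, or undefined for a first-time user.
function getConversation(userId: string): string | undefined {
  return conversations.get(userId);
}
```

Look up the stored ID before each request: if present, pass it as `conversation`; if absent, create a new conversation and remember it.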
Quick changes: messages → input, system role → developer, choices[0].message.content → output_text, /v1/chat/completions → /v1/responses.
Before (Chat Completions):
const messages = [{ role: 'user', content: 'Hello' }];

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: messages,
});

messages.push(response.choices[0].message); // Manual history
After (Responses API):
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Hello',
});

const response2 = await openai.responses.create({
  model: 'gpt-5',
  previous_response_id: response.id, // Automatic state
  input: 'Follow-up question',
});
Load references/migration-guide.md for complete migration checklist with tool migration patterns.
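As a sketch of the role rename from the quick-changes list, a hypothetical converter from a Chat Completions message array to a Responses input array might look like this (`toResponsesInput` is an illustrative name, not an SDK helper):

```typescript
// Hypothetical migration helper: Chat Completions messages -> Responses input.
// The only semantic change applied is the 'system' -> 'developer' role rename.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function toResponsesInput(messages: ChatMessage[]) {
  return messages.map((m) => ({
    role: m.role === 'system' ? ('developer' as const) : m.role,
    content: m.content,
  }));
}
```

The resulting array can be passed as `input` in place of a plain string.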
Responses can return multiple output types (message, reasoning, function_call, image). Handle each type or use output_text convenience property.
for (const output of response.output) {
  if (output.type === 'message') {
    console.log('Message:', output.content);
  } else if (output.type === 'reasoning') {
    console.log('Reasoning:', output.summary);
  } else if (output.type === 'function_call') {
    console.log('Function:', output.name, output.arguments);
  }
}

// Or use convenience property
console.log(response.output_text);
Load references/reasoning-preservation.md for reasoning output details and debugging patterns.
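The dispatch above can also be factored into a helper. The item shapes below are simplified assumptions for illustration, not the SDK's full output types:

```typescript
// Simplified output item shapes (assumed; the real SDK types carry more fields).
type OutputItem =
  | { type: 'message'; content: string }
  | { type: 'reasoning'; summary: string }
  | { type: 'function_call'; name: string; arguments: string };

// Turns each output item into a one-line description, e.g. for logging.
function describeOutput(items: OutputItem[]): string[] {
  return items.map((item) => {
    switch (item.type) {
      case 'message':
        return `message: ${item.content}`;
      case 'reasoning':
        return `reasoning: ${item.summary}`;
      case 'function_call':
        return `call: ${item.name}(${item.arguments})`;
    }
  });
}
```

Using a discriminated union like this lets TypeScript verify every output type is handled.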
For long-running tasks (>60 seconds), use background: true to run asynchronously and poll for completion.
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Analyze this 50-page document.',
  background: true,
});

// Poll for completion
const completed = await openai.responses.retrieve(response.id);
Load templates/background-mode.ts for complete polling pattern with exponential backoff.
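The backoff schedule for that polling loop can be computed separately. Here `backoffDelays` is a hypothetical helper sketching the idea, not part of the SDK or the template:

```typescript
// Hypothetical helper: delays (in ms) for successive polling attempts,
// doubling each time and capped so waits never exceed capMs.
function backoffDelays(attempts: number, baseMs = 1000, capMs = 30000): number[] {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs),
  );
}
```

A polling loop would sleep for each delay in turn between `responses.retrieve` calls, stopping once the status is no longer in progress.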
Symptom: Model doesn't remember previous turns.
Cause: Not using conversation IDs or creating new conversation each time.
Solution:
// ✅ GOOD: Reuse conversation ID
const conv = await openai.conversations.create();

const response1 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id, // Same ID
  input: 'Question 1',
});

const response2 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id, // Same ID - remembers previous
  input: 'Question 2',
});
Symptom: MCP tool calls fail or return authorization errors.
Cause: Invalid server URL, or missing/expired authorization token.
Solution:
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Test MCP',
  tools: [
    {
      type: 'mcp',
      server_label: 'stripe',
      server_url: 'https://mcp.stripe.com', // ✅ Full HTTPS URL
      authorization: process.env.STRIPE_OAUTH_TOKEN, // ✅ Valid token
    },
  ],
});
Prevention: Use environment variables for secrets, implement token refresh logic, add retry with exponential backoff.
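The retry-with-backoff prevention step might be sketched as a generic wrapper. `withRetry` is a hypothetical helper, not an SDK feature:

```typescript
// Hypothetical retry wrapper: re-runs a failing async call with exponential
// backoff, rethrowing the last error once attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseMs * 2^attempt before the next try.
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Wrap the `openai.responses.create` call in `withRetry` so transient MCP failures are retried instead of surfacing immediately.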
Symptom: Code Interpreter fails on long-running code.
Cause: Code runs longer than 30 seconds (the standard-mode limit).
Solution:
// ✅ GOOD: Use background mode for long tasks
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Process this massive dataset',
  background: true, // ✅ Up to 10 minutes
  tools: [{ type: 'code_interpreter', container: { type: 'auto' } }],
});

// Poll for results
let result = await openai.responses.retrieve(response.id);
while (result.status === 'queued' || result.status === 'in_progress') {
  await new Promise(r => setTimeout(r, 5000));
  result = await openai.responses.retrieve(response.id);
}
Load references/top-errors.md for all 8 errors with detailed solutions and prevention strategies.
When to load each reference:
- references/setup-guide.md - setup and production checklist
- references/responses-vs-chat-completions.md - API comparison and decision guide
- references/migration-guide.md - migrating from Chat Completions
- references/built-in-tools-guide.md - configuring built-in tools
- references/mcp-integration-guide.md - connecting MCP servers
- references/stateful-conversations.md - conversation state and persistence
- references/reasoning-preservation.md - reasoning outputs and debugging
- references/top-errors.md - error symptoms and fixes

Before deploying:
Load references/setup-guide.md for complete production checklist with platform-specific considerations.
Questions? Issues?
- references/top-errors.md for error solutions
- references/setup-guide.md for complete setup
- references/migration-guide.md for Chat Completions migration
- templates/ for working examples