OpenAI REST API integration guide. Use when: making direct HTTP calls to OpenAI API, understanding API structure without SDK, debugging API requests, learning request/response formats, handling errors and rate limits. Covers: authentication, Chat Completions, Embeddings, Images (DALL-E), Audio (Whisper/TTS), error handling, streaming.
Install:
/plugin marketplace add timequity/vibe-coder
/plugin install vibe-coder@vibe-coder

This skill inherits all available tools. When active, it can use any tool Claude has access to.
References:
- references/authentication.md
- references/chat-completions.md
- references/errors.md
- references/models.md

Direct HTTP integration with the OpenAI API. For SDK usage, see the openai-sdk skill.
Base URL: https://api.openai.com/v1
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{...}'
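To confirm the key works before sending real requests, a plain GET against the models endpoint (no body needed) makes a quick smoke test; this assumes OPENAI_API_KEY is exported in the shell:

# List available models (doubles as an auth check)
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"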
Optional headers:
- OpenAI-Organization: org-xxx — for multi-org accounts
- OpenAI-Project: proj-xxx — for project-specific billing (a request sketch using both headers follows the endpoint table below)

| Endpoint | Method | Use Case |
|---|---|---|
| /chat/completions | POST | Text generation, chat |
| /embeddings | POST | Vector embeddings |
| /images/generations | POST | DALL-E image creation |
| /audio/transcriptions | POST | Whisper speech-to-text |
| /audio/speech | POST | TTS text-to-speech |
| /models | GET | List available models |
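A sketch of a request that sends both optional headers; org-xxx and proj-xxx are placeholders, not real IDs:

# Scope the request to a specific organization and project (placeholder IDs)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Organization: org-xxx" \
  -H "OpenAI-Project: proj-xxx" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'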
curl https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
}'
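The same body also accepts optional tuning fields such as temperature and max_tokens; a minimal sketch (values shown are illustrative, not recommendations):

{
  "model": "gpt-4o",
  "messages": [{"role": "user", "content": "Hello!"}],
  "temperature": 0.7,
  "max_tokens": 256
}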
Response structure:
{
"id": "chatcmpl-xxx",
"choices": [{
"message": {"role": "assistant", "content": "Hi! How can I help?"},
"finish_reason": "stop"
}],
"usage": {"prompt_tokens": 10, "completion_tokens": 8, "total_tokens": 18}
}
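When scripting against the raw endpoint, the assistant text lives at choices[0].message.content; a jq one-liner can extract it (assumes jq is installed):

# Send a request and print only the assistant's reply
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}' \
  | jq -r '.choices[0].message.content'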
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "text-embedding-3-small", "input": "Hello world"}'
Add "stream": true to request. Response is SSE:
data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" world"}}]}
data: [DONE]
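A streaming request sketch with curl; -N (--no-buffer) makes chunks print as they arrive rather than after the connection closes:

# Stream tokens as SSE lines; the stream terminates with data: [DONE]
curl -N https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "stream": true,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'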
For the latest and complete docs, fetch:
https://cdn.openai.com/API/docs/txt/llms-api-reference.txt
https://cdn.openai.com/API/docs/txt/llms-guides.txt