OpenAI API via curl. Use this skill for GPT chat completions, DALL-E image generation, Whisper audio transcription, embeddings, and text-to-speech.
```
/plugin marketplace add vm0-ai/api0
/plugin install api0@api0
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Use the OpenAI API via direct curl calls to access GPT models, DALL-E image generation, Whisper transcription, embeddings, and text-to-speech.
Official docs:
https://platform.openai.com/docs/api-reference
Use this skill when you need to chat with GPT models, generate images with DALL-E, transcribe audio with Whisper, create embeddings, or convert text to speech.

Set your API key first:

```bash
export OPENAI_API_KEY="sk-..."
```
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| GPT-4o | $2.50 | $10.00 |
| GPT-4o-mini | $0.15 | $0.60 |
| GPT-4 Turbo | $10.00 | $30.00 |
| text-embedding-3-small | $0.02 | - |
| text-embedding-3-large | $0.13 | - |
Rate limits vary by tier (based on usage history). Check your limits at Platform Settings.
Important: When using `$VAR` in a command that pipes to another command, wrap the command containing `$VAR` in `bash -c '...'`. Due to a Claude Code bug, environment variables are silently cleared when pipes are used directly.

```bash
bash -c 'curl -s "https://api.example.com" -H "Authorization: Bearer $API_KEY"' | jq .
```
All examples below assume you have OPENAI_API_KEY set.
Base URL: https://api.openai.com/v1
Send a simple chat message:

```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello, who are you?"}]}'"'"'' | jq -r '.choices[0].message.content'
```
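Note that API errors come back as a JSON body with an `error` object, so `jq -r '.choices[0].message.content'` just prints `null` on failure. A small sketch that surfaces the error message instead (shown against a sample error payload; in practice you pipe the curl output through the same filter):

```shell
# Sample failure payload; real input is the curl response.
echo '{"error":{"message":"Incorrect API key provided","type":"invalid_request_error"}}' \
  | jq -r 'if .error then "API error: " + .error.message else .choices[0].message.content end'
# prints: API error: Incorrect API key provided
```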
Available models:

- `gpt-4o`: Latest flagship model (128K context)
- `gpt-4o-mini`: Fast and affordable (128K context)
- `gpt-4-turbo`: Previous generation (128K context)
- `gpt-3.5-turbo`: Legacy model (16K context)
- `o1`: Reasoning model for complex tasks
- `o1-mini`: Smaller reasoning model

Use a system message to set behavior:
```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "system", "content": "You are a helpful assistant that responds in JSON format."}, {"role": "user", "content": "List 3 programming languages with their main use cases."}]}'"'"'' | jq -r '.choices[0].message.content'
```
Get real-time token-by-token output:
```bash
curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Write a haiku about programming."}], "stream": true}'
```
Streaming returns Server-Sent Events (SSE) with delta chunks.
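Each SSE line has the form `data: {...}`, and the stream ends with `data: [DONE]`. One way to reassemble the text client-side is sketched below against a sample stream; in practice you pipe the streaming curl output into the same filter:

```shell
# Sample of what the API streams; real input is the curl output above.
printf 'data: {"choices":[{"delta":{"content":"Hello"}}]}\ndata: {"choices":[{"delta":{"content":" world"}}]}\ndata: [DONE]\n' \
  | sed 's/^data: //' \
  | grep -v '^\[DONE\]$' \
  | jq -rj '.choices[0].delta.content // empty'
# prints: Hello world
```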
Force the model to return valid JSON:

```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "system", "content": "Return JSON only."}, {"role": "user", "content": "Give me info about Paris: name, country, population."}], "response_format": {"type": "json_object"}}'"'"'' | jq -r '.choices[0].message.content' | jq .
```
Analyze an image with GPT-4o or GPT-4o-mini:

```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": [{"type": "text", "text": "What is in this image?"}, {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3a/Cat03.jpg/1200px-Cat03.jpg"}}]}], "max_tokens": 300}'"'"'' | jq -r '.choices[0].message.content'
```
Define functions the model can call:

```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}], "tools": [{"type": "function", "function": {"name": "get_weather", "description": "Get current weather for a location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "City name"}}, "required": ["location"]}}}]}'"'"'' | jq '.choices[0].message.tool_calls'
```
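The model returns the function arguments as a JSON-encoded string inside `tool_calls`; a sketch extracting them, shown against a sample response:

```shell
# Sample tool_calls response; real input is the curl output above.
echo '{"choices":[{"message":{"tool_calls":[{"id":"call_abc","type":"function","function":{"name":"get_weather","arguments":"{\"location\": \"Tokyo\"}"}}]}}]}' \
  | jq -r '.choices[0].message.tool_calls[0].function.arguments | fromjson | .location'
# prints: Tokyo
```

You then run the function yourself and send its result back in a follow-up request as a message with `"role": "tool"`.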
Create vector embeddings for text:

```bash
bash -c 'curl -s "https://api.openai.com/v1/embeddings" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "text-embedding-3-small", "input": "The quick brown fox jumps over the lazy dog."}'"'"'' | jq '.data[0].embedding[:5]'
```
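Embeddings are usually compared with cosine similarity. The arithmetic can be sketched in `jq` on two small sample vectors (real embeddings have 1536 or 3072 dimensions, but the formula is the same):

```shell
# cos(a, b) = (a . b) / (|a| * |b|)
echo '{"a":[1,2,3],"b":[4,5,6]}' | jq '
  def dot(x; y): [x, y] | transpose | map(.[0] * .[1]) | add;
  dot(.a; .b) / ((dot(.a; .a) | sqrt) * (dot(.b; .b) | sqrt))'
# ~ 0.9746
```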
Embedding models:

- `text-embedding-3-small`: 1536 dimensions, fastest
- `text-embedding-3-large`: 3072 dimensions, most capable

Create an image from text:
```bash
bash -c 'curl -s "https://api.openai.com/v1/images/generations" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "dall-e-3", "prompt": "A white cat sitting on a windowsill, digital art", "n": 1, "size": "1024x1024"}'"'"'' | jq -r '.data[0].url'
```
Parameters:

- `size`: 1024x1024, 1792x1024, or 1024x1792
- `quality`: standard or hd
- `style`: vivid or natural

Transcribe audio to text:
```bash
bash -c 'curl -s "https://api.openai.com/v1/audio/transcriptions" -H "Authorization: Bearer ${OPENAI_API_KEY}" -F "file=@audio.mp3" -F "model=whisper-1"' | jq -r '.text'
```
Supports: mp3, mp4, mpeg, mpga, m4a, wav, webm (max 25MB).
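Since uploads over 25MB are rejected, it can be worth checking the file size before calling the API. A minimal sketch (`check_whisper_size` is a hypothetical helper name, not part of the API):

```shell
# Return non-zero if the file exceeds Whisper's 25 MB upload limit.
check_whisper_size() {
  max=$((25 * 1024 * 1024))
  size=$(wc -c < "$1") || return 1
  if [ "$size" -gt "$max" ]; then
    echo "File too large for Whisper: $size bytes (limit $max)" >&2
    return 1
  fi
}
```

Usage: `check_whisper_size audio.mp3 && bash -c 'curl -s ...'`.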
Generate audio from text:
```bash
curl -s "https://api.openai.com/v1/audio/speech" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '{"model": "tts-1", "input": "Hello! This is a test of OpenAI text to speech.", "voice": "alloy"}' --output speech.mp3
```
Voices: alloy, echo, fable, onyx, nova, shimmer
Models: tts-1 (fast), tts-1-hd (high quality)
Get all available models:
```bash
bash -c 'curl -s "https://api.openai.com/v1/models" -H "Authorization: Bearer ${OPENAI_API_KEY}"' | jq -r '.data[].id' | sort | head -20
```
Extract usage from a response:

```bash
bash -c 'curl -s "https://api.openai.com/v1/chat/completions" -H "Content-Type: application/json" -H "Authorization: Bearer ${OPENAI_API_KEY}" -d '"'"'{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hi!"}]}'"'"'' | jq '.usage'
```
The response includes:

- `prompt_tokens`: Input token count
- `completion_tokens`: Output token count
- `total_tokens`: Sum of both

Tips:

- Use `gpt-4o-mini` for most tasks, `gpt-4o` for complex reasoning, `o1` for advanced math/coding
- Use `response_format` to force valid JSON output
- `gpt-4o` and `gpt-4o-mini` support image input
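Combined with the pricing table above, the `usage` object gives a per-request cost estimate. A sketch for `gpt-4o-mini` ($0.15 per 1M input tokens, $0.60 per 1M output tokens), shown against a sample usage object:

```shell
# Sample usage object; in practice, pipe the `| jq .usage` output in.
echo '{"prompt_tokens":1200,"completion_tokens":800,"total_tokens":2000}' \
  | jq '(.prompt_tokens * 0.15 + .completion_tokens * 0.60) / 1e6'
# cost in USD (here: 0.00066)
```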