From anthropic-pack
Generates minimal Anthropic Claude Messages API examples in Python and TypeScript for text, vision, and streaming. Use for new integrations, setup testing, or basic API learning.
```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin anthropic-pack
```
Three minimal examples covering the Claude Messages API core surfaces: basic text completion, vision (image analysis), and streaming responses.
Related skills in anthropic-pack:

- Provides Claude API patterns for Python/TS: messages, streaming, tools, vision, caching, agents. Activates on anthropic/@anthropic-ai/sdk imports or API queries.
- Chats with Anthropic Claude models (Opus, Sonnet, Haiku) via API. Supports 200K token contexts, vision, and tool use for code analysis or document summarization.
- Installs Anthropic Claude SDK and configures API key authentication for Python and TypeScript. Verifies setup with test messages to Claude models.
Prerequisites:

- anth-install-auth setup
- `ANTHROPIC_API_KEY` in environment
- `anthropic` package (Python) or Node.js 18+ with `@anthropic-ai/sdk`

Example 1: Basic text completion (Python)

```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain quantum computing in 3 sentences."}
    ],
)

# Response structure
print(message.content[0].text)         # The actual text response
print(f"ID: {message.id}")             # msg_01XFDUDYJgAACzvnptvVoYEL
print(f"Model: {message.model}")       # claude-sonnet-4-20250514
print(f"Stop: {message.stop_reason}")  # end_turn
print(f"Usage: {message.usage.input_tokens} in / {message.usage.output_tokens} out")
```
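The Messages API is stateless, so a multi-turn conversation is built by replaying all prior turns in the `messages` array. A minimal sketch of that bookkeeping (the assistant reply here is a simulated stand-in for a real response, not API output):

```python
# Multi-turn: the API is stateless, so replay prior turns on every request.
conversation = [
    {"role": "user", "content": "Explain quantum computing in 3 sentences."}
]

# After a real call, append the assistant's reply; this text is a placeholder
# standing in for message.content[0].text.
assistant_text = "Quantum computing uses qubits that can exist in superposition..."
conversation.append({"role": "assistant", "content": assistant_text})

# The follow-up turn reuses the full history as the messages argument.
conversation.append({"role": "user", "content": "Now explain it to a 10-year-old."})

print([turn["role"] for turn in conversation])  # ['user', 'assistant', 'user']
```

Passing `conversation` as `messages` in the next `client.messages.create()` call gives the model the full context of the exchange.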
Example 2: Vision (TypeScript)

```typescript
import Anthropic from '@anthropic-ai/sdk';
import * as fs from 'fs';

const client = new Anthropic();

// From file (base64)
const imageData = fs.readFileSync('chart.png').toString('base64');

const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{
    role: 'user',
    content: [
      {
        type: 'image',
        source: {
          type: 'base64',
          media_type: 'image/png',
          data: imageData,
        },
      },
      { type: 'text', text: 'Describe what this chart shows.' },
    ],
  }],
});

console.log(message.content[0].type === 'text' ? message.content[0].text : '');
```
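The `media_type` must match the actual image format. A small Python sketch of a helper (the `image_block` function and the placeholder bytes are illustrative, not part of the SDK) that derives the media type from the file extension and builds the same content block:

```python
import base64
from pathlib import Path

# Image media types accepted by the Messages API.
MEDIA_TYPES = {
    ".png": "image/png",
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".gif": "image/gif",
    ".webp": "image/webp",
}

def image_block(path: str, data: bytes) -> dict:
    """Build a base64 image content block (hypothetical helper)."""
    media_type = MEDIA_TYPES[Path(path).suffix.lower()]
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(data).decode("utf-8"),
        },
    }

# Placeholder bytes stand in for a real file's contents.
block = image_block("chart.png", b"\x89PNG placeholder")
print(block["source"]["media_type"])  # image/png
```

The resulting dict slots directly into the `content` array alongside a `{"type": "text", ...}` block, mirroring the TypeScript example above.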
Example 3: Streaming (Python)

```python
import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

    # Get final message with full metadata
    final = stream.get_final_message()

print(f"\nTokens used: {final.usage.input_tokens} + {final.usage.output_tokens}")
```
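Streaming does not change billing: the final message's usage counts can be turned into a cost estimate. A sketch of the arithmetic, where the per-million-token prices are placeholder assumptions to be replaced with the current published rates:

```python
# Hypothetical per-million-token prices -- substitute current published rates.
INPUT_PRICE_PER_MTOK = 3.00
OUTPUT_PRICE_PER_MTOK = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost estimate from a message's usage counts."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_MTOK
    )

# e.g. usage from get_final_message(): 12 input tokens, 40 output tokens
print(round(estimate_cost(12, 40), 6))  # 0.000636
```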
Common errors:

| Error | HTTP Code | Cause | Solution |
|---|---|---|---|
| `authentication_error` | 401 | Invalid API key | Check `ANTHROPIC_API_KEY` |
| `invalid_request_error` | 400 | Bad params (e.g., empty `messages`) | Validate request body |
| `rate_limit_error` | 429 | Too many requests | Implement backoff (see anth-rate-limits) |
| `overloaded_error` | 529 | API temporarily overloaded | Retry after 30-60s |
| `api_error` | 500 | Server error | Retry with exponential backoff |
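For the 429/500/529 rows, a simple exponential-backoff wrapper around the call is usually enough. This sketch is SDK-independent and retries any callable; the flaky-call simulation below is illustrative (in real code you would wrap `client.messages.create` and catch the SDK's API error types):

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() on exception with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky call: fails twice (like a 529), then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("overloaded_error")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```

Note that the official SDK clients also apply some automatic retries of their own, so an application-level wrapper like this is a second line of defense, not a replacement.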
Request parameters:

| Parameter | Required | Description |
|---|---|---|
| `model` | Yes | Model ID: `claude-sonnet-4-20250514`, `claude-haiku-4-20250514`, `claude-opus-4-20250514` |
| `max_tokens` | Yes | Maximum output tokens (model-dependent max) |
| `messages` | Yes | Array of `{role, content}` objects |
| `system` | No | System prompt (string or content blocks) |
| `temperature` | No | 0.0-1.0, default 1.0 |
| `top_p` | No | Nucleus sampling (use `temperature` OR `top_p`) |
| `stop_sequences` | No | Array of strings that stop generation |
| `stream` | No | Enable SSE streaming |
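The optional parameters ride alongside the three required ones as keyword arguments. A sketch assembling them as a dict (the prompt, system text, and values are illustrative) that could be splatted into `client.messages.create(**params)`:

```python
params = {
    # Required
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "List three uses of the letter Q."}],
    # Optional
    "system": "You are a terse assistant. Answer in bullet points.",
    "temperature": 0.2,                # lower = more deterministic
    "stop_sequences": ["\n\nHuman:"],  # generation halts at these strings
}

# client.messages.create(**params)  # requires a configured client and API key
required = {"model", "max_tokens", "messages"}
print(required.issubset(params))  # True
```

Remember to set `temperature` or `top_p`, not both, per the table above.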
Proceed to anth-local-dev-loop for development workflow setup.