From anthropic-pack
Create a minimal working Anthropic Claude Messages API example. Use when starting a new Claude integration, testing your setup, or learning basic Messages API patterns for text, vision, and streaming. Trigger with phrases like "anthropic hello world", "claude api example", "anthropic quick start", "simple claude code", "first messages api call".
npx claudepluginhub flight505/skill-forge --plugin anthropic-pack
Three minimal examples covering the Claude Messages API core surfaces: basic text completion, vision (image analysis), and streaming responses.
Prerequisites: anth-install-auth setup; ANTHROPIC_API_KEY in environment; the anthropic package (Python) or Node.js 18+ with @anthropic-ai/sdk.

import anthropic
client = anthropic.Anthropic()
message = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[
{"role": "user", "content": "Explain quantum computing in 3 sentences."}
]
)
# Response structure
print(message.content[0].text) # The actual text response
print(f"ID: {message.id}") # msg_01XFDUDYJgAACzvnptvVoYEL
print(f"Model: {message.model}") # claude-sonnet-4-20250514
print(f"Stop: {message.stop_reason}")  # end_turn
print(f"Usage: {message.usage.input_tokens}in / {message.usage.output_tokens}out")
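Indexing message.content[0] assumes a single text block. A more defensive extraction, sketched here with plain dicts standing in for the SDK's typed block objects:

```python
# Defensive text extraction: message.content is a list of content
# blocks, and responses that use tools can contain more than one.
# Plain dicts stand in for the SDK's typed block objects here.
def message_text(content_blocks):
    """Concatenate the text of every text block, skipping the rest."""
    return "".join(b["text"] for b in content_blocks if b.get("type") == "text")
```

With the real SDK you would read `block.type` and `block.text` attributes instead of dict keys; the filtering logic is the same.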
import Anthropic from '@anthropic-ai/sdk';
import * as fs from 'fs';
const client = new Anthropic();
// From file (base64)
const imageData = fs.readFileSync('chart.png').toString('base64');
const message = await client.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/png',
data: imageData,
},
},
{ type: 'text', text: 'Describe what this chart shows.' },
],
}],
});
console.log(message.content[0].type === 'text' ? message.content[0].text : '');
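A Python counterpart of the base64 image block above, sketched with the standard library only. The supported media types listed here follow the vision docs; treat them as an assumption to verify:

```python
import base64
import mimetypes
from pathlib import Path

# Media types the vision docs list as supported (assumption: verify
# against the current documentation before relying on this set).
SUPPORTED = {"image/png", "image/jpeg", "image/gif", "image/webp"}

def image_block(path):
    """Build the {"type": "image", "source": {...}} content block for a file."""
    media_type, _ = mimetypes.guess_type(path)
    if media_type not in SUPPORTED:
        raise ValueError(f"unsupported image type: {media_type}")
    data = base64.standard_b64encode(Path(path).read_bytes()).decode()
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }
```

The returned dict drops straight into a messages content array alongside a text block, mirroring the TypeScript example above.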
import anthropic
client = anthropic.Anthropic()
with client.messages.stream(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[{"role": "user", "content": "Write a haiku about APIs."}]
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
# Get final message with full metadata
final = stream.get_final_message()
print(f"\nTokens used: {final.usage.input_tokens}+{final.usage.output_tokens}")
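Under the hood, streaming responses arrive as Server-Sent Events. A minimal, stdlib-only sketch of pulling text deltas out of an SSE line stream (event and delta type names follow the public streaming docs):

```python
import json

def text_deltas(sse_lines):
    """Yield text from content_block_delta events in an SSE line stream."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip "event:" lines, pings, and blank separators
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                yield delta["text"]
```

The SDK's `stream.text_stream` shown above does this filtering for you; parse raw SSE only if you are calling the HTTP API directly.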
| Error | HTTP Code | Cause | Solution |
|---|---|---|---|
| authentication_error | 401 | Invalid API key | Check ANTHROPIC_API_KEY |
| invalid_request_error | 400 | Bad params (e.g., empty messages) | Validate request body |
| rate_limit_error | 429 | Too many requests | Implement backoff (see anth-rate-limits) |
| overloaded_error | 529 | API temporarily overloaded | Retry after 30-60s |
| api_error | 500 | Server error | Retry with exponential backoff |
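The retry guidance in the table can be sketched as a generic backoff loop. This is illustrative only: the official SDKs already retry retryable statuses for you, and the retryable set below is taken from the table above.

```python
import random
import time

# Status codes worth retrying, per the error table above.
RETRYABLE = {429, 500, 529}

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Full-jitter exponential backoff: uniform over [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_retries(send, max_attempts=5, base=1.0):
    """Call send() (returning (status, body)) until a non-retryable status or attempts run out."""
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        time.sleep(backoff_delay(attempt, base=base))
    return status, body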
| Parameter | Required | Description |
|---|---|---|
| model | Yes | Model ID: claude-sonnet-4-20250514, claude-haiku-4-20250514, claude-opus-4-20250514 |
| max_tokens | Yes | Maximum output tokens (model-dependent max) |
| messages | Yes | Array of {role, content} objects |
| system | No | System prompt (string or content blocks) |
| temperature | No | 0.0-1.0, default 1.0 |
| top_p | No | Nucleus sampling (use temperature OR top_p) |
| stop_sequences | No | Array of strings that stop generation |
| stream | No | Enable SSE streaming |
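For reference, the parameters above map onto a raw HTTP request like this stdlib-only sketch. Endpoint and header names follow the public Messages API docs; the request is built but not sent:

```python
import json
import os
import urllib.request

# Raw shape of a Messages API call, without an SDK. Parameter names
# in the body match the table above; required fields first.
body = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Say hello."}],
    "system": "Respond in one sentence.",  # optional
    "temperature": 0.7,                    # optional, 0.0-1.0
    "stop_sequences": ["END"],             # optional
}
req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps(body).encode(),
    headers={
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:  # requires a valid API key
#     print(json.load(resp)["content"][0]["text"])
```

In practice, prefer the SDKs shown earlier; the raw form is mainly useful for debugging or for environments where an SDK is unavailable.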
Proceed to anth-local-dev-loop for development workflow setup.