From anthropic-pack
Migrate to Claude API from OpenAI, Gemini, or other LLM providers. Use when switching from GPT-4 to Claude, migrating from Text Completions, or building a multi-provider abstraction layer. Trigger with phrases like "migrate to claude", "openai to anthropic", "switch from gpt to claude", "multi-provider llm".
Install with:

```bash
npx claudepluginhub flight505/skill-forge --plugin anthropic-pack
```
Migration strategies for switching to Claude from OpenAI, Google, or other LLM providers, including API mapping, prompt translation, and multi-provider abstraction.
| OpenAI | Anthropic | Notes |
|---|---|---|
| `client.chat.completions.create()` | `client.messages.create()` | Different response shape |
| `model: "gpt-4"` | `model: "claude-sonnet-4-20250514"` | Different model IDs |
| `messages: [{role, content}]` | `messages: [{role, content}]` | Same format |
| `functions` / `tools` | `tools` | Similar, but different schema key names |
| `function_call` | `tool_choice` | Different naming |
| `response.choices[0].message.content` | `response.content[0].text` | Different access path |
| `stream: true` → yields chunks | `stream: true` → SSE events | Different event format |
| System message in `messages[]` | `system` parameter (separate) | Claude separates the system prompt |
| `n` (multiple completions) | Not supported | Use multiple requests |
| `logprobs` | Not supported | N/A |
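Because `n` has no Anthropic equivalent, multiple samples mean multiple requests. A minimal sketch, assuming the same `anthropic` client used in the examples below:

```python
import anthropic

client = anthropic.Anthropic()

# Emulate OpenAI's n=3 by issuing three separate requests; a non-zero
# temperature keeps the samples from being identical.
completions = [
    client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        temperature=0.7,
        messages=[{"role": "user", "content": "Write a tagline for a coffee shop"}],
    ).content[0].text
    for _ in range(3)
]
```

If latency matters, the SDK's async client (`anthropic.AsyncAnthropic`) lets you issue these requests concurrently.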
```python
# === OpenAI ===
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello"}
    ],
    max_tokens=1024,
    temperature=0.7
)
text = response.choices[0].message.content

# === Anthropic ===
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are helpful.",  # System prompt is separate
    messages=[
        {"role": "user", "content": "Hello"}
    ],
    max_tokens=1024,  # Required (not optional)
    temperature=0.7
)
text = response.content[0].text
```
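Streaming changes shape too: OpenAI yields chunks you read a `delta` from, while the Anthropic SDK wraps the SSE events in a streaming helper. A minimal sketch, assuming the same two clients as above:

```python
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

# OpenAI: iterate chunks and read the content delta
stream = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")

# Anthropic: messages.stream() wraps the SSE events and exposes text_stream
with anthropic_client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="")
```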
```python
# OpenAI tools format
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
    }
}]

# Anthropic tools format (flatter structure)
anthropic_tools = [{
    "name": "get_weather",
    "description": "Get weather for a city",  # Required in Anthropic
    "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}}
}]
```
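If you already have OpenAI-style tool definitions, the mapping is mechanical. A sketch; `openai_tool_to_anthropic` is a hypothetical helper for illustration, not part of either SDK:

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Map an OpenAI-style tool definition onto Anthropic's flatter schema."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        # Anthropic expects a description; fall back to an empty string if missing
        "description": fn.get("description", ""),
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }

anthropic_tools = [openai_tool_to_anthropic(t) for t in openai_tools]
```

Tool results differ as well: OpenAI expects a follow-up message with `role: "tool"`, while Anthropic expects a `tool_result` content block inside a `user` message.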
```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, system: str = "", **kwargs) -> str: ...


class AnthropicProvider(LLMProvider):
    def __init__(self):
        import anthropic
        self.client = anthropic.Anthropic()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        msg = self.client.messages.create(
            model=kwargs.get("model", "claude-sonnet-4-20250514"),
            max_tokens=kwargs.get("max_tokens", 1024),
            system=system,
            messages=[{"role": "user", "content": prompt}]
        )
        return msg.content[0].text


class OpenAIProvider(LLMProvider):
    def __init__(self):
        from openai import OpenAI
        self.client = OpenAI()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        resp = self.client.chat.completions.create(
            model=kwargs.get("model", "gpt-4"),
            messages=messages,
            max_tokens=kwargs.get("max_tokens", 1024)
        )
        return resp.choices[0].message.content
```
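A sketch of how the abstraction might be wired up at runtime; the `LLM_PROVIDER` environment variable name is an assumption, not part of the pack:

```python
import os

# Pick a provider from configuration; default to Anthropic.
providers = {"anthropic": AnthropicProvider, "openai": OpenAIProvider}
provider = providers[os.environ.get("LLM_PROVIDER", "anthropic")]()

answer = provider.complete(
    "Summarize this changelog entry in one sentence.",
    system="You are a concise technical writer.",
    max_tokens=256,
)
print(answer)
```

Callers depend only on `LLMProvider.complete()`, so switching providers becomes a configuration change rather than a code change.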
When migrating:
- Move the system message out of `messages[]` and into the separate `system` parameter.
- Update the response access path (`.choices[0].message.content` → `.content[0].text`).
- Set `max_tokens` explicitly (required in Anthropic, optional in OpenAI).

For advanced debugging, see anth-advanced-troubleshooting.