From anthropic-pack
Guides migration from OpenAI, Gemini, or other LLMs to Anthropic Claude API with mappings, Python code examples, tool use translation, and multi-provider abstraction.
Install with:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin anthropic-pack
```
Migration strategies for switching to Claude from OpenAI, Google, or other LLM providers, including API mapping, prompt translation, and multi-provider abstraction.
Related skills in this pack:

- Migrates Python/TypeScript code from direct OpenAI/Anthropic APIs to OpenRouter: updates base_url, api_key, headers, model IDs, and response parsing.
- Migrates OpenAI, Anthropic, or other LLM SDK code to Groq with model mappings, API diffs, and provider abstraction for zero-downtime traffic shifting.
- Upgrades the Anthropic SDK in Python/TypeScript and migrates from the Text Completions to the Messages API, plus new features like tools, streaming, and batches.
OpenAI → Anthropic API mapping:
| OpenAI | Anthropic | Notes |
|---|---|---|
| `client.chat.completions.create()` | `client.messages.create()` | Different response shape |
| `model: "gpt-4"` | `model: "claude-sonnet-4-20250514"` | Different model IDs |
| `messages: [{role, content}]` | `messages: [{role, content}]` | Same format |
| `functions` / `tools` | `tools` | Similar, but different schema key names |
| `function_call` | `tool_choice` | Different naming |
| `response.choices[0].message.content` | `response.content[0].text` | Different access path |
| `stream: true` → yields chunks | `stream: true` → SSE events | Different event format |
| System message in `messages[]` | `system` parameter (separate) | Claude separates the system prompt |
| `n` (multiple completions) | Not supported | Use multiple requests |
| `logprobs` | Not supported | N/A |
```python
# === OpenAI ===
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,
    temperature=0.7,
)
text = response.choices[0].message.content
```

```python
# === Anthropic ===
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are helpful.",  # System prompt is separate
    messages=[
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,  # Required (not optional)
    temperature=0.7,
)
text = response.content[0].text
```
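The main mechanical change is pulling the system message out of the messages list. A minimal helper can do this for any OpenAI-style conversation; this is a sketch, and the name `split_system_message` is illustrative rather than part of either SDK:

```python
def split_system_message(messages):
    """Split OpenAI-style messages into (system_text, anthropic_messages).

    OpenAI puts the system prompt inside the messages list; Anthropic
    takes it as a separate `system` parameter on messages.create().
    """
    system_parts = []
    rest = []
    for m in messages:
        if m["role"] == "system":
            system_parts.append(m["content"])
        else:
            rest.append(m)
    # Join multiple system messages, if any, into one system string.
    return "\n\n".join(system_parts), rest
```

The returned tuple maps directly onto the `system=` and `messages=` arguments of `client.messages.create()`.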
```python
# OpenAI tools format
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

# Anthropic tools format — flatter structure
anthropic_tools = [{
    "name": "get_weather",
    "description": "Get weather for a city",  # Required in Anthropic
    "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}},
}]
```
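Because the two schemas differ mostly in nesting, the translation can be done mechanically. A sketch, assuming the OpenAI entry uses the nested `"function"` form shown above (`convert_tool` is a hypothetical helper, not an SDK function):

```python
def convert_tool(openai_tool):
    """Convert one OpenAI tool definition to Anthropic's flatter format."""
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        # Anthropic requires a description; fall back to the name if absent.
        "description": fn.get("description", fn["name"]),
        # OpenAI's "parameters" becomes Anthropic's "input_schema".
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }
```

Applied to `openai_tools[0]` above, this yields a dict matching `anthropic_tools[0]` except for the fallback description.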
```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, system: str = "", **kwargs) -> str: ...

class AnthropicProvider(LLMProvider):
    def __init__(self):
        import anthropic
        self.client = anthropic.Anthropic()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        msg = self.client.messages.create(
            model=kwargs.get("model", "claude-sonnet-4-20250514"),
            max_tokens=kwargs.get("max_tokens", 1024),
            system=system,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

class OpenAIProvider(LLMProvider):
    def __init__(self):
        from openai import OpenAI
        self.client = OpenAI()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        resp = self.client.chat.completions.create(
            model=kwargs.get("model", "gpt-4"),
            messages=messages,
            max_tokens=kwargs.get("max_tokens", 1024),
        )
        return resp.choices[0].message.content
```
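With the abstraction in place, failing over between providers becomes a loop over the interface. The sketch below uses stub providers instead of real API calls so the fallback shape is visible without network access; `FallbackProvider` and both stubs are illustrative, not part of either SDK:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, system: str = "", **kwargs) -> str: ...

class FlakyProvider(LLMProvider):
    """Stub that always fails, standing in for a provider outage."""
    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        raise RuntimeError("provider unavailable")

class EchoProvider(LLMProvider):
    """Stub that echoes the prompt, standing in for a healthy provider."""
    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        return f"echo: {prompt}"

class FallbackProvider(LLMProvider):
    """Try each provider in order, returning the first successful result."""
    def __init__(self, *providers):
        self.providers = providers

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        last_error = None
        for p in self.providers:
            try:
                return p.complete(prompt, system=system, **kwargs)
            except Exception as e:
                last_error = e
        raise RuntimeError("all providers failed") from last_error

llm = FallbackProvider(FlakyProvider(), EchoProvider())
print(llm.complete("Hello"))  # falls through to EchoProvider
```

In production the stubs would be replaced by `AnthropicProvider` and `OpenAIProvider`, giving zero-code-change failover for callers of `complete()`.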
Migration checklist:

- Move the system message from `messages[]` to the `system` parameter
- Update the response access path (`.choices[0].message.content` → `.content[0].text`)
- Make `max_tokens` explicit (required in Anthropic, optional in OpenAI)

For advanced debugging, see anth-advanced-troubleshooting.