> Skill by [ara.so](https://ara.so) — Daily 2026 Skills collection.
OpenClaude is a fork of Claude Code that routes all LLM calls through an OpenAI-compatible shim (openaiShim.ts), letting you use any model that speaks the OpenAI Chat Completions API — GPT-4o, DeepSeek, Gemini via OpenRouter, Ollama, Groq, Mistral, Azure, and more — while keeping every Claude Code tool intact (Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, Agent, MCP, Tasks, LSP, NotebookEdit).
npm install -g @gitlawb/openclaude
# CLI command installed: openclaude
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build
# optionally link globally
npm link
bun run dev # run directly with Bun, no build step
You must set CLAUDE_CODE_USE_OPENAI=1 to enable the shim. Without it, the tool falls back to the Anthropic SDK.
| Variable | Required | Purpose |
|---|---|---|
| CLAUDE_CODE_USE_OPENAI | Yes | Set to 1 to activate the OpenAI provider |
| OPENAI_API_KEY | Yes* | API key (*omit for local Ollama/LM Studio) |
| OPENAI_MODEL | Yes | Model identifier |
| OPENAI_BASE_URL | No | Custom endpoint (default: https://api.openai.com/v1) |
| CODEX_API_KEY | Codex only | ChatGPT/Codex access token |
| CODEX_AUTH_JSON_PATH | Codex only | Path to Codex CLI auth.json |
OPENAI_MODEL takes priority over ANTHROPIC_MODEL if both are set.
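As a minimal illustration of that precedence (the repo's actual resolution logic lives in src/utils/model/model.ts; this sketch is not its code, and the fallback default here is hypothetical):

```typescript
// Illustrative sketch of the precedence described above, not the repo's code.
function resolveModel(env: Record<string, string | undefined>): string {
  // OPENAI_MODEL wins over ANTHROPIC_MODEL when both are set.
  return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL ?? 'gpt-4o';
}
```

So with both variables exported, the OpenAI model identifier is the one sent to the provider.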
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENAI_API_KEY
export OPENAI_MODEL=gpt-4o
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$DEEPSEEK_API_KEY
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENROUTER_API_KEY
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
openclaude
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$GROQ_API_KEY
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$MISTRAL_API_KEY
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$AZURE_OPENAI_KEY
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan # or codexspark for faster loops
# reads ~/.codex/auth.json automatically if present
# or set: export CODEX_API_KEY=$CODEX_TOKEN
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$TOGETHER_API_KEY
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
openclaude
The shim file is src/services/api/openaiShim.ts (724 lines). It duck-types the Anthropic SDK interface so the rest of Claude Code is unaware it's talking to a different provider.
Claude Code Tool System
│
▼
Anthropic SDK interface (duck-typed)
│
▼
openaiShim.ts ← format translation layer
│
▼
OpenAI Chat Completions API
│
▼
Any compatible model
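"Duck-typed" here means the shim hands back an object with the same surface as the Anthropic SDK client, so callers never branch on provider. A stub sketch of that idea (the shapes below are illustrative, not the shim's internals):

```typescript
// Callers only see the Anthropic-style client shape, so they cannot tell
// which provider is actually behind it. Sketch only.
interface AnthropicLikeClient {
  messages: {
    stream(params: {
      model: string;
      max_tokens: number;
      messages: Array<{ role: string; content: string }>;
    }): AsyncIterable<{ type: string; delta?: { text?: string } }>;
  };
}

function makeStubClient(): AnthropicLikeClient {
  return {
    messages: {
      // The real shim would call the OpenAI Chat Completions endpoint here
      // and translate each chunk into an Anthropic-format event.
      async *stream() {
        yield { type: 'content_block_delta', delta: { text: 'ok' } };
      },
    },
  };
}
```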
The shim translates between the two wire formats:
- messages array
- tool_use / tool_result blocks → OpenAI function_calls / tool messages
- system role messages

src/services/api/openaiShim.ts ← NEW: the shim (724 lines)
src/services/api/client.ts ← routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts ← added 'openai' provider type
src/utils/model/configs.ts ← added openai model mappings
src/utils/model/model.ts ← respects OPENAI_MODEL for defaults
src/utils/auth.ts ← recognizes OpenAI as valid 3rd-party provider
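As one concrete example of the translation work, converting an Anthropic-format tool definition into the OpenAI function-calling shape looks roughly like this (a sketch of the two public API formats, not the shim's actual implementation):

```typescript
// Anthropic tool definitions use input_schema; OpenAI nests the same JSON
// Schema under function.parameters. Sketch only, not openaiShim.ts code.
interface AnthropicTool {
  name: string;
  description?: string;
  input_schema: Record<string, unknown>; // JSON Schema object
}

function toOpenAITool(tool: AnthropicTool) {
  return {
    type: 'function' as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.input_schema, // same schema, different key
    },
  };
}
```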
# Run in dev mode (no build)
bun run dev
# Build distribution
bun run build
# Launch with persisted profile (.openclaude-profile.json)
bun run dev:profile
# Launch with OpenAI profile (requires OPENAI_API_KEY in shell)
bun run dev:openai
# Launch with Ollama profile (localhost:11434, llama3.1:8b default)
bun run dev:ollama
# Launch with Codex profile
bun run dev:codex
# Quick startup sanity check
bun run smoke
# Validate provider env + reachability
bun run doctor:runtime
# Machine-readable runtime diagnostics
bun run doctor:runtime:json
# Persist diagnostics report to reports/doctor-runtime.json
bun run doctor:report
# Full local hardening check (typecheck + smoke + runtime doctor)
bun run hardening:check
# Strict hardening (includes project-wide typecheck)
bun run hardening:strict
Profiles save provider config to .openclaude-profile.json so you don't repeat env exports.
# Auto-detect provider (ollama if running, otherwise openai)
bun run profile:init
# Bootstrap for OpenAI
bun run profile:init -- --provider openai --api-key $OPENAI_API_KEY
# Bootstrap for Ollama with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b
# Bootstrap for Codex
bun run profile:init -- --provider codex --model codexspark
bun run profile:codex
After bootstrapping, run the app via the persisted profile:
bun run dev:profile
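The profile file's schema isn't documented here, so the shape below is a hypothetical reconstruction from the profile:init flags above (provider, model, API key); treat every field name as an assumption:

```typescript
// Hypothetical shape for .openclaude-profile.json, inferred from the
// bootstrap flags. Field names are assumptions, not the real schema.
interface OpenClaudeProfile {
  provider: 'openai' | 'ollama' | 'codex';
  model?: string;   // e.g. 'llama3.1:8b' or 'codexspark'
  apiKey?: string;  // omitted for local Ollama
  baseUrl?: string; // custom endpoint, if any
}

const example: OpenClaudeProfile = {
  provider: 'ollama',
  model: 'llama3.1:8b',
};
```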
If you want to use the shim in your own TypeScript code:
// src/services/api/client.ts pattern — routing to the shim
import Anthropic from '@anthropic-ai/sdk';
import { openaiShim } from './openaiShim.js';
const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1';
const client = useOpenAI
? openaiShim({
apiKey: process.env.OPENAI_API_KEY,
baseURL: process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
model: process.env.OPENAI_MODEL ?? 'gpt-4o',
})
: new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
// Streaming usage pattern (mirrors Anthropic SDK interface)
const stream = await client.messages.stream({
model: process.env.OPENAI_MODEL!,
max_tokens: 32000,
system: 'You are a helpful coding assistant.',
messages: [
{ role: 'user', content: 'Refactor this function for readability.' }
],
tools: myTools, // Anthropic-format tool definitions — shim translates them
});
for await (const event of stream) {
// events arrive in Anthropic format regardless of underlying provider
if (event.type === 'content_block_delta') {
process.stdout.write(event.delta.text ?? '');
}
}
| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Models < 7B | Limited | Limited | Very Fast |
For agentic multi-step tool use, prefer models with strong native function/tool calling (GPT-4o, DeepSeek-V3, Gemini 2.0 Flash).
Slash commands: /commit, /review, /compact, /diff, /doctor

doctor:runtime fails with a placeholder key error:
Error: OPENAI_API_KEY looks like a placeholder (SUA_CHAVE)
Set a real key: export OPENAI_API_KEY=$YOUR_ACTUAL_KEY
Ensure Ollama is running before launching:
ollama serve &
ollama pull llama3.3:70b
bun run dev:ollama
Switch to a model with strong tool calling support (GPT-4o, DeepSeek-V3). Models under 7B parameters often fail at multi-step agentic tool use.
The OPENAI_BASE_URL for Azure must include the deployment path:
https://<resource>.openai.azure.com/openai/deployments/<deployment>/v1
If ~/.codex/auth.json doesn't exist, set the token directly:
export CODEX_API_KEY=$YOUR_CODEX_TOKEN
Or point to a custom auth file:
export CODEX_AUTH_JSON_PATH=/path/to/auth.json
bun run doctor:runtime # human-readable
bun run doctor:runtime:json # machine-readable JSON
bun run doctor:report # saves to reports/doctor-runtime.json