From openrouter-pack
Diagnoses and fixes OpenRouter API errors (401 auth, 402 credits, 429 rate limits, 400 bad request) using a reference table, the response format, and bash diagnostics.
Install:

```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin openrouter-pack
```
OpenRouter returns standard HTTP error codes plus OpenRouter-specific error codes in the response body. The most common: 401 (auth), 402 (credits), 429 (rate limit), 400 (bad request), and 5xx (upstream provider errors). Each error includes a `code` field and a human-readable `message`. This skill covers every common error, its root cause, and the exact fix.
| HTTP | Error Code | Cause | Fix |
|---|---|---|---|
| 400 | `bad_request` | Malformed request body | Validate `messages` array format; ensure model ID includes provider prefix |
| 400 | `invalid_model` | Model ID not found | Check model exists: `curl -s https://openrouter.ai/api/v1/models \| jq '.data[].id'` |
| 400 | `context_length_exceeded` | Prompt + `max_tokens` > model limit | Reduce prompt size or use a larger-context model |
| 400 | `invalid_tool_schema` | Tool definition has unsupported types | Use basic JSON Schema types only (string, number, boolean, object, array) |
| 401 | `invalid_api_key` | Key malformed, revoked, or wrong | Regenerate at openrouter.ai/keys; key must start with `sk-or-v1-` |
| 401 | `missing_api_key` | No Authorization header | Add `Authorization: Bearer sk-or-v1-...` header |
| 402 | `insufficient_credits` | Credit balance is zero | Top up at openrouter.ai/credits |
| 402 | `credit_limit_reached` | Per-key credit limit hit | Increase the key limit in the dashboard or create a new key |
| 403 | `key_disabled` | Key was disabled by an admin | Re-enable in the dashboard or create a new key |
| 408 | `request_timeout` | Model took too long | Reduce `max_tokens`; use streaming; try a faster model |
| 429 | `rate_limit_exceeded` | Too many requests per interval | SDK auto-retries; increase `max_retries`; use multiple keys |
| 502 | `provider_error` | Upstream provider returned an error | Retry with backoff; try a different provider via `provider.order` |
| 503 | `model_unavailable` | Model temporarily offline | Use fallback models; check status.openrouter.ai |
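For the 502/503 rows, a request can name fallback models and a preferred provider order directly in the body. This is a minimal sketch of building such a payload; the `models` and `provider.order` fields follow OpenRouter's request schema, and the specific model and provider names are illustrative:

```python
# Sketch: a chat-completion payload with fallback models and a preferred
# provider order, so 502/503 failures on the primary model fall through
# to the alternatives instead of surfacing as errors.
def build_fallback_payload(prompt: str) -> dict:
    return {
        "model": "openai/gpt-4o-mini",  # primary model
        "models": [                      # tried in order if the primary fails
            "anthropic/claude-3-haiku",
            "meta-llama/llama-3.1-8b-instruct",
        ],
        "provider": {"order": ["OpenAI", "Azure"]},  # preferred upstreams
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_fallback_payload("hi")
```

The payload can be POSTed to `/api/v1/chat/completions` as usual; only the extra fields change.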
```json
{
  "error": {
    "code": 401,
    "message": "Invalid API key. Please check your API key and try again.",
    "metadata": {
      "provider_name": "Anthropic",
      "raw": "..."
    }
  }
}
```
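A response body with this shape can be unpacked in a few lines. A minimal sketch, assuming only the fields shown above (real responses may omit `metadata`):

```python
import json

# Sketch: pull code, message, and upstream provider out of an OpenRouter
# error body shaped like the example above.
def parse_openrouter_error(body: str):
    err = json.loads(body).get("error", {})
    code = err.get("code")
    message = err.get("message", "")
    provider = (err.get("metadata") or {}).get("provider_name")
    return code, message, provider

body = ('{"error": {"code": 401, "message": "Invalid API key.", '
        '"metadata": {"provider_name": "Anthropic"}}}')
code, msg, provider = parse_openrouter_error(body)
```

Here `code` is 401, `msg` is the human-readable text, and `provider` identifies which upstream rejected the call.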
```bash
#!/bin/bash
echo "=== OpenRouter Error Diagnostics ==="

# 1. Test authentication
echo -n "1. Auth: "
AUTH=$(curl -s -o /dev/null -w "%{http_code}" \
  https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $OPENROUTER_API_KEY")
[ "$AUTH" = "200" ] && echo "OK" || echo "FAIL (HTTP $AUTH)"

# 2. Check credit balance (limit is null for keys with no credit limit)
echo -n "2. Credits: "
CREDITS=$(curl -s https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | \
  jq -r 'if .data.limit then (.data.limit - .data.usage | tostring) else "unlimited" end')
echo "$CREDITS remaining"

# 3. Test model availability
echo -n "3. Model: "
MODEL="openai/gpt-4o-mini"
EXISTS=$(curl -s https://openrouter.ai/api/v1/models | \
  jq --arg m "$MODEL" '[.data[] | select(.id == $m)] | length')
[ "$EXISTS" -gt 0 ] && echo "$MODEL available" || echo "$MODEL NOT FOUND"

# 4. Test a minimal request
echo -n "4. Request: "
RESP=$(curl -s -w "\n%{http_code}" \
  https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"openai/gpt-4o-mini","messages":[{"role":"user","content":"hi"}],"max_tokens":1}')
HTTP=$(echo "$RESP" | tail -1)
[ "$HTTP" = "200" ] && echo "OK" || echo "FAIL (HTTP $HTTP)"
```
```python
import os

from openai import (
    OpenAI,
    APIError,
    AuthenticationError,
    RateLimitError,
    BadRequestError,
    APITimeoutError,
)

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    max_retries=3,  # Auto-retry 429 and 5xx
    timeout=30.0,
    default_headers={"HTTP-Referer": "https://my-app.com", "X-Title": "my-app"},
)


def safe_completion(messages, model="openai/gpt-4o-mini", **kwargs):
    """Completion with categorized error handling."""
    try:
        return client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )
    except AuthenticationError as e:
        # 401: Bad or missing API key
        raise SystemExit(f"AUTH ERROR: Check OPENROUTER_API_KEY. {e}")
    except BadRequestError as e:
        # 400: Bad model ID, invalid params, context too long
        if "context_length" in str(e):
            raise ValueError(f"Prompt too long for {model}. Trim or use a larger-context model.")
        raise ValueError(f"Bad request: {e}")
    except RateLimitError:
        # 429: SDK already retried max_retries times
        raise RuntimeError("Rate limited after all retries. Wait or use more API keys.")
    except APITimeoutError:
        # Timeout: model too slow
        raise TimeoutError(f"Model {model} timed out. Try streaming or a faster model.")
    except APIError as e:
        # 402, 5xx, other (connection errors carry no status_code)
        status = getattr(e, "status_code", None)
        if status == 402:
            raise RuntimeError("Insufficient credits. Top up at openrouter.ai/credits")
        raise RuntimeError(f"API error {status}: {e}")
```
```python
import requests


def validate_before_send(model: str, messages: list, max_tokens: int = 1024):
    """Pre-flight validation to catch common mistakes before the API call."""
    errors = []

    # Check model exists
    models = requests.get("https://openrouter.ai/api/v1/models", timeout=10).json()["data"]
    model_ids = {m["id"] for m in models}
    if model not in model_ids:
        # Try to suggest a correct ID
        prefix = model.split("/")[0] if "/" in model else ""
        suggestions = [m for m in model_ids if prefix and m.startswith(prefix)][:3]
        errors.append(f"Model '{model}' not found. Did you mean: {suggestions}")

    # Check messages format
    if not messages or not isinstance(messages, list):
        errors.append("messages must be a non-empty list")
    else:
        for msg in messages:
            if "role" not in msg or "content" not in msg:
                errors.append(f"Each message needs 'role' and 'content': {msg}")

    # Estimate context usage (~4 chars per token)
    total_chars = sum(len(str(m.get("content", ""))) for m in messages)
    est_tokens = total_chars // 4
    model_info = next((m for m in models if m["id"] == model), None)
    if model_info:
        ctx_limit = model_info["context_length"]
        if est_tokens + max_tokens > ctx_limit:
            errors.append(f"Estimated {est_tokens} + {max_tokens} max_tokens > {ctx_limit} context limit")

    if errors:
        raise ValueError("Pre-flight validation failed:\n" + "\n".join(f"  - {e}" for e in errors))
```
| Scenario | SDK Behavior | Your Action |
|---|---|---|
| 429 rate limit | Auto-retries with backoff | Increase `max_retries` or add keys |
| 5xx server error | Auto-retries with backoff | Increase `max_retries`; add fallback models |
| 401 auth error | Fails immediately (no retry) | Fix the API key and retry |
| 400 bad request | Fails immediately (no retry) | Fix request parameters |
| 402 no credits | Fails immediately (no retry) | Top up credits |
The SDK's `max_retries` option defaults to 2; 3-5 is recommended for production workloads.
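Callers not using the SDK can approximate the same retry behavior for 429/5xx with exponential backoff plus jitter. A minimal sketch (parameter names and defaults are illustrative):

```python
import random

# Sketch: exponential backoff with jitter for 429/5xx retries, mirroring
# the SDK behavior described above for callers using raw HTTP.
def backoff_delays(max_retries: int = 3, base: float = 0.5, cap: float = 8.0):
    """Yield the sleep time (in seconds) before each retry attempt."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))   # 0.5, 1, 2, 4, ... capped
        yield delay + random.uniform(0, delay / 2)  # jitter avoids thundering herd

delays = list(backoff_delays(max_retries=4))
```

Each retry sleeps for the yielded delay, then re-issues the request; after the last delay the caller should surface the error.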