Build a multi-provider LLM client abstraction layer for Rails applications. Use when integrating multiple LLM providers (OpenAI, Anthropic, Gemini, Ollama), implementing provider switching, feature-based model routing, or standardizing LLM responses across providers.
Builds a unified LLM client abstraction layer for Rails applications with multi-provider support, factory pattern, and standardized responses.
Install via the plugin marketplace:

```
/plugin marketplace add rbarazi/agent-skills
/plugin install agentify-skills@agentify-skills
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
A unified interface for working with multiple LLM providers through a factory pattern, YAML-driven configuration, and provider-specific client implementations.
```
                 ┌─────────────────────┐
                 │     LLMGateway      │
                 │   (Factory Class)   │
                 └─────────┬───────────┘
                           │ create(provider:, api_key:)
                           ▼
                 ┌─────────────────────┐
                 │      LLMConfig      │
                 │    (YAML Loader)    │
                 └─────────┬───────────┘
                           │
     ┌───────────┬─────────┴─────────┬────────────┐
     ▼           ▼                   ▼            ▼
┌─────────┐ ┌──────────┐        ┌──────────┐ ┌─────────┐
│ OpenAI  │ │Anthropic │  ...   │  Gemini  │ │ Ollama  │
│ Client  │ │  Client  │        │  Client  │ │ Client  │
└────┬────┘ └────┬─────┘        └────┬─────┘ └────┬────┘
     │           │                   │            │
     └───────────┴─────────┬─────────┴────────────┘
                           ▼
                 ┌─────────────────────┐
                 │      LLMClient      │
                 │    (Base Class)     │
                 │   - LLMResponse     │
                 │   - FEATURES        │
                 │   - Retry Logic     │
                 └─────────────────────┘
```
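In code, the factory can stay small. A minimal sketch of that top box, assuming providers map to client classes named as in the diagram; the `PROVIDERS` registry and `UnknownProviderError` are illustrative assumptions, not the skill's confirmed API:

```ruby
# Hypothetical factory sketch; the registry and error class are assumptions.
class LLMGateway
  class UnknownProviderError < StandardError; end

  PROVIDERS = {
    openai:    "OpenAIClient",
    anthropic: "AnthropicClient",
    gemini:    "GeminiClient",
    ollama:    "OllamaClient"
  }.freeze

  # Resolve the provider symbol to a client class and instantiate it.
  def self.create(provider:, api_key:)
    class_name = PROVIDERS.fetch(provider.to_sym) do
      raise UnknownProviderError, "no client registered for #{provider}"
    end
    Object.const_get(class_name).new(api_key: api_key)
  end
end
```

Keeping the registry in one place means adding a provider is a one-line change plus a new client subclass.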
```ruby
# 1. Create a client through the gateway
client = LLMGateway.create(provider: :openai, api_key: ENV['OPENAI_API_KEY'])

# 2. Send a message
response = client.create_message(
  system: "You are a helpful assistant",
  model: "gpt-4o",
  limit: 1000,
  messages: [{ role: "user", content: "Hello!" }]
)

# 3. Access the standardized response
puts response.content       # "Hello! How can I help you?"
puts response.finish_reason # "stop"
puts response.usage         # { "prompt_tokens" => 10, "completion_tokens" => 8 }
```
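That `response` is a plain value object rather than a raw provider payload. A sketch of how each client could normalize its JSON into that shape; the `from_openai`/`from_anthropic` constructors are assumptions for illustration, though the payload keys they read are the providers' real ones:

```ruby
# Illustrative normalization into a common response object.
LLMResponse = Struct.new(:content, :finish_reason, :usage, keyword_init: true) do
  # OpenAI chat completions: choices[0].message.content, usage.prompt_tokens.
  def self.from_openai(raw)
    new(
      content:       raw.dig("choices", 0, "message", "content"),
      finish_reason: raw.dig("choices", 0, "finish_reason"),
      usage:         raw["usage"]
    )
  end

  # Anthropic messages: content[0].text, usage.input_tokens/output_tokens,
  # renamed here so callers always see the same usage keys.
  def self.from_anthropic(raw)
    new(
      content:       raw.dig("content", 0, "text"),
      finish_reason: raw["stop_reason"],
      usage: {
        "prompt_tokens"     => raw.dig("usage", "input_tokens"),
        "completion_tokens" => raw.dig("usage", "output_tokens")
      }
    )
  end
end
```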
```ruby
# Simple creation
client = LLMGateway.create(provider: :anthropic, api_key: api_key)

# Model-aware creation (routes to the correct API variant)
client = LLMGateway.create_for_model(
  provider: :openai,
  model_name: "gpt-4o",
  api_key: api_key
)
```
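One way `create_for_model` can differ from `create`: validate the model against the configuration first, so unknown models fail fast instead of at request time. A hedged sketch; `LLMConfig.model_config` and `UnknownModelError` are assumed names (a loader sketch for `LLMConfig` follows the YAML example below):

```ruby
# Hypothetical sketch of model-aware creation.
class LLMGateway
  class UnknownModelError < StandardError; end

  def self.create_for_model(provider:, model_name:, api_key:)
    model = LLMConfig.model_config(provider, model_name)
    raise UnknownModelError, "#{provider} has no model #{model_name}" if model.nil?

    # A fuller version could inspect model metadata here to pick an API
    # variant (e.g. chat vs. embeddings client) before instantiating.
    create(provider: provider, api_key: api_key)
  end
end
```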
```yaml
# config/llm_models.yml
providers:
  openai:
    name: OpenAI
    client_class: OpenAIClient
    models:
      gpt-4o:
        model: gpt-4o
        features: [vision, function_calling, multimodal]
        context_length: 128000
        pricing: { input: 0.0025, output: 0.01 }
```
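Loading this file can be a thin, memoized wrapper over `YAML.load_file`. A sketch under the schema above; `model_config` and `features` are hypothetical helpers (the same ones the `create_for_model` sketch leaned on):

```ruby
# Illustrative loader for config/llm_models.yml; method names are assumptions.
require "yaml"

class LLMConfig
  def self.data
    @data ||= YAML.load_file(Rails.root.join("config", "llm_models.yml"))
  end

  def self.model_config(provider, model_name)
    data.dig("providers", provider.to_s, "models", model_name.to_s)
  end

  def self.features(provider, model_name)
    model_config(provider, model_name)&.fetch("features", []) || []
  end

  def self.supports_vision?(provider, model_name)
    features(provider, model_name).include?("vision")
  end

  def self.supports_function_calling?(provider, model_name)
    features(provider, model_name).include?("function_calling")
  end
end
```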
```ruby
# Check model features
LLMConfig.supports_vision?(:openai, "gpt-4o")            # => true
LLMConfig.supports_function_calling?(:openai, "gpt-4o")  # => true

# Find models by feature
LLMConfig.models_with_feature(:embeddings)
LLMConfig.cheapest_model_with_features(:openai, ["vision", "function_calling"])
```
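The discovery helpers can sit on the same class. A sketch that filters on the `features` array and ranks by the `pricing.input` figure from the YAML above; it assumes input price alone is a fair tiebreaker, where the real skill may weigh output price too:

```ruby
# Continuing the illustrative LLMConfig sketch: feature search and
# cost-aware selection over the YAML schema shown above.
class LLMConfig
  # All [provider, model_name] pairs advertising a feature.
  def self.models_with_feature(feature)
    data.fetch("providers", {}).flat_map do |provider, cfg|
      cfg.fetch("models", {})
         .select { |_name, m| m.fetch("features", []).include?(feature.to_s) }
         .keys
         .map { |name| [provider, name] }
    end
  end

  # Cheapest model of one provider that has every required feature.
  def self.cheapest_model_with_features(provider, required)
    models = data.dig("providers", provider.to_s, "models") || {}
    eligible = models.select do |_name, m|
      (required.map(&:to_s) - m.fetch("features", [])).empty?
    end
    eligible.min_by { |_name, m| m.dig("pricing", "input").to_f }&.first
  end
end
```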
Ideal for:
- Rails applications integrating multiple LLM providers (OpenAI, Anthropic, Gemini, Ollama)
- Provider switching and feature-based model routing
- Standardizing responses, errors, and usage reporting across providers

Consider alternatives if:
- You target a single provider and its official SDK already covers your needs

When implementation is complete, verify:
- Feature detection works (`supports_vision?`, `supports_function_calling?`)
- Errors are standardized with `error_type`, `error_code` fields
- Model metadata (`pricing`, `features`, `description`) is checked before API calls
- Requests to newer OpenAI models send `max_completion_tokens`, not `max_tokens`
- Capability checks read the `features` array for detection
- Usage keys are normalized across `prompt_tokens` vs `input_tokens` vs `promptTokenCount`
- Cost-aware routing honors `cheapest_model_with_features` selection

Detailed implementation guides:
- references/01-gateway-factory.md
- references/02-base-client.md
- references/03-configuration.md
- references/04-provider-clients.md
- references/05-error-handling.md
- references/06-model-sync.md
- references/07-usage-cost.md
- references/08-rails-adapter.md
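To give a flavor of the base-client plumbing the checklist refers to, a minimal retry sketch follows; the constants, backoff schedule, and `LLMError` shape are illustrative assumptions rather than the skill's confirmed code:

```ruby
# Hypothetical retry wrapper for the base client.
require "net/http" # for Net::ReadTimeout

class LLMError < StandardError
  attr_reader :error_type, :error_code

  def initialize(message, error_type:, error_code:)
    super(message)
    @error_type = error_type
    @error_code = error_code
  end
end

class LLMClient
  MAX_RETRIES      = 3
  RETRYABLE_ERRORS = [Net::ReadTimeout, Errno::ECONNRESET].freeze

  # Retries transient network failures with exponential backoff, then
  # surfaces a standardized error carrying error_type / error_code.
  def with_retries
    attempts = 0
    begin
      yield
    rescue *RETRYABLE_ERRORS => e
      attempts += 1
      if attempts < MAX_RETRIES
        sleep(2**attempts) # 2s, then 4s
        retry
      end
      raise LLMError.new(e.message, error_type: "network", error_code: e.class.name)
    end
  end
end
```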