Use when creating an llmring.lock file for a new project (REQUIRED for all applications), configuring model aliases with semantic task-based names, managing environment-specific profiles (dev/staging/prod), or setting up fallback models. Lockfile creation is the mandatory first step; the bundled lockfile is only for llmring's own tools.
Creates and manages llmring.lock files with semantic aliases, profiles, and fallback models. Use when initializing new projects or configuring model bindings before using aliases in code.
/plugin marketplace add juanre/llmring
/plugin install llmring@juanre-ai-tools
This skill inherits all available tools. When active, it can use any tool Claude has access to.
# With uv (recommended)
uv add llmring
# With pip
pip install llmring
You MUST create your own llmring.lock for every application or library you build with llmring.
The bundled lockfile that ships with llmring is ONLY for running llmring lock chat. It provides the "advisor" alias so the configuration assistant works. It is NOT for your application.
This skill covers lockfile (llmring.lock) structure and resolution, semantic alias binding, environment profiles, and fallback models.
# REQUIRED: Create lockfile in your project
llmring lock init
# BEFORE binding aliases, check available models:
llmring list --provider openai
llmring list --provider anthropic
llmring list --provider google
# THEN bind semantic aliases using CURRENT model names:
llmring bind summarizer "anthropic:claude-3-5-haiku-20241022"
llmring bind analyzer "openai:gpt-4o"
# Or use conversational configuration (recommended - knows current models)
llmring lock chat
⚠️ Important: Always check llmring list --provider <name> for current model names before binding. Model names change frequently (e.g., claude-3-5-sonnet-20241022 → claude-sonnet-4-5-20250929).
Using aliases in code:
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    # Use YOUR semantic alias (defined in llmring.lock)
    request = LLMRequest(
        model="summarizer",  # Resolves to model you configured
        messages=[Message(role="user", content="Hello")]
    )
    response = await service.chat(request)
Use domain-specific semantic names:
"summarizer" - Clear what it does"code-reviewer" - Describes purpose"extractor" - Self-documenting"sql-generator" - Intent is obviousAvoid generic performance descriptors:
"fast", "balanced", "deep" - Don't describe the taskGeneric names like "fast" appear in examples for illustration only. Real applications should use names that describe the task, not model characteristics.
LLMRing searches for lockfiles in this order:
1. Explicit lockfile_path parameter (must exist)
2. LLMRING_LOCKFILE_PATH environment variable (must exist)
3. ./llmring.lock in the current directory (if it exists)
4. Bundled lockfile (only for llmring lock chat - NOT for your app)

Example:
from llmring import LLMRing
# Use explicit lockfile
async with LLMRing(lockfile_path="./my-llmring.lock") as service:
    pass
# Or set via environment variable
# export LLMRING_LOCKFILE_PATH=/path/to/llmring.lock
# Or place llmring.lock in current directory (auto-detected)
Lockfiles use TOML format:
version = "1.0"
default_profile = "default"
[profiles.default]
name = "default"
[[profiles.default.bindings]]
alias = "summarizer"
models = ["openai:gpt-4o-mini"]
[[profiles.default.bindings]]
alias = "analyzer"
models = [
"anthropic:claude-sonnet-4-5-20250929", # Primary
"openai:gpt-4o", # Fallback
"google:gemini-2.5-pro" # Second fallback
]
[[profiles.default.bindings]]
alias = "code-reviewer"
models = ["anthropic:claude-sonnet-4-5-20250929"]
[profiles.dev]
name = "dev"
[[profiles.dev.bindings]]
alias = "assistant"
models = ["openai:gpt-4o-mini"] # Cheaper for development
[profiles.prod]
name = "prod"
[[profiles.prod.bindings]]
alias = "assistant"
models = ["anthropic:claude-sonnet-4-5-20250929"] # Higher quality for production
Create a new lockfile with registry-based defaults.
# Create in current directory
llmring lock init
# Overwrite existing
llmring lock init --force
# Create at specific path
llmring lock init --file path/to/llmring.lock
What it does: writes a new llmring.lock populated with registry-based default bindings.
Bind an alias to one or more models.
# Bind to single model
llmring bind fast "openai:gpt-4o-mini"
# Bind with fallbacks
llmring bind balanced "anthropic:claude-sonnet-4-5-20250929,openai:gpt-4o"
# Bind to specific profile
llmring bind assistant "openai:gpt-4o-mini" --profile dev
llmring bind assistant "anthropic:claude-opus-4" --profile prod
Format:
- Model references use provider:model format
- --profile name selects the profile (defaults to "default")

List all configured aliases.
# List aliases in default profile
llmring aliases
# List aliases in specific profile
llmring aliases --profile dev
# Show with details
llmring aliases --verbose
Output:
fast → openai:gpt-4o-mini
balanced → anthropic:claude-sonnet-4-5-20250929 (+ 2 fallbacks)
deep → anthropic:claude-opus-4
Conversational lockfile management with AI advisor.
# Start interactive chat for lockfile configuration
llmring lock chat
What it does: starts an interactive advisor that knows current models, recommends options with pricing, and writes the chosen bindings to your lockfile.
Example session:
You: I need a fast, cheap model for development
Advisor: I recommend gpt-4o-mini - it's $0.15/$0.60 per million tokens...
You: Set that as my 'dev' alias
Advisor: Done! Added binding dev → openai:gpt-4o-mini
Validate lockfile structure and bindings.
# Validate lockfile
llmring lock validate
# Validate specific file
llmring lock validate --file path/to/llmring.lock
from llmring import LLMRing
# Use lockfile from current directory or bundled default
async with LLMRing() as service:
    pass

# Use specific lockfile
async with LLMRing(lockfile_path="./custom.lock") as service:
    pass
from llmring import LLMRing
async with LLMRing() as service:
    # Resolve alias to concrete model reference
    model_ref = service.resolve_alias("fast")
    print(model_ref)  # "openai:gpt-4o-mini"

    # Resolve with profile
    model_ref = service.resolve_alias("assistant", profile="dev")
    print(model_ref)  # Profile-specific binding
from llmring import LLMRing
async with LLMRing() as service:
    # Bind alias to model
    service.bind_alias("myalias", "openai:gpt-4o")

    # Bind with profile
    service.bind_alias("assistant", "openai:gpt-4o-mini", profile="dev")
from llmring import LLMRing
async with LLMRing() as service:
    # Get all aliases for default profile
    aliases = service.list_aliases()
    for alias, model in aliases.items():
        print(f"{alias} → {model}")

    # Get aliases for specific profile
    aliases = service.list_aliases(profile="dev")
from llmring import LLMRing
async with LLMRing() as service:
    # Remove alias from default profile
    service.unbind_alias("myalias")

    # Remove alias from specific profile
    service.unbind_alias("assistant", profile="dev")
from llmring import LLMRing
async with LLMRing() as service:
    # Create new lockfile with defaults
    service.init_lockfile()

    # Overwrite existing lockfile
    service.init_lockfile(force=True)
Aliases are cached for performance. Clear when updating lockfile:
from llmring import LLMRing
async with LLMRing() as service:
    # Clear all cached aliases
    service.clear_alias_cache()

    # Now fresh lookups from lockfile
    model = service.resolve_alias("fast")
Profiles let you use different models in different environments.
# llmring.lock
[profiles.dev]
name = "dev"
[[profiles.dev.bindings]]
alias = "assistant"
models = ["openai:gpt-4o-mini"] # Cheap
[profiles.staging]
name = "staging"
[[profiles.staging.bindings]]
alias = "assistant"
models = ["anthropic:claude-sonnet-4-5-20250929"] # Mid-tier
[profiles.prod]
name = "prod"
[[profiles.prod.bindings]]
alias = "assistant"
models = [
"anthropic:claude-opus-4", # Best quality
"anthropic:claude-sonnet-4-5-20250929" # Fallback
]
Via environment variable:
# Set profile for entire application
export LLMRING_PROFILE=dev
# Now all requests use 'dev' profile
python my_app.py
Via CLI:
# Use specific profile
llmring chat "Hello" --profile dev
# List aliases in profile
llmring aliases --profile prod
In code:
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="assistant",
        messages=[Message(role="user", content="Hello")]
    )

    # Use dev profile
    response = await service.chat(request, profile="dev")

    # Use prod profile
    response = await service.chat(request, profile="prod")
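The three selection mechanisms compose; here is a minimal sketch of how they interact (an assumption for this sketch: "assistant" is bound in both the dev and prod profiles, as in the lockfile above):

import asyncio
import os

from llmring import LLMRing, LLMRequest, Message

os.environ["LLMRING_PROFILE"] = "dev"  # applies whenever no profile is passed

async def main():
    async with LLMRing() as service:
        request = LLMRequest(
            model="assistant",
            messages=[Message(role="user", content="Hello")],
        )
        # Explicit parameter outranks LLMRING_PROFILE → prod binding
        await service.chat(request, profile="prod")
        # No parameter → LLMRING_PROFILE ("dev") applies
        await service.chat(request)

asyncio.run(main())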
profile="dev" or --profile dev (highest)LLMRING_PROFILE=devdefault profile (lowest)Aliases can specify multiple models for automatic failover.
Lockfile:
[[profiles.default.bindings]]
alias = "balanced"
models = [
"anthropic:claude-sonnet-4-5-20250929", # Try first
"openai:gpt-4o", # If first fails
"google:gemini-2.5-pro" # If both fail
]
What happens:
async with LLMRing() as service:
    request = LLMRequest(
        model="balanced",  # Your semantic alias from the lockfile above
        messages=[Message(role="user", content="Hello")]
    )
    # Tries anthropic:claude-sonnet-4-5-20250929
    # If rate limited or unavailable → tries openai:gpt-4o
    # If that fails → tries google:gemini-2.5-pro
    response = await service.chat(request)
Use cases: surviving provider rate limits and outages, and smoothing over model deprecations, without touching application code.
To ship lockfiles with your Python package:
Add to pyproject.toml:
[tool.hatch.build]
include = [
"src/yourpackage/**/*.py",
"src/yourpackage/**/*.lock", # Include lockfiles
]
Or with setuptools, add to MANIFEST.in:
include src/yourpackage/*.lock
In your package:
mypackage/
├── src/
│ └── mypackage/
│ ├── __init__.py
│ └── llmring.lock # Ship with package
├── pyproject.toml
└── README.md
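Inside the package itself, you can point LLMRing at the bundled lockfile explicitly so resolution does not depend on the user's working directory. A minimal sketch assuming the layout above (make_service is an illustrative helper, not part of llmring):

from pathlib import Path

from llmring import LLMRing

# Resolve the lockfile shipped next to this module
BUNDLED_LOCKFILE = Path(__file__).parent / "llmring.lock"

def make_service() -> LLMRing:
    # Callers who want their own bindings can construct LLMRing
    # with a different lockfile_path instead (see below)
    return LLMRing(lockfile_path=BUNDLED_LOCKFILE)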
Users can then override:
from llmring import LLMRing
# Uses your package's bundled lockfile by default
async with LLMRing() as service:
    pass

# Or override with their own
async with LLMRing(lockfile_path="./my-llmring.lock") as service:
    pass
If building a library that uses LLMRing, follow this pattern:
Pattern:
- Ship a bundled llmring.lock with your library
- Accept a lockfile_path parameter for user override
- Validate required aliases in __init__

Simple Library Example:
from pathlib import Path
from llmring import LLMRing
DEFAULT_LOCKFILE = Path(__file__).parent / "llmring.lock"
REQUIRED_ALIASES = ["summarizer"]
class MyLibrary:
    def __init__(self, lockfile_path=None):
        """Initialize with optional custom lockfile.

        Args:
            lockfile_path: Optional path to custom lockfile.
                If None, uses library's bundled lockfile.

        Raises:
            ValueError: If lockfile is missing required aliases.
        """
        lockfile = lockfile_path or DEFAULT_LOCKFILE
        self.ring = LLMRing(lockfile_path=lockfile)

        # Validate required aliases (fail fast with clear error)
        self.ring.require_aliases(REQUIRED_ALIASES, context="my-library")

    def summarize(self, text: str) -> str:
        return self.ring.chat("summarizer", messages=[...]).content
Validation Helpers:
from llmring import LLMRing
ring = LLMRing(lockfile_path="./my.lock")

# Check if alias exists (returns bool, never raises)
if ring.has_alias("summarizer"):
    response = ring.chat("summarizer", messages=[...])

# Validate required aliases (raises ValueError if missing)
ring.require_aliases(
    ["summarizer", "analyzer"],
    context="my-library"  # Included in error message
)
# Error: "Lockfile missing required aliases for my-library: analyzer.
#         Lockfile path: /path/to/lockfile.lock
#         Please ensure your lockfile defines these aliases."
Library Composition:
When Library B uses Library A, pass lockfile to both:
class LibraryB:
    def __init__(self, lockfile_path=None):
        lockfile = lockfile_path or DEFAULT_LOCKFILE

        # Pass lockfile to Library A (gives us control)
        self.lib_a = LibraryA(lockfile_path=lockfile)

        # Use same lockfile for our own LLMRing
        self.ring = LLMRing(lockfile_path=lockfile)
        self.ring.require_aliases(REQUIRED_ALIASES, context="library-b")
User Override:
from my_library import MyLibrary
# Use library defaults
lib = MyLibrary()
# Override with custom lockfile
lib = MyLibrary(lockfile_path="./my-models.lock")
Best Practices:
- Call require_aliases() in __init__ to fail fast with a clear error

# Development: use cheap models
export LLMRING_PROFILE=dev
llmring bind assistant "openai:gpt-4o-mini" --profile dev
# Production: use best models
llmring bind assistant "anthropic:claude-opus-4" --profile prod
# Meaningful names instead of model IDs
llmring bind summarizer "openai:gpt-4o-mini"
llmring bind analyst "anthropic:claude-sonnet-4-5-20250929"
llmring bind coder "openai:gpt-4o"
Use in code:
# Clear intent from alias names
summarizer_request = LLMRequest(model="summarizer", ...)
analyst_request = LLMRequest(model="analyst", ...)
coder_request = LLMRequest(model="coder", ...)
[profiles.us-west]
[[profiles.us-west.bindings]]
alias = "assistant"
models = ["openai:gpt-4o"]
[profiles.eu-central]
[[profiles.eu-central.bindings]]
alias = "assistant"
models = ["anthropic:claude-sonnet-4-5-20250929"] # Better EU availability
# DON'T DO THIS - brittle, hard to change
request = LLMRequest(
    model="openai:gpt-4o-mini",
    messages=[...]
)
Right: Use Semantic Aliases
# DO THIS - flexible, easy to update
request = LLMRequest(
    model="summarizer",  # Semantic name defined in lockfile
    messages=[...]
)
# DON'T DO THIS - single point of failure
[[profiles.default.bindings]]
alias = "assistant"
models = ["anthropic:claude-sonnet-4-5-20250929"]
Right: Include Fallbacks
# DO THIS - automatic failover
[[profiles.default.bindings]]
alias = "assistant"
models = [
"anthropic:claude-sonnet-4-5-20250929",
"openai:gpt-4o",
"google:gemini-2.5-pro"
]
# DON'T DO THIS - same models everywhere
if os.getenv("ENV") == "dev":
    model = "openai:gpt-4o-mini"
else:
    model = "anthropic:claude-opus-4"

request = LLMRequest(model=model, ...)
Right: Use Profiles
# DO THIS - let lockfile handle it
# export LLMRING_PROFILE=dev (or prod)
request = LLMRequest(model="assistant", ...)
# DON'T DO THIS - wrong format
llmring bind fast "gpt-4o-mini" # Missing provider!
Right: Provider:Model Format
# DO THIS - include provider
llmring bind fast "openai:gpt-4o-mini"
- Use llmring lock chat for easy setup
- Call clear_alias_cache() after lockfile changes

from llmring import LLMRing, LLMRequest
from llmring.exceptions import ModelNotFoundError
async with LLMRing() as service:
    try:
        # Resolve alias
        model_ref = service.resolve_alias("myalias")
    except ModelNotFoundError:
        print("Alias not found in lockfile")

    try:
        # Use alias in request
        request = LLMRequest(model="myalias", messages=[...])
        response = await service.chat(request)
    except ModelNotFoundError:
        print("Could not resolve alias to available model")
- llmring-chat - Basic chat using aliases
- llmring-streaming - Streaming with aliases
- llmring-tools - Tools with aliased models
- llmring-structured - Structured output with aliases
- llmring-providers - Direct provider access (bypassing aliases)

Lockfiles provide: semantic aliases decoupled from model IDs, environment-specific profiles, and automatic fallback across providers.
Recommendation: Always use aliases instead of direct model references for flexibility and maintainability.