Enable function calling, tool use, and agent workflows across OpenAI, Anthropic, Google, and Ollama with a unified API. Use when you need LLMs to call external functions: define tools with JSON Schema, pass them in requests, and handle tool_calls in responses for multi-turn execution. Tool definition and execution patterns are consistent across providers.
/plugin marketplace add juanre/llmring
/plugin install llmring@juanre-ai-tools

This skill inherits all available tools. When active, it can use any tool Claude has access to.
# With uv (recommended)
uv add llmring
# With pip
pip install llmring
Provider SDKs (install what you need):
uv add "openai>=1.0"      # OpenAI
uv add "anthropic>=0.67"  # Anthropic
uv add google-genai       # Google Gemini
uv add "ollama>=0.4"      # Ollama (prompt-based tools)
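Before calling any provider, credentials must be available. A minimal sketch, assuming the provider SDKs' standard environment variable names (these are SDK conventions, not llmring-specific settings; Ollama runs locally and needs no key):

import os

# Assumed standard SDK variable names - set only the providers you use.
os.environ["OPENAI_API_KEY"] = "sk-..."         # OpenAI
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # Anthropic
os.environ["GOOGLE_API_KEY"] = "..."            # Google Gemini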
This skill covers:
- `tools` parameter in `LLMRequest`
- `tool_choice` parameter for controlling tool selection
- `tool_calls` in `LLMResponse`

from llmring import LLMRing, LLMRequest, Message
# Define tools
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    }
}]

async with LLMRing() as service:
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=[Message(role="user", content="What's the weather in NYC?")],
        tools=tools
    )
    response = await service.chat(request)

    # Check if model wants to call a tool
    if response.tool_calls:
        tool_call = response.tool_calls[0]
        print(f"Tool: {tool_call['function']['name']}")
        print(f"Args: {tool_call['function']['arguments']}")
Tools use JSON Schema format for function definitions.
Structure:
tool = {
    "type": "function",
    "function": {
        "name": str,               # Function name
        "description": str,        # What the function does
        "parameters": {            # JSON Schema for parameters
            "type": "object",
            "properties": {
                "param_name": {
                    "type": str,   # "string", "number", "boolean", "array", "object"
                    "description": str,
                    "enum": List[str]  # Optional: allowed values
                }
            },
            "required": List[str]  # Required parameter names
        }
    }
}
Example:
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. 'New York' or 'London'"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    }
}
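Because these definitions are plain dicts, a small helper can cut the repetition when you define several tools. This is purely illustrative (make_tool is not part of llmring):

def make_tool(name: str, description: str, properties: dict, required: list) -> dict:
    """Build a tool definition in the JSON Schema format shown above."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required
            }
        }
    }

get_weather_tool = make_tool(
    "get_weather",
    "Get the current weather for a location",
    {"location": {"type": "string", "description": "City name, e.g. 'New York'"}},
    ["location"]
)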
Parameters:
- `tools` (list, optional): List of available tools
- `tool_choice` (str/dict, optional): Control tool selection
  - `"auto"`: Model decides whether to use tools (default)
  - `"none"`: Force no tool use
  - `"required"`: Force tool use (OpenAI; Anthropic's equivalent is `"any"`)
  - `{"type": "function", "function": {"name": "tool_name"}}`: Force a specific tool

Note: Provider differences - OpenAI uses `"required"`, Anthropic uses `"any"` for the same behavior. LLMRing handles the translation automatically.
Example:
from llmring import LLMRequest, Message
# Let model decide
request = LLMRequest(
    model="tool-user",  # Your alias for tool-using tasks
    messages=[Message(role="user", content="What's the weather?")],
    tools=[get_weather_tool]
)

# Force tool use
request = LLMRequest(
    model="tool-user",  # Your alias for tool-using tasks
    messages=[Message(role="user", content="Check NYC weather")],
    tools=[get_weather_tool],
    tool_choice="required"  # Must use a tool
)

# Force specific tool
request = LLMRequest(
    model="tool-user",  # Your alias for tool-using tasks
    messages=[Message(role="user", content="Get weather")],
    tools=[get_weather_tool, search_tool],
    tool_choice={
        "type": "function",
        "function": {"name": "get_weather"}
    }
)

# Disable tools
request = LLMRequest(
    model="tool-user",  # Your alias for tool-using tasks
    messages=[Message(role="user", content="Just chat")],
    tools=[get_weather_tool],
    tool_choice="none"  # Don't use tools
)
When the model wants to call a tool, response.tool_calls is populated.
Structure:
tool_call = {
    "id": str,            # Unique tool call ID
    "type": "function",
    "function": {
        "name": str,      # Function name
        "arguments": str  # JSON string of arguments
    }
}
Example:
import json

response = await service.chat(request)

if response.tool_calls:
    for tool_call in response.tool_calls:
        tool_name = tool_call["function"]["name"]
        tool_args = json.loads(tool_call["function"]["arguments"])
        tool_id = tool_call["id"]

        print(f"Tool: {tool_name}")
        print(f"Arguments: {tool_args}")
After executing a tool, send the result back with role="tool".
Structure:
tool_result = Message(
    role="tool",
    content=str,      # Tool execution result (JSON string)
    tool_call_id=str  # ID from the tool_call
)
Example:
import json
from llmring import Message
# Execute tool
tool_result = {"temperature": 72, "condition": "sunny"}

# Send result back
result_message = Message(
    role="tool",
    content=json.dumps(tool_result),
    tool_call_id=tool_call["id"]
)
import json
from llmring import LLMRing, LLMRequest, Message
def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    """Mock weather function."""
    return {
        "location": location,
        "temperature": 72,
        "unit": unit,
        "condition": "sunny"
    }
# Define tools
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    }
}]
async with LLMRing() as service:
    messages = [
        Message(role="user", content="What's the weather in San Francisco?")
    ]

    # First request: Model decides to use tool
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=messages,
        tools=tools
    )
    response = await service.chat(request)

    # Add assistant's tool call to history
    messages.append(Message(
        role="assistant",
        content=response.content or "",
        tool_calls=response.tool_calls
    ))

    # Execute tools
    if response.tool_calls:
        for tool_call in response.tool_calls:
            # Parse arguments
            args = json.loads(tool_call["function"]["arguments"])

            # Execute function
            result = get_weather(**args)

            # Add tool result to messages
            messages.append(Message(
                role="tool",
                content=json.dumps(result),
                tool_call_id=tool_call["id"]
            ))

    # Second request: Model uses tool results to answer
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=messages,
        tools=tools
    )
    response = await service.chat(request)

    print(response.content)
    # "The weather in San Francisco is 72°F and sunny."
from llmring import LLMRing, LLMRequest, Message
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for information",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"}
                },
                "required": ["query"]
            }
        }
    }
]
async with LLMRing() as service:
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=[Message(role="user", content="Weather in Paris and latest news")],
        tools=tools
    )
    response = await service.chat(request)

    # Model may call multiple tools
    if response.tool_calls:
        print(f"Model wants to call {len(response.tool_calls)} tools")
import json
from llmring import LLMRing, LLMRequest, Message
# Available functions
FUNCTIONS = {
    "get_weather": lambda location: {"temp": 72, "condition": "sunny"},
    "search_web": lambda query: {"results": ["Result 1", "Result 2"]}
}
async with LLMRing() as service:
    messages = [
        Message(role="user", content="What's the weather in NYC?")
    ]
    tools = [...]  # Your tool definitions

    # Loop until model stops calling tools
    while True:
        request = LLMRequest(
            model="tool-user",  # Your alias for tool-using tasks
            messages=messages,
            tools=tools
        )
        response = await service.chat(request)

        # Add assistant response
        messages.append(Message(
            role="assistant",
            content=response.content or "",
            tool_calls=response.tool_calls
        ))

        # If no tool calls, we're done
        if not response.tool_calls:
            break

        # Execute each tool call
        for tool_call in response.tool_calls:
            func_name = tool_call["function"]["name"]
            args = json.loads(tool_call["function"]["arguments"])

            # Execute function
            result = FUNCTIONS[func_name](**args)

            # Add result
            messages.append(Message(
                role="tool",
                content=json.dumps(result),
                tool_call_id=tool_call["id"]
            ))

    # Final response
    print(response.content)
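An open-ended `while True` loop can run indefinitely if the model keeps requesting tools. A common safeguard, sketched here as a drop-in replacement for the loop above with a hypothetical MAX_TOOL_ROUNDS cap, is to bound the number of rounds:

MAX_TOOL_ROUNDS = 10  # hypothetical limit; tune for your workload

for _ in range(MAX_TOOL_ROUNDS):
    request = LLMRequest(model="tool-user", messages=messages, tools=tools)
    response = await service.chat(request)
    messages.append(Message(
        role="assistant",
        content=response.content or "",
        tool_calls=response.tool_calls
    ))
    if not response.tool_calls:
        break  # model produced a final answer

    for tool_call in response.tool_calls:
        args = json.loads(tool_call["function"]["arguments"])
        result = FUNCTIONS[tool_call["function"]["name"]](**args)
        messages.append(Message(
            role="tool",
            content=json.dumps(result),
            tool_call_id=tool_call["id"]
        ))
else:
    # Loop exhausted without a final answer; handle as an error or truncate
    raise RuntimeError("Model did not finish within MAX_TOOL_ROUNDS")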
Some models can call multiple tools in parallel:
import json
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=[Message(role="user", content="Weather in NYC and Paris")],
        tools=tools
    )
    response = await service.chat(request)

    # Model may request multiple tool calls at once
    if response.tool_calls:
        for tool_call in response.tool_calls:
            print(f"Tool: {tool_call['function']['name']}")
            print(f"Args: {tool_call['function']['arguments']}")

        # Execute all tools, then send all results back
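Since parallel calls are independent of each other, you can also execute them concurrently before sending the batch of results back. A sketch using asyncio.gather, assuming your tool implementations are async and live in a hypothetical ASYNC_FUNCTIONS registry:

import asyncio
import json

async def run_tool(tool_call) -> Message:
    """Execute one tool call and wrap its result as a tool message."""
    args = json.loads(tool_call["function"]["arguments"])
    result = await ASYNC_FUNCTIONS[tool_call["function"]["name"]](**args)  # hypothetical registry
    return Message(
        role="tool",
        content=json.dumps(result),
        tool_call_id=tool_call["id"]
    )

# Run every requested tool concurrently, then append all results in order
tool_messages = await asyncio.gather(*(run_tool(tc) for tc in response.tool_calls))
messages.extend(tool_messages)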
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="tool-user",  # Your alias for tool-using tasks
        messages=[Message(role="user", content="What's the weather?")],
        tools=tools
    )

    tool_calls_accumulated = []

    async for chunk in service.chat_stream(request):
        print(chunk.delta, end="", flush=True)

        # Accumulate tool calls (streamed incrementally)
        if chunk.tool_calls:
            tool_calls_accumulated = chunk.tool_calls

    # After streaming, check for tool calls
    if tool_calls_accumulated:
        print("\nModel wants to call tools")
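After the stream ends, the accumulated tool calls are handled exactly like the non-streaming case: append the assistant turn, execute the tools, and issue a follow-up request. A sketch, assuming the conversation history is kept in a `messages` list and tools are dispatched via a FUNCTIONS registry as in the earlier examples:

import json

if tool_calls_accumulated:
    messages.append(Message(
        role="assistant",
        content="",  # any streamed text would go here
        tool_calls=tool_calls_accumulated
    ))
    for tool_call in tool_calls_accumulated:
        args = json.loads(tool_call["function"]["arguments"])
        result = FUNCTIONS[tool_call["function"]["name"]](**args)
        messages.append(Message(
            role="tool",
            content=json.dumps(result),
            tool_call_id=tool_call["id"]
        ))

    # Follow-up request (can itself be streamed) for the final answer
    final = await service.chat(LLMRequest(model="tool-user", messages=messages, tools=tools))
    print(final.content)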
import json
from llmring import LLMRing, LLMRequest, Message
def execute_tool_safely(func_name: str, args: dict) -> dict:
    """Execute tool with error handling."""
    try:
        result = FUNCTIONS[func_name](**args)
        return {"success": True, "data": result}
    except Exception as e:
        return {"success": False, "error": str(e)}
async with LLMRing() as service:
    # ... after getting tool_calls ...
    for tool_call in response.tool_calls:
        func_name = tool_call["function"]["name"]
        args = json.loads(tool_call["function"]["arguments"])

        # Execute with error handling
        result = execute_tool_safely(func_name, args)

        # Send result (including errors)
        messages.append(Message(
            role="tool",
            content=json.dumps(result),
            tool_call_id=tool_call["id"]
        ))
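The arguments string itself can occasionally be malformed JSON, so it's worth guarding the parse step too; a small sketch extending the same pattern:

def parse_args_safely(tool_call) -> dict:
    """Parse tool-call arguments, reporting bad JSON instead of raising."""
    try:
        return {"success": True, "args": json.loads(tool_call["function"]["arguments"])}
    except json.JSONDecodeError as e:
        return {"success": False, "error": f"invalid arguments JSON: {e}"}

Sending the error back as the tool result, rather than crashing, gives the model a chance to retry with corrected arguments.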
| Provider | Tool Support | Notes |
|---|---|---|
| OpenAI | Native | Full support, parallel calls |
| Anthropic | Native | Full support |
| Google | Native | Full support |
| Ollama | Prompt-based | Tools via prompt engineering |
Note: Ollama uses prompt-based tool calling. LLMRing handles the adaptation automatically.
Wrong: Missing Assistant Message

# DON'T DO THIS - skip assistant message
if response.tool_calls:
    for tool_call in response.tool_calls:
        result = execute_tool(tool_call)
        messages.append(Message(role="tool", content=result))
        # Missing assistant message!
Right: Include Assistant Message
# DO THIS - add assistant message with tool_calls
messages.append(Message(
    role="assistant",
    content=response.content or "",
    tool_calls=response.tool_calls  # Include this!
))

# Then add tool results
for tool_call in response.tool_calls:
    result = execute_tool(tool_call)
    messages.append(Message(
        role="tool",
        content=json.dumps(result),
        tool_call_id=tool_call["id"]
    ))
Wrong: Missing tool_call_id

# DON'T DO THIS - no tool_call_id
messages.append(Message(
    role="tool",
    content=json.dumps(result)
))
Right: Include tool_call_id
# DO THIS - include tool_call_id
messages.append(Message(
    role="tool",
    content=json.dumps(result),
    tool_call_id=tool_call["id"]  # Required!
))
Wrong: Python Dict as Content

# DON'T DO THIS - Python dict
result = {"temperature": 72}
messages.append(Message(
    role="tool",
    content=result,  # Should be string!
    tool_call_id=tool_id
))
Right: JSON String
# DO THIS - JSON string
result = {"temperature": 72}
messages.append(Message(
    role="tool",
    content=json.dumps(result),  # Convert to string
    tool_call_id=tool_id
))
Wrong: Vague Descriptions

# DON'T DO THIS - vague description
{
    "name": "get_data",
    "description": "Gets data",
    "parameters": {...}
}
Right: Clear Descriptions
# DO THIS - specific, actionable description
{
    "name": "get_weather",
    "description": "Get the current weather conditions for a specific city, including temperature and general conditions",
    "parameters": {...}
}
Related skills:
- llmring-chat - Basic chat without tools
- llmring-streaming - Streaming tool calls
- llmring-structured - Combine tools with structured output
- llmring-lockfile - Configure models for tool use
- llmring-providers - Provider-specific tool features