Use when extracting structured data from LLMs, parsing JSON responses, or enforcing output schemas - unified JSON schema API works across OpenAI, Anthropic, Google, and Ollama with automatic validation and parsing
Extract structured data from LLMs using JSON schemas with automatic validation and parsing. Use when you need to enforce output formats, parse responses into Python objects, or extract specific fields from text.
/plugin marketplace add juanre/llmring
/plugin install llmring@juanre-ai-tools

This skill inherits all available tools. When active, it can use any tool Claude has access to.
# With uv (recommended)
uv add llmring
# With pip
pip install llmring
Provider SDKs (install what you need):
uv add "openai>=1.0"      # OpenAI
uv add "anthropic>=0.67"  # Anthropic
uv add google-genai       # Google Gemini
uv add "ollama>=0.4"      # Ollama
This skill covers:
- response_format parameter in LLMRequest
- strict mode for validation
- parsed field in LLMResponse

from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    # Define JSON schema
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Generate a person")],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "person",
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "age": {"type": "integer"},
                        "email": {"type": "string"}
                    },
                    "required": ["name", "age"]
                }
            },
            "strict": True
        }
    )
    response = await service.chat(request)
    print("JSON string:", response.content)
    print("Parsed data:", response.parsed)  # Python dict
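The examples on this page use async with at top level for brevity. In a real script, wrap them in a coroutine and drive it with asyncio.run. A minimal sketch (the body is a placeholder for any example on this page):

```python
import asyncio

async def main():
    # Place any llmring example from this page here, e.g.:
    # async with LLMRing() as service:
    #     response = await service.chat(request)
    #     return response.parsed
    return "done"

if __name__ == "__main__":
    # asyncio.run creates an event loop and runs the coroutine to completion
    print(asyncio.run(main()))
```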
The response_format parameter controls structured output.
Structure:
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": str,    # Schema name
        "schema": dict  # JSON Schema definition
    },
    "strict": bool      # Optional: enforce strict validation (at response_format level)
}
Parameters:
- type (str, required): Must be "json_schema"
- json_schema (dict, required): Schema definition
  - name (str, required): Name for the schema
  - schema (dict, required): JSON Schema defining the structure
- strict (bool, optional): If true, strictly enforce the schema

Example:
from llmring import LLMRequest, Message
request = LLMRequest(
    model="extractor",  # Your alias for structured extraction
    messages=[Message(role="user", content="Generate data")],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "response",
            "schema": {
                "type": "object",
                "properties": {
                    "answer": {"type": "string"}
                },
                "required": ["answer"]
            }
        },
        "strict": True
    }
)
JSON Schema defines the expected structure.
Basic Types:
# String
{"type": "string"}
# Number (integer or float)
{"type": "number"}
# Integer only
{"type": "integer"}
# Boolean
{"type": "boolean"}
# Array
{
    "type": "array",
    "items": {"type": "string"}  # Array of strings
}
# Object
{
    "type": "object",
    "properties": {
        "field1": {"type": "string"},
        "field2": {"type": "integer"}
    },
    "required": ["field1"]  # Required fields
}
Example Schemas:
# Person schema
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"},
        "is_active": {"type": "boolean"}
    },
    "required": ["name", "age"]
}
# List of items schema
list_schema = {
    "type": "object",
    "properties": {
        "items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "integer"},
                    "title": {"type": "string"}
                }
            }
        }
    }
}
# Nested object schema
nested_schema = {
    "type": "object",
    "properties": {
        "user": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "address": {
                    "type": "object",
                    "properties": {
                        "street": {"type": "string"},
                        "city": {"type": "string"}
                    }
                }
            }
        }
    }
}
When using response_format, the response contains both raw JSON and parsed data.
Attributes:
- content (str): Raw JSON string
- parsed (dict): Parsed Python dictionary (ready to use)
- model (str): Model used
- usage (dict): Token usage
- finish_reason (str): Completion reason

Example:
response = await service.chat(request)
# Both available:
json_string = response.content # '{"name": "Alice", "age": 30}'
data = response.parsed # {"name": "Alice", "age": 30}
# Use parsed data directly
print(f"Name: {data['name']}")
print(f"Age: {data['age']}")
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    # Extract contact info from text
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Extract contact info: John Smith, age 35, email john@example.com"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "contact",
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "age": {"type": "integer"},
                        "email": {"type": "string"}
                    },
                    "required": ["name"]
                }
            },
            "strict": True
        }
    )
    response = await service.chat(request)
    contact = response.parsed
    print(f"Name: {contact['name']}, Age: {contact['age']}")
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="List 5 programming languages with their release years"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "languages",
                "schema": {
                    "type": "object",
                    "properties": {
                        "languages": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "name": {"type": "string"},
                                    "year": {"type": "integer"}
                                },
                                "required": ["name", "year"]
                            }
                        }
                    },
                    "required": ["languages"]
                }
            }
        }
    )
    response = await service.chat(request)
    for lang in response.parsed["languages"]:
        print(f"{lang['name']}: {lang['year']}")
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Classify sentiment: This product is amazing!"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "sentiment",
                "schema": {
                    "type": "object",
                    "properties": {
                        "sentiment": {
                            "type": "string",
                            "enum": ["positive", "negative", "neutral"]
                        },
                        "confidence": {
                            "type": "number",
                            "minimum": 0.0,
                            "maximum": 1.0
                        }
                    },
                    "required": ["sentiment", "confidence"]
                }
            },
            "strict": True
        }
    )
    response = await service.chat(request)
    result = response.parsed
    print(f"Sentiment: {result['sentiment']} ({result['confidence']:.2f})")
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Generate a blog post with title, author info, and tags"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "blog_post",
                "schema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "author": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "email": {"type": "string"}
                            },
                            "required": ["name"]
                        },
                        "tags": {
                            "type": "array",
                            "items": {"type": "string"}
                        },
                        "content": {"type": "string"}
                    },
                    "required": ["title", "author", "content"]
                }
            }
        }
    )
    response = await service.chat(request)
    post = response.parsed
    print(f"Title: {post['title']}")
    print(f"By: {post['author']['name']}")
    print(f"Tags: {', '.join(post['tags'])}")
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    # Enforce specific values
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="What's the priority of this bug?"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "bug_priority",
                "schema": {
                    "type": "object",
                    "properties": {
                        "priority": {
                            "type": "string",
                            "enum": ["low", "medium", "high", "critical"]
                        },
                        "reasoning": {"type": "string"}
                    },
                    "required": ["priority"]
                }
            },
            "strict": True
        }
    )
    response = await service.chat(request)
    # priority is guaranteed to be one of the enum values
from llmring import LLMRing, LLMRequest, Message
import json

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Generate JSON data")],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "data",
                "schema": {
                    "type": "object",
                    "properties": {
                        "result": {"type": "string"}
                    }
                }
            }
        }
    )
    # Stream JSON construction
    full_json = ""
    async for chunk in service.chat_stream(request):
        print(chunk.delta, end="", flush=True)
        full_json += chunk.delta
    # Parse final JSON
    data = json.loads(full_json)
    print(f"\nParsed: {data}")
When strict: True, the schema is strictly enforced:
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "data",
        "schema": {...}
    },
    "strict": True  # Strict validation at response_format level
}
Strict mode guarantees that the response conforms to the schema: required fields are present and field types match. Without strict mode, conformance is best-effort, so validate the parsed output before relying on it.
| Provider | JSON Schema | Strict Mode | Notes |
|---|---|---|---|
| OpenAI | Yes | Yes | Native support |
| Anthropic | Yes | Yes | Adapted automatically |
| Google | Yes | Yes | Adapted automatically |
| Ollama | Yes | Best-effort | Prompt-based adaptation |
LLMRing automatically adapts JSON schema to each provider's format.
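For providers without native JSON-schema enforcement, prompt-based adaptation generally means embedding the schema in the instructions sent to the model. The helper below is a hypothetical illustration of that general technique, not LLMRing's actual implementation:

```python
import json

def schema_to_instruction(name: str, schema: dict) -> str:
    """Hypothetical sketch: render a JSON Schema as a prompt instruction,
    the general idea behind prompt-based schema adaptation."""
    return (
        f"Respond with a single JSON object named '{name}' that conforms "
        f"to this JSON Schema, with no extra text:\n"
        f"{json.dumps(schema, indent=2)}"
    )

instruction = schema_to_instruction(
    "person",
    {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    },
)
print(instruction.splitlines()[0])
```

The instruction is typically prepended to the system or user message; the response is then parsed and validated on the client side.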
Wrong: Parse JSON Manually
# DON'T DO THIS - manually parse JSON
import json
response = await service.chat(request)
data = json.loads(response.content)  # Unnecessary
Right: Use Parsed Field
# DO THIS - use pre-parsed data
response = await service.chat(request)
data = response.parsed # Already a dict
Wrong: Omit Required Fields
# DON'T DO THIS - forgot to mark required fields
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    }
    # No "required" field!
}
Right: Specify Required Fields
# DO THIS - mark required fields
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    },
    "required": ["name", "age"]  # Both required
}
Wrong: Use Invalid Type Names
# DON'T DO THIS - invalid type
schema = {
    "type": "object",
    "properties": {
        "value": {"type": "int"}  # Wrong! Should be "integer"
    }
}
Right: Use Correct JSON Schema Types
# DO THIS - correct types
schema = {
    "type": "object",
    "properties": {
        "text": {"type": "string"},
        "count": {"type": "integer"},  # Not "int"
        "price": {"type": "number"},   # For floats
        "active": {"type": "boolean"}  # Not "bool"
    }
}
Wrong: Assume Parsing Always Succeeds
# DON'T DO THIS - assume parsing always works
response = await service.chat(request)
name = response.parsed["name"]  # May fail!
Right: Handle Missing Fields
# DO THIS - handle missing fields
response = await service.chat(request)
name = response.parsed.get("name", "Unknown")
if "age" in response.parsed:
    age = response.parsed["age"]
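Beyond per-field checks, you can validate the whole parsed payload against your schema before using it. The helper below is a minimal, hypothetical sketch that covers only "required" fields and top-level "type" checks; a full validator would use a library such as jsonschema:

```python
# Minimal sketch of post-hoc validation; checks only "required" fields
# and top-level property types, not the full JSON Schema spec.
_TYPES = {
    "string": str, "integer": int, "number": (int, float),
    "boolean": bool, "array": list, "object": dict,
}

def check_parsed(data: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload passed."""
    problems = []
    for field in schema.get("required", []):
        if field not in data:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in data and not isinstance(data[field], _TYPES[spec["type"]]):
            problems.append(f"{field}: expected {spec['type']}")
    return problems

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}
print(check_parsed({"name": "Alice"}, schema))  # ['missing required field: age']
```

This is most useful with providers where strict mode is best-effort, such as prompt-based adaptation.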
You can use structured output and tools together:
from llmring import LLMRing, LLMRequest, Message
async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Analyze this data")],
        tools=[...],  # Define tools
        response_format={  # Also request structured output
            "type": "json_schema",
            "json_schema": {
                "name": "analysis",
                "schema": {
                    "type": "object",
                    "properties": {
                        "summary": {"type": "string"},
                        "score": {"type": "number"}
                    }
                }
            }
        }
    )
    response = await service.chat(request)
    # May have tool_calls OR parsed JSON
    if response.tool_calls:
        # Handle tool execution
        pass
    elif response.parsed:
        # Handle structured output
        print(response.parsed)
Use .get() with defaults for optional fields.

Related skills:
- llmring-chat - Basic chat without structured output
- llmring-streaming - Stream structured JSON construction
- llmring-tools - Combine with function calling
- llmring-lockfile - Configure models for structured output
- llmring-providers - Provider-specific schema features