LangSmith tracing and debugging setup for LLM applications. Configure observability, capture traces, and enable debugging for LangChain/LangGraph agents.
Configure LangSmith observability and tracing for LLM applications built with the LangChain and LangGraph frameworks.
LangSmith is the managed observability platform from LangChain that provides:

- End-to-end tracing of LLM calls, chains, and agent runs
- Latency, token-usage, and error monitoring
- Datasets and evaluation tooling for regression testing
- A playground for debugging and iterating on prompts
# Set required environment variables
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<project-name>
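Before instrumenting anything, it can help to fail fast when the tracing environment is incomplete. The sketch below is illustrative (the helper name `tracing_env_ready` is not part of any SDK); it only checks the three variables exported above:

```python
import os

# Minimal sketch: report which tracing variables are missing.
# Variable names match the exports above; values here are placeholders.
REQUIRED = ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"]

def tracing_env_ready(env=None):
    """Return the list of missing variables (empty when tracing is configured)."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

missing = tracing_env_ready({"LANGCHAIN_TRACING_V2": "true",
                             "LANGCHAIN_API_KEY": "ls-placeholder",
                             "LANGCHAIN_PROJECT": "my-project"})
print(missing)  # []
```

Running this check at application startup surfaces misconfiguration immediately, rather than silently producing untraced runs.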
from langsmith import Client, traceable

# Initialize client (reads LANGCHAIN_API_KEY from the environment)
client = Client()

# Use the @traceable decorator to trace custom functions
@traceable(name="custom_operation")
def my_function(input_data):
    result = input_data  # Replace with your logic
    return result
# Initialize tracer for LangChain
tracer = LangChainTracer(project_name="my-project")
# Use with LangChain chains
chain.invoke(input, config={"callbacks": [tracer]})
from datetime import datetime, timedelta

# Fetch traces from LangSmith
runs = client.list_runs(
    project_name="my-project",
    start_time=datetime.now() - timedelta(hours=24),
    execution_order=1,  # Root runs only
    error=False,        # Successful runs only
)

for run in runs:
    print(f"Run ID: {run.id}")
    print(f"Latency: {run.end_time - run.start_time}")
    print(f"Tokens: {run.total_tokens}")
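Fetched runs can be rolled up into summary statistics. The sketch below is a stand-in illustration: real `Run` objects from `client.list_runs()` expose `start_time`, `end_time`, and `total_tokens`, but here a small dataclass simulates them so the example runs without a LangSmith account:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Stand-in for the LangSmith Run fields used below.
@dataclass
class RunRecord:
    start_time: datetime
    end_time: datetime
    total_tokens: int

def summarize(runs):
    """Aggregate latency and token usage across a batch of runs."""
    latencies = [(r.end_time - r.start_time).total_seconds() for r in runs]
    return {
        "count": len(runs),
        "avg_latency_s": sum(latencies) / len(runs),
        "total_tokens": sum(r.total_tokens for r in runs),
    }

t0 = datetime(2024, 1, 1, 12, 0, 0)
sample = [RunRecord(t0, t0 + timedelta(seconds=2), 150),
          RunRecord(t0, t0 + timedelta(seconds=4), 250)]
print(summarize(sample))  # {'count': 2, 'avg_latency_s': 3.0, 'total_tokens': 400}
```

The same `summarize` function works unchanged on the iterator returned by `client.list_runs()`, since it only reads the three attributes shown.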
When used in a babysitter process, this skill produces:
const langsmithTracingTask = defineTask({
  name: 'langsmith-tracing-setup',
  description: 'Configure LangSmith tracing for the application',
  inputs: {
    projectName: { type: 'string', required: true },
    apiKeyEnvVar: { type: 'string', default: 'LANGCHAIN_API_KEY' },
    samplingRate: { type: 'number', default: 1.0 },
    enableDebug: { type: 'boolean', default: false }
  },
  outputs: {
    configured: { type: 'boolean' },
    projectUrl: { type: 'string' },
    artifacts: { type: 'array' }
  },
  async run(inputs, taskCtx) {
    return {
      kind: 'skill',
      title: `Configure LangSmith tracing for ${inputs.projectName}`,
      skill: {
        name: 'langsmith-tracing',
        context: {
          projectName: inputs.projectName,
          apiKeyEnvVar: inputs.apiKeyEnvVar,
          samplingRate: inputs.samplingRate,
          enableDebug: inputs.enableDebug,
          instructions: [
            'Verify LangSmith API credentials are available',
            'Create or validate project configuration',
            'Set up tracing instrumentation in codebase',
            'Configure sampling rate and debug settings',
            'Verify traces are being captured correctly'
          ]
        }
      },
      io: {
        inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
        outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
      }
    };
  }
});