From getsentry-sentry-agent-skills-1
Setup Sentry AI Agent Monitoring in any project. Use when asked to monitor LLM calls, track AI agents, or instrument OpenAI/Anthropic/Vercel AI/LangChain/Google GenAI/Pydantic AI. Detects installed AI SDKs and configures appropriate integrations.
Install via:

```shell
npx claudepluginhub joshuarweaver/cascade-code-testing-misc --plugin getsentry-sentry-agent-skills-1
```
Configure Sentry to track LLM calls, agent executions, tool usage, and token consumption.
Important: The SDK versions, API names, and code samples below are examples. Always verify against docs.sentry.io before implementing, as APIs and minimum versions may have changed.
AI monitoring requires tracing enabled (tracesSampleRate > 0).
Prompt and output recording captures user content that is likely PII. Before enabling recordInputs/recordOutputs (JS) or include_prompts/send_default_pii (Python), ask the user whether they want prompt/output capture enabled. Do not enable it by default; configure it only when explicitly requested or confirmed.
Use tracesSampleRate: 1.0 only in development. In production, use a lower value or a tracesSampler function.
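The production sampling rule can be sketched as a Python `traces_sampler` function that keeps every AI trace while downsampling the rest. The rates and the op-prefix check are illustrative assumptions, not required values:

```python
# Sketch: full sampling for gen_ai.* spans, 10% for everything else.
# Passed to sentry_sdk.init(traces_sampler=traces_sampler) instead of
# traces_sample_rate; verify the sampling_context shape on docs.sentry.io.
def traces_sampler(sampling_context):
    ctx = sampling_context.get("transaction_context") or {}
    op = ctx.get("op", "")
    if op.startswith("gen_ai."):
        return 1.0  # keep all AI traces
    return 0.1      # sample 10% of other traffic
```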
Always detect installed AI SDKs before configuring:
```shell
# JavaScript
grep -E '"(openai|@anthropic-ai/sdk|ai|@langchain|@google/genai)"' package.json

# Python
grep -E '(openai|anthropic|langchain|huggingface)' requirements.txt pyproject.toml 2>/dev/null
```
| Package | Integration | Min Sentry SDK | Auto? |
|---|---|---|---|
| openai | openAIIntegration() | 10.28.0 | Yes |
| @anthropic-ai/sdk | anthropicAIIntegration() | 10.28.0 | Yes |
| ai (Vercel) | vercelAIIntegration() | 10.6.0 | Yes* |
| @langchain/* | langChainIntegration() | 10.28.0 | Yes |
| @langchain/langgraph | langGraphIntegration() | 10.28.0 | Yes |
| @google/genai | googleGenAIIntegration() | 10.28.0 | Yes |
*Vercel AI: 10.6.0+ for Node.js, Cloudflare Workers, Vercel Edge Functions, Bun. 10.12.0+ for Deno. Requires experimental_telemetry per-call.
Integrations auto-enable when the AI package is installed — no explicit registration needed:
| Package | Auto? | Notes |
|---|---|---|
| openai | Yes | Includes OpenAI Agents SDK |
| anthropic | Yes | |
| langchain / langgraph | Yes | |
| huggingface_hub | Yes | |
| google-genai | Yes | |
| pydantic-ai | Yes | |
| litellm | No | Requires explicit integration |
| mcp (Model Context Protocol) | Yes | |
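For LiteLLM, which the table marks as not auto-enabled, explicit registration can be sketched as follows. The import path and class name are assumptions; verify both against docs.sentry.io for your SDK version:

```python
import sentry_sdk
# Assumed import path -- confirm on docs.sentry.io before use.
from sentry_sdk.integrations.litellm import LiteLLMIntegration

sentry_sdk.init(
    dsn="YOUR_DSN",
    traces_sample_rate=1.0,  # Lower in production
    integrations=[LiteLLMIntegration()],
)
```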
Just ensure tracing is enabled. Integrations auto-enable when the AI package is installed:
```javascript
Sentry.init({
  dsn: "YOUR_DSN",
  tracesSampleRate: 1.0, // Lower in production (e.g., 0.1)
  // OpenAI, Anthropic, Google GenAI, LangChain integrations auto-enable in Node.js
});
```
To customize (e.g., enable prompt capture — see Data Capture Warning):
```javascript
integrations: [
  Sentry.openAIIntegration({
    // recordInputs: true, // Opt-in: captures prompt content (PII)
    // recordOutputs: true, // Opt-in: captures response content (PII)
  }),
],
```
In browser-side code or Next.js meta-framework apps, auto-instrumentation is not available. Wrap the client manually:
```javascript
import OpenAI from "openai";
import * as Sentry from "@sentry/nextjs"; // or @sentry/react, @sentry/browser

const openai = Sentry.instrumentOpenAiClient(new OpenAI());
// Use 'openai' client as normal
```
For LangChain and LangGraph:

```javascript
integrations: [
  Sentry.langChainIntegration({
    // recordInputs: true, // Opt-in: captures prompt content (PII)
    // recordOutputs: true, // Opt-in: captures response content (PII)
  }),
  Sentry.langGraphIntegration({
    // recordInputs: true,
    // recordOutputs: true,
  }),
],
```
Add to sentry.edge.config.ts for Edge runtime:
```javascript
integrations: [Sentry.vercelAIIntegration()],
```
Enable telemetry per-call:
```javascript
await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello",
  experimental_telemetry: {
    isEnabled: true,
    // recordInputs: true, // Opt-in: captures prompt content (PII)
    // recordOutputs: true, // Opt-in: captures response content (PII)
  },
});
```
Integrations auto-enable — just init with tracing. Only add explicit imports to customize options:
```python
import sentry_sdk

sentry_sdk.init(
    dsn="YOUR_DSN",
    traces_sample_rate=1.0,  # Lower in production (e.g., 0.1)
    # send_default_pii=True,  # Opt-in: required for prompt capture (sends user PII)
    # Integrations auto-enable when the AI package is installed.
    # Only specify explicitly to customize (e.g., include_prompts):
    # integrations=[OpenAIIntegration(include_prompts=True)],
)
```
Use when no supported SDK is detected.
| op Value | Purpose |
|---|---|
| gen_ai.request | Individual LLM calls |
| gen_ai.invoke_agent | Agent execution lifecycle |
| gen_ai.execute_tool | Tool/function calls |
| gen_ai.handoff | Agent-to-agent transitions |
```javascript
await Sentry.startSpan({
  op: "gen_ai.request",
  name: "LLM request gpt-4o",
  attributes: { "gen_ai.request.model": "gpt-4o" },
}, async (span) => {
  span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
  const result = await llmClient.complete(prompt);
  span.setAttribute("gen_ai.usage.input_tokens", result.inputTokens);
  span.setAttribute("gen_ai.usage.output_tokens", result.outputTokens);
  return result;
});
```
| Attribute | Description |
|---|---|
| gen_ai.request.model | Model identifier |
| gen_ai.request.messages | JSON input messages |
| gen_ai.usage.input_tokens | Input token count |
| gen_ai.usage.output_tokens | Output token count |
| gen_ai.agent.name | Agent identifier |
| gen_ai.tool.name | Tool identifier |
Enable prompt/output capture only after confirming with the user (see Data Capture Warning above).
After configuring, make an LLM call and check the Sentry Traces dashboard. AI spans appear with gen_ai.* operations showing model, token counts, and latency.
| Issue | Solution |
|---|---|
| AI spans not appearing | Verify tracesSampleRate > 0, check SDK version |
| Token counts missing | Some providers don't return tokens for streaming |
| Prompts not captured | Enable recordInputs (JS) or include_prompts + send_default_pii (Python), after confirming with the user |
| Vercel AI not working | Add experimental_telemetry to each call |