sentry-mcp
Sentry's MCP service is primarily designed for human-in-the-loop coding agents. Our tool selection and priorities are focused on developer workflows and debugging use cases, rather than providing a general-purpose MCP server for all Sentry functionality.
This remote MCP server acts as middleware to the upstream Sentry API, optimized for coding assistants like Cursor, Claude Code, and similar development tools. It's based on Cloudflare's work towards remote MCPs.
Getting Started
You'll find everything you need to know by visiting the deployed service in production:
https://mcp.sentry.dev
If you're looking to contribute, learn how it works, or run this against a self-hosted Sentry install, continue below.
Claude Code Plugin
Install as a Claude Code plugin for automatic subagent delegation:
claude plugin marketplace add getsentry/sentry-mcp
claude plugin install sentry-mcp@sentry-mcp
This provides a sentry-mcp subagent that Claude automatically delegates to when you ask about Sentry errors, issues, traces, or performance.
For experimental tool variants and features:
claude plugin install sentry-mcp@sentry-mcp-experimental
Stdio vs Remote
While this repository is focused on acting as an MCP service, we also support a stdio transport. This is still a work in progress, but it's the easiest way to run the MCP against a self-hosted Sentry install.
Note: The AI-powered search tools (search_events, search_issues, etc.) require an LLM provider (OpenAI or Anthropic). These tools use natural language processing to translate queries into Sentry's query syntax. Without a configured provider, these specific tools will be unavailable, but all other tools will function normally.
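For example (illustrative only; the actual translation depends on the model and your data), a request like "unresolved errors from the last 24 hours" might be translated into Sentry's search syntax as:

```
is:unresolved level:error age:-24h
```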
To use the stdio transport, you'll need to create a User Auth Token in Sentry with the necessary scopes. As of this writing, those are:
org:read
project:read
project:write
team:read
team:write
event:write
Launch the transport:
npx @sentry/mcp-server@latest --access-token=sentry-user-token
Need to connect to a self-hosted deployment? Add --host (hostname only, e.g. --host=sentry.example.com) when you run the command. For isolated internal deployments that only expose plain HTTP, also add --insecure-http.
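The flag logic above can be sketched as a small shell helper (a sketch with placeholder values; the token and host values are assumptions, not real credentials):

```shell
# Assemble the stdio launch command from placeholder settings.
TOKEN="sntrys_example_token"   # your User Auth Token
HOST=""                        # empty = Sentry SaaS; e.g. "sentry.internal:9000" for self-hosted
INSECURE=false                 # true only for plain-HTTP internal deployments

CMD="npx @sentry/mcp-server@latest --access-token=$TOKEN"
if [ -n "$HOST" ]; then CMD="$CMD --host=$HOST"; fi
if [ "$INSECURE" = true ]; then CMD="$CMD --insecure-http"; fi
echo "$CMD"
```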
Some features (like Seer) may not be available on self-hosted instances. You can
disable specific skills to prevent unsupported tools from being exposed:
npx @sentry/mcp-server@latest --access-token=TOKEN --host=sentry.example.com --disable-skills=seer
For self-hosted instances without TLS:
npx @sentry/mcp-server@latest --access-token=TOKEN --host=sentry.internal:9000 --insecure-http
Environment Variables
SENTRY_ACCESS_TOKEN= # Required: Your Sentry auth token
# LLM Provider Configuration (required for AI-powered search tools)
EMBEDDED_AGENT_PROVIDER= # Required: 'openai' or 'anthropic'
OPENAI_API_KEY= # Required if using OpenAI
ANTHROPIC_API_KEY= # Required if using Anthropic
# Optional overrides
SENTRY_HOST= # For self-hosted deployments
MCP_DISABLE_SKILLS= # Disable specific skills (comma-separated, e.g. 'seer')
Important: Always set EMBEDDED_AGENT_PROVIDER to explicitly specify your LLM provider. Auto-detection based on API keys alone is deprecated and will be removed in a future release. See docs/embedded-agents.md for detailed configuration options.
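Putting the variables together, a minimal shell setup might look like this (placeholder values throughout; the provider/key consistency check at the end is illustrative, not part of the CLI):

```shell
# Minimal environment for the stdio server (placeholder values).
export SENTRY_ACCESS_TOKEN="sntrys_example_token"
export EMBEDDED_AGENT_PROVIDER="openai"         # or "anthropic"
export OPENAI_API_KEY="sk-example"              # required when the provider is "openai"
# export SENTRY_HOST="sentry.example.com"       # only for self-hosted deployments
# export MCP_DISABLE_SKILLS="seer"              # skip skills your instance lacks

# Illustrative sanity check: the chosen provider needs its matching API key.
if [ "$EMBEDDED_AGENT_PROVIDER" = "openai" ] && [ -z "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is required when EMBEDDED_AGENT_PROVIDER=openai" >&2
  exit 1
fi
echo "env ok"
```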
Example MCP Configuration
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"EMBEDDED_AGENT_PROVIDER": "openai",
"OPENAI_API_KEY": "sk-..."
}
}
}
}
If you leave SENTRY_HOST unset, the CLI automatically targets the Sentry SaaS service. Only set the override when you operate self-hosted Sentry.
For self-hosted instances that don't support Seer:
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@sentry/mcp-server"],
"env": {
"SENTRY_ACCESS_TOKEN": "your-token",
"SENTRY_HOST": "sentry.example.com",
"MCP_DISABLE_SKILLS": "seer"
}
}
}
}
MCP Inspector
MCP includes an Inspector for easily testing the service:
pnpm inspector
Enter the MCP server URL (http://localhost:5173) and hit connect. This should trigger the authentication flow for you.
Note: If you have issues with your OAuth flow when accessing the inspector on 127.0.0.1, try using localhost instead by visiting http://localhost:6274.
Local Development
To contribute changes, you'll need to set up your local environment:
- Set up environment and agent skills: