Run a local framework-aware code intelligence MCP server to index codebases across 81 languages and 60+ frameworks, enabling semantic searches with call graphs and dead code detection, safe refactoring workflows like rename/move/extract with impact analysis, bulk codemods for repeated edits, security/quality scans, and pre-commit/PR checks on git changes.
npx claudepluginhub nikolai-vysotskyi/trace-mcp --plugin trace-mcp

Use trace-mcp apply_codemod for any bulk mechanical change instead of repeated Edit calls. Activate whenever the same edit pattern would be applied 2+ times, across one file or many.
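An MCP tool like apply_codemod is invoked with a single JSON-RPC `tools/call` message. As a minimal sketch, the helper below builds such a request; the argument names (`pattern`, `replacement`, `glob`) are hypothetical and should be checked against the tool's actual input schema via `tools/list`:

```python
import json


def codemod_request(pattern: str, replacement: str, glob: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' message for trace-mcp's apply_codemod.

    The argument names here (pattern, replacement, glob) are assumptions for
    illustration -- consult the tool's schema from tools/list for the real ones.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "apply_codemod",
            "arguments": {
                "pattern": pattern,
                "replacement": replacement,
                "glob": glob,
            },
        },
    }
    return json.dumps(payload)


# One request covers an edit pattern that would otherwise need N repeated
# Edit calls, e.g. renaming a logger call across a whole source tree:
msg = codemod_request("log.warn($ARGS)", "log.warning($ARGS)", "src/**/*.py")
```

The point of the sketch is the shape of the exchange: one structured request describing the pattern, rather than one Edit call per occurrence.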
Run trace-mcp security, quality-gate, and antipattern checks before committing or opening a PR. Activate when the agent is about to create a commit or pull request in a project indexed by trace-mcp.
Safe refactoring workflow using trace-mcp — assess risk, find candidates, check impact, and rename symbols across all files without missing import sites or cross-file references.
Use trace-mcp tools for code navigation, impact analysis, and framework-aware queries instead of Read/Grep/Glob/Bash. Activate whenever the agent needs to explore, understand, or modify a codebase that has trace-mcp indexed.
Graph-first code intelligence for AI agents. SurrealDB knowledge graph + 52 MCP tools replace Read/Grep/Glob with deterministic graph traversal. 80–95% fewer tokens on code context. Rust-native, fully local.
AST knowledge graph for intelligent code navigation — auto-indexes your codebase and provides semantic search, call graph traversal, HTTP route tracing, and impact analysis via MCP tools
Code intelligence powered by a knowledge graph. Provides execution flow tracing, blast radius analysis, and augmented search across your codebase.
Claude on Rails Tooling System
Codebase intelligence — semantic search workflows, dependency graph analysis, and context artifact exploration for SocratiCode
Codebase exploration, refactoring, and quality analysis
AI agents recompute the same work. trace-mcp makes them reuse instead.
The recomputation → reuse layer for AI systems.
40–50% fewer tokens on average · up to 2× effective capacity · up to 99% less redundant processing
Based on early benchmarks across agent workflows with repeated context and dependency traversal.
AI systems don't scale because they recompute instead of reuse. Every turn, the agent re-reads the same files, re-traverses the same dependencies, and re-inflates the context window with structure it already discovered. Token bills grow. Latency grows. Reasoning quality drops. The model isn't the bottleneck — the recomputation leak is.
trace-mcp builds a framework-aware graph of your codebase once, then serves it through MCP so the agent reasons from a precomputed structure instead of brute-reading the repo. Ask "what breaks if I change this model?" — instead of 80 Grep calls and 190 file reads, the agent calls get_change_impact once and gets the blast radius across PHP, Vue, migrations, and DI. One tool call replaces ~42 minutes of agent exploration. 81 framework integrations across 80 languages, 153 tools.

The same engine indexes markdown vaults.
[[wikilinks]] become first-class edges, frontmatter and #tags become metadata, headings become nested sections. find_usages returns backlinks. apply_rename rewrites every link to a renamed note. One MCP for code and knowledge — no second tool to plug in.
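Renaming a note is likewise one tool call. A minimal sketch of the request, assuming hypothetical `from`/`to` arguments (the real parameter names come from apply_rename's schema in `tools/list`):

```python
import json


def rename_note_request(old_path: str, new_path: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' message for trace-mcp's apply_rename.

    The 'from'/'to' argument names are an assumption for illustration;
    the server rewrites every [[wikilink]] pointing at the renamed note.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "apply_rename",
            "arguments": {"from": old_path, "to": new_path},
        },
    })


# Rename a note; backlinks across the vault are updated server-side.
msg = rename_note_request("notes/Old Title.md", "notes/New Title.md")
```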
Also ships a desktop app with a GPU graph explorer over the same index.
AI is bottlenecked not by models, but by recomputation. Agents treat the context window like a database — they re-read the same files, re-traverse the same dependencies, and re-inflate context every turn with structure they already computed five steps ago. Token bills, latency, and hallucinations all grow with project size instead of with task complexity.
trace-mcp closes the recomputation leak. The graph is built once, kept incrementally fresh, and served to every agent that asks — so the same work isn't paid for over and over.
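To make the "blast radius in one call" idea concrete, here is a small sketch of how an agent might consume a get_change_impact result. The response shape (affected files grouped by kind, e.g. PHP, Vue, migrations) is an assumption for illustration, not the documented schema:

```python
def summarize_impact(result: dict) -> str:
    """Flatten a hypothetical get_change_impact response into one line.

    Assumes the response maps a kind (e.g. 'php', 'vue', 'migrations')
    to the list of affected file paths -- an illustrative shape only.
    """
    total = sum(len(files) for files in result.values())
    parts = ", ".join(f"{len(files)} {kind}" for kind, files in result.items())
    return f"{total} affected sites ({parts})"


# Example blast radius for a model change, mirroring the PHP/Vue/migrations
# example above (file paths are made up):
blast = {
    "php": ["app/Models/User.php", "app/Http/Controllers/UserController.php"],
    "vue": ["resources/js/Profile.vue"],
    "migrations": ["database/migrations/2024_01_01_users.php"],
}
print(summarize_impact(blast))  # → 4 affected sites (2 php, 1 vue, 1 migrations)
```

One such summary is what replaces the dozens of Grep calls and file reads the agent would otherwise spend rediscovering the same structure.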