Analyzes PyTorch internals across the Python, C++, and CUDA layers via the TorchTalk MCP server. Traces operators, call graphs, dispatch, and the impact of changes, and locates tests.
Install:
npx claudepluginhub opendatahub-io/ai-helpers --plugin odh-ai-helpers
This skill is limited to using the TorchTalk MCP tools listed below.
Other skills in the same plugin:
- Builds and trains PyTorch neural networks, including models, training loops, data pipelines, torch.compile optimization, distributed training, and deployment.
- Provides LeetCode-style PyTorch interview practice with auto-grading, gradient verification, and timing for 40 problems implementing softmax, LayerNorm, attention, and GPT-2 from scratch.
- Provides PyTorch 2.6–2.11 updates: torch.load weights_only=True default, FSDP2 fully_shard, torch.compile mega cache/hierarchical/control flow, varlen_attn, FlexAttention FA4, TorchScript deprecated. Load before writing PyTorch code.
This skill enables cross-language analysis of PyTorch internals by leveraging the TorchTalk MCP server. It traces binding chains from Python through C++ to CUDA, analyzes dispatch mechanisms, maps call graphs, and locates test infrastructure.
Run /torchtalk:setup if TorchTalk is not yet installed. Verify availability:
mcp__torchtalk__get_status
If the status tool returns data, all tools below are ready.
Before using any tools, confirm the TorchTalk server is running:
mcp__torchtalk__get_status
Check that the call succeeds and returns status data. If the server is not available, direct the user to run /torchtalk:setup.
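A minimal sketch of this pre-check, assuming a call_tool(name, arguments) helper that stands in for however the MCP client invokes a tool (the helper and its return shape are assumptions, not part of TorchTalk):

```python
def torchtalk_ready(call_tool) -> bool:
    """Return True if the TorchTalk MCP server responds to get_status.

    `call_tool(name, arguments)` is a hypothetical stand-in for the MCP
    client's tool invocation; it is not part of the TorchTalk API.
    """
    try:
        # get_status takes no arguments; any data in the reply means the
        # server is up and the tools below are ready to use.
        status = call_tool("mcp__torchtalk__get_status", {})
    except Exception:
        return False
    return bool(status)

# If this returns False, direct the user to run /torchtalk:setup.
```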
Match the user's question to the appropriate tool:
| Question Pattern | Tool | Example |
|---|---|---|
| "How does X work?" / "Trace X" | mcp__torchtalk__trace | trace("softmax", "full") |
| "Find functions matching X" | mcp__torchtalk__search | search("conv", "CUDA") |
| "Where are the CUDA kernels for X?" | mcp__torchtalk__cuda_kernels | cuda_kernels("softmax") |
| "What does X call?" | mcp__torchtalk__calls | calls("at::native::add") |
| "What calls X?" | mcp__torchtalk__called_by | called_by("at::native::add") |
| "What breaks if I change X?" | mcp__torchtalk__impact | impact("at::native::add", 3) |
| "How does nn.Linear work?" | mcp__torchtalk__trace_module | trace_module("Linear") |
| "List all nn modules" | mcp__torchtalk__list_modules | list_modules("nn") |
| "Find tests for X" | mcp__torchtalk__find_similar_tests | find_similar_tests("softmax") |
| "What test utilities exist?" | mcp__torchtalk__list_test_utils | list_test_utils("all") |
| "What tests are in file X?" | mcp__torchtalk__test_file_info | test_file_info("test_torch") |
For simple lookups, a single tool call suffices. For deeper questions, combine multiple tools:
"How does torch.softmax work end-to-end?"
mcp__torchtalk__trace("softmax", "full") - Get the binding chainmcp__torchtalk__cuda_kernels("softmax") - Find GPU kernelsmcp__torchtalk__calls("at::native::softmax") - See internal dependencies"What breaks if I modify at::native::add?"
mcp__torchtalk__impact("at::native::add", 3) - Transitive callersmcp__torchtalk__find_similar_tests("add") - Affected tests"How does nn.Linear connect to native code?"
mcp__torchtalk__trace_module("Linear") - Module definitionmcp__torchtalk__trace("linear", "full") - Native operator chainFormat results with:
file:line references for every implementation location| Tool | Parameters | Description |
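A rough sketch of the first workflow ("How does torch.softmax work end-to-end?"), chaining three tool calls into one report. call_tool is the same hypothetical MCP-invocation stand-in as above, and the at::native::&lt;op&gt; naming is an assumption that holds for many but not all operators:

```python
def explain_operator_end_to_end(call_tool, op: str = "softmax") -> dict:
    """Combine trace, CUDA-kernel, and call-graph lookups for one operator.

    `call_tool(name, arguments)` is a hypothetical helper; argument names
    follow the parameter columns in the reference tables below.
    """
    return {
        "binding_chain": call_tool("mcp__torchtalk__trace",
                                   {"function_name": op, "focus": "full"}),
        "cuda_kernels": call_tool("mcp__torchtalk__cuda_kernels",
                                  {"function_name": op}),
        "internal_calls": call_tool("mcp__torchtalk__calls",
                                    {"function_name": f"at::native::{op}"}),
    }
```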
| Tool | Parameters | Description |
|---|---|---|
| mcp__torchtalk__trace | function_name, focus? | Trace Python to C++ binding chain. Focus: "full", "yaml", "dispatch" |
| mcp__torchtalk__search | query, backend?, limit? | Find bindings by name with optional backend filter |
| mcp__torchtalk__cuda_kernels | function_name? | Find GPU kernel launches with file:line |
| Tool | Parameters | Description |
|---|---|---|
| mcp__torchtalk__impact | function_name, depth? | Transitive callers + Python entry points (depth 1-5) |
| mcp__torchtalk__calls | function_name | Functions this function invokes (outbound) |
| mcp__torchtalk__called_by | function_name | Functions that invoke this (inbound) |
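Conceptually, the depth parameter of impact bounds a breadth-first walk over the reverse call graph. The sketch below shows that idea on a toy graph; the graph data and helper are illustrative, not TorchTalk output:

```python
from collections import deque

# Toy reverse call graph: function -> functions that call it (illustrative only).
CALLED_BY = {
    "at::native::add": ["at::add", "at::native::add_out"],
    "at::add": ["torch.add (Python binding)"],
    "at::native::add_out": ["at::add_out"],
}

def impacted(function: str, depth: int = 3) -> set[str]:
    """Collect transitive callers up to `depth` hops, as impact() does conceptually."""
    seen, frontier = set(), deque([(function, 0)])
    while frontier:
        name, d = frontier.popleft()
        if d == depth:
            continue
        for caller in CALLED_BY.get(name, []):
            if caller not in seen:
                seen.add(caller)
                frontier.append((caller, d + 1))
    return seen

print(impacted("at::native::add", 2))
```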
| Tool | Parameters | Description |
|---|---|---|
| mcp__torchtalk__trace_module | module_name | Trace torch.nn.Linear, torch.optim.Adam, etc. |
| mcp__torchtalk__list_modules | category? | List modules: "nn" (default), "optim", "all", or search query |
| Tool | Parameters | Description |
|---|---|---|
| mcp__torchtalk__find_similar_tests | query, limit? | Find tests for an operator or concept |
| mcp__torchtalk__list_test_utils | category? | List test utilities: "all" (default), "fixtures", "assertions", "decorators" |
| mcp__torchtalk__test_file_info | file_path | Details about a specific test file |
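A small sketch of using the test tools together before touching an operator: find candidate tests, then fetch details for each file. call_tool is the same hypothetical helper, and the assumption that each match carries a "file_path" key is mine, not documented behavior:

```python
def tests_to_check_before_changing(call_tool, op: str) -> list:
    """Find tests related to an operator, then pull details for each test file.

    Assumes each find_similar_tests result exposes its file path under
    "file_path"; the real response shape may differ.
    """
    matches = call_tool("mcp__torchtalk__find_similar_tests",
                        {"query": op, "limit": 10})
    details = []
    for match in matches:
        details.append(call_tool("mcp__torchtalk__test_file_info",
                                 {"file_path": match["file_path"]}))
    return details
```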
Troubleshooting:
- Server not available: run /torchtalk:setup.
- Missing C++ results: build PyTorch from source (e.g., python setup.py develop) to generate compile_commands.json.
- Function not found: try mcp__torchtalk__search with partial names, or check spelling.
- To inspect the server from the command line, run torchtalk status.