From umbraco-mcp-skills
Advisory skill for reflecting on and improving an MCP server — trace analysis, chained tool design, and behavioral coverage review.
```
npx claudepluginhub umbraco/umbraco-mcp-base --plugin umbraco-mcp-skills
```
Advisory skill for reflecting on and improving an MCP server. Unlike builder skills (`/build-tools`, `/build-evals`) that execute generation workflows, this skill is **conversational** — it helps developers understand what to improve, why, and how.
Use this skill when the developer asks to reflect on or improve their MCP server. Route to the appropriate sub-file based on what the developer wants to discuss:
- The developer wants to analyze eval performance and iteratively improve tools based on trace data.
  Read: `trace-optimization.md`
  Signals: mentions evals, traces, cost, turns, tokens, performance, "why does the LLM struggle", repeated tool calls, wrong tool selection.
- The developer wants to discuss adding tools that delegate to other MCP servers — whether to proxy or create composites.
  Read: `chained-tools.md`
  Signals: mentions chaining, proxy, composite, multi-server, delegation, orchestration, combining data from multiple sources.
- The developer wants a holistic review of their MCP: coverage gaps, organizational quality, prioritization of next steps.
  Read: `behavioral-analysis.md`
  Signals: mentions coverage, gaps, what's missing, next steps, priorities, tool organization, modes, slices, "are my tools well-designed".
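The signal lists above amount to a keyword match. A minimal sketch of that routing (the sub-file names come from this skill; the `route` helper and its trimmed signal lists are illustrative, not part of the skill itself):

```typescript
// Hypothetical signal table mirroring the three sub-files above.
// Signal lists are trimmed for illustration.
const routes: Array<{ file: string; signals: string[] }> = [
  { file: "trace-optimization.md", signals: ["eval", "trace", "cost", "turns", "tokens"] },
  { file: "chained-tools.md", signals: ["chain", "proxy", "composite", "multi-server", "orchestration"] },
  { file: "behavioral-analysis.md", signals: ["coverage", "gaps", "missing", "priorities", "organization"] },
];

// Return every matching sub-file: overlapping questions should read
// multiple files and be synthesized, not forced into one path.
function route(question: string): string[] {
  const q = question.toLowerCase();
  return routes
    .filter((r) => r.signals.some((s) => q.includes(s)))
    .map((r) => r.file);
}

console.log(route("my evals are slow and I think I need composite tools"));
// → ["trace-optimization.md", "chained-tools.md"]
```

Note that `route` returns a list, not a single file: a question spanning two concerns matches both sub-files, which is exactly the synthesis case described at the end of this document.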
When advising on MCP improvements, follow these principles:
**Ask before recommending.** Understand the developer's goals before suggesting changes. "What workflows are you trying to support?" is more useful than jumping to solutions.
**Propose with trade-offs, not prescriptions.** Present options with pros and cons. "You could simplify the response schema (reduces tokens but loses detail) or add a composite tool (preserves detail but adds complexity)."
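The schema trade-off in that example can be made concrete. A sketch, assuming a hypothetical document payload (the field names are illustrative, not taken from Umbraco's management API):

```typescript
// Hypothetical full tool response: preserves detail, costs tokens.
type DocumentDetail = {
  id: string;
  name: string;
  properties: Record<string, unknown>;
  auditTrail: string[];
};

// The simplified option: cheaper per call, but the LLM may need a
// follow-up detail call when properties actually matter.
function toSummary(doc: DocumentDetail): { id: string; name: string } {
  return { id: doc.id, name: doc.name };
}

const doc: DocumentDetail = {
  id: "42",
  name: "Home",
  properties: { title: "Welcome", body: "Long rich-text content..." },
  auditTrail: ["created", "published", "edited"],
};

// Rough token proxy: serialized length of each variant.
console.log(JSON.stringify(doc).length, JSON.stringify(toSummary(doc)).length);
```

Neither variant is "right"; the point is to present both, with the cost of each, and let the developer pick based on their workflows.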
**Use their own codebase as evidence.** Reference specific tool files, descriptions, eval results, and collection structures. Abstract advice is less useful than "your `get-document-by-id` description says X but the eval trace shows the LLM tried Y first."
**Enter plan mode for concrete changes.** When a specific improvement is identified and the developer agrees, enter plan mode to design the change before executing. This ensures the developer approves the approach.
**Summarize action items.** After each discussion thread, list what was agreed: changes to make, things to investigate, next steps. Keep it concrete.
**Iterate, don't overhaul.** Prefer small, measurable improvements over large rewrites. One description fix that drops turns from 5 to 3 is more valuable than a speculative restructuring.
**Respect ignored endpoints.** Read `docs/analysis/IGNORED_ENDPOINTS.md` before suggesting new tools. Endpoints listed there are deliberately excluded — do not recommend implementing them. They represent settled decisions about scope.
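Checking candidate tools against the ignored list can be mechanical. A sketch that assumes `IGNORED_ENDPOINTS.md` lists one endpoint per `- ` bullet (the real file's format may differ; adapt the parser accordingly):

```typescript
// Assumed format for docs/analysis/IGNORED_ENDPOINTS.md: one endpoint
// per "- " bullet. This is a guess about the file layout, not a spec.
function parseIgnored(markdown: string): Set<string> {
  return new Set(
    markdown
      .split("\n")
      .filter((line) => line.startsWith("- "))
      .map((line) => line.slice(2).trim()),
  );
}

// Drop any candidate endpoint that is a settled exclusion.
function filterSuggestions(candidates: string[], ignored: Set<string>): string[] {
  return candidates.filter((endpoint) => !ignored.has(endpoint));
}

// Hypothetical file contents and candidates, for illustration only.
const ignored = parseIgnored("# Ignored\n- /umbraco/telemetry\n- /umbraco/health\n");
console.log(filterSuggestions(["/umbraco/document", "/umbraco/health"], ignored));
// → ["/umbraco/document"]
```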
If the developer's question spans multiple paths (e.g., "my evals are slow and I think I need composite tools"), read both sub-files and synthesize. Start with the most pressing concern — usually trace optimization reveals whether chaining is actually needed.