From umbraco-mcp-skills
Add or update an LLM eval test for a specific tool or collection. Creates a new eval scenario or updates an existing one to cover new tools. Use when adding eval coverage for new or modified tools.
```
npx claudepluginhub umbraco/umbraco-mcp-base --plugin umbraco-mcp-skills
```

This skill uses the workspace's default tool permissions.
Add or update an LLM eval test for a specific tool or collection. This skill creates a new eval scenario or updates an existing one to cover new tools.
Use this skill when:
- You just ran `/add-tool` and want eval coverage
- `/build-evals` skipped the collection because eval files already exist

Before running, ensure:

- The collection's tools exist (`src/umbraco-api/tools/{collection}/index.ts`)
- The project builds (`npm run build`)
- Your Anthropic API key is set (`ANTHROPIC_API_KEY`)

Example invocations:

- `/add-eval form`
- `/add-eval form "copy form workflow"`

If no hint is provided, compare the tools in the collection against existing eval test coverage and suggest what's missing.
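The no-hint comparison amounts to a set difference between the collection's tools and the tools existing scenarios exercise. A minimal sketch, with made-up tool and file names (the real skill reads these from the collection's `index.ts` and the `tests/evals/` files rather than hard-coding them):

```typescript
// Illustrative only: tool names and file paths below are invented examples,
// not the actual form collection.
type EvalScenario = { file: string; tools: string[]; requiredTools: string[] };

// A tool is "covered" if any scenario exposes or requires it.
function uncoveredTools(collectionTools: string[], scenarios: EvalScenario[]): string[] {
  const covered = new Set(scenarios.flatMap((s) => [...s.tools, ...s.requiredTools]));
  return collectionTools.filter((t) => !covered.has(t));
}

const collectionTools = ["create-form", "get-form", "update-form", "copy-form"];
const scenarios: EvalScenario[] = [
  {
    file: "tests/evals/form-crud.test.ts",
    tools: ["create-form", "get-form", "update-form"],
    requiredTools: ["create-form"],
  },
];

console.log(uncoveredTools(collectionTools, scenarios)); // → [ "copy-form" ]
```

The skill would then suggest adding coverage for the uncovered tools it finds.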
| Agent | When to use |
|---|---|
| `eval-test-creator` | Creating or updating eval test files (Step 3 or 4) |
**BUILD BEFORE RUNNING.** Eval tests run against `dist/index.js`. Always run `npm run build` first.

**RUN COMMANDS SEPARATELY.** Always run build and test as separate Bash calls. Never chain them with `&&`.

**ITERATE ON PROMPTS.** Eval tests are probabilistic. If a test fails, the fix is usually in the prompt.

**VERBOSE DURING DEVELOPMENT.** Always set `verbose: true` when creating or debugging.
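To illustrate the last two points, a scenario under active development might carry the flag like this. The field names here are assumptions for illustration only; the skill's real test template is owned by the eval-test-creator agent:

```typescript
// Hypothetical scenario config; field names are illustrative assumptions,
// not the skill's actual template.
const scenario = {
  prompt: "Create a form named 'Contact', then copy it as 'Contact (backup)'.",
  requiredTools: ["create-form", "copy-form"],
  verbose: true, // keep on while creating or debugging: the transcript shows *why* a run failed
};

// Evals are probabilistic, so a failing run usually means the prompt needs
// tightening (clearer goal, explicit entity names), not a code change.
console.log(scenario.verbose ? "developing: full transcripts on" : "stable: quiet mode");
```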
Check if the eval setup exists:
- `tests/evals/helpers/e2e-setup.ts`
- `tests/evals/jest.config.ts`

If the setup doesn't exist, tell the user to run `/build-evals {collection}` first to create the infrastructure. This skill does not create eval setup files.
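The existence check itself is small. A minimal sketch, where the two paths come from the skill and the helper function is our own illustration:

```typescript
import { existsSync } from "node:fs";

// Returns the subset of paths that do not exist on disk.
function missingFiles(paths: string[]): string[] {
  return paths.filter((p) => !existsSync(p));
}

const setupFiles = ["tests/evals/helpers/e2e-setup.ts", "tests/evals/jest.config.ts"];
const missing = missingFiles(setupFiles);
if (missing.length > 0) {
  console.log(`Eval setup incomplete, run /build-evals {collection} first. Missing: ${missing.join(", ")}`);
}
```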
Read the collection's tools and existing eval tests:
- `src/umbraco-api/tools/{collection}/index.ts` — all tools in the collection
- `tests/evals/{collection}-*.test.ts` — existing eval scenarios

Build an inventory:
- Which tools each existing scenario already exercises (its `tools` and `requiredTools` arrays)

Based on the inventory and the user's request:
Update an existing scenario when:
- The new tool fits an existing workflow (e.g. adding `copy-form` to a CRUD scenario)

Create a new scenario when:
If updating an existing eval test:
- Add the new tool to the `COLLECTION_TOOLS` array
- Add it to `requiredTools` if it must be called

Build and run:
```
npm run build
npm run test:evals -- --testPathPattern="{collection}"
```
If the test fails, iterate on the prompt; common fixes are small prompt adjustments rather than code changes.
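For reference, the two wiring edits from the update path above might look like this in a hypothetical test file. Names are illustrative; the real template belongs to the eval-test-creator agent:

```typescript
// Hypothetical excerpt of an existing eval test's tool wiring.
const COLLECTION_TOOLS = [
  "create-form",
  "get-form",
  "update-form",
  "copy-form", // 1) expose the newly added tool to the model
];

const requiredTools = ["create-form", "copy-form"]; // 2) require it if the scenario must call it

// Sanity check: a required tool must also be exposed, or the eval can never pass.
const unexposed = requiredTools.filter((t) => !COLLECTION_TOOLS.includes(t));
console.log(unexposed.length === 0 ? "wiring ok" : `not exposed: ${unexposed.join(", ")}`);
```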
Use the `eval-test-creator` agent to create `tests/evals/{collection}-{workflow}.test.ts`. The agent owns the test template and patterns.
Build and run:
```
npm run build
npm run test:evals -- --testPathPattern="{collection}-{workflow}"
```
Iterate on the prompt until the test passes reliably.
Report what was done: