test-quality
Generate tests for a specified module using project conventions. Use this skill when the user asks to "write tests for", "generate tests", "add test coverage", "test this module", or any request to create new test files following the project's existing patterns, fixtures, and assertion style. Optionally generates an eval suite if the eval-framework plugin is installed.
npx claudepluginhub ats-kinoshita-iso/agent-workshop --plugin test-quality

This skill uses the workspace's default tool permissions.
Generate tests for the module or file specified by the user. Steps:
- If tests/.test-knowledge.json exists, apply its patterns and conventions
- Use pytest.mark.parametrize for variant testing where appropriate
- Follow the project's test conventions strictly
- Place the test file in the correct directory, following existing patterns
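As a sketch of what a test generated under these conventions might look like: the `slugify` helper below is a hypothetical stand-in for the module under test (in a real run it would be imported from the user's project, not defined in the test file).

```python
# tests/test_slugify.py -- illustrative only; slugify is a stand-in
# for the project function this skill would actually be testing.
import pytest


def slugify(raw: str) -> str:
    # Stand-in implementation: trim, lowercase, spaces -> hyphens.
    return "-".join(raw.strip().lower().split())


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Hello World", "hello-world"),
        ("  spaced  ", "spaced"),
        ("MixedCase", "mixedcase"),
    ],
)
def test_slugify_variants(raw, expected):
    # One parametrized test covers all input variants.
    assert slugify(raw) == expected
```

The assertion style, fixture use, and file location would be adapted to whatever patterns the project already follows.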
Present the generated tests for review before writing them.
If the eval-framework plugin is installed, you may also generate an eval suite for agent skills or pipeline components (not just unit tests).
To generate an eval suite, ask: "Also generate an eval suite" or "include evals".
An eval suite differs from unit tests:
- Unit tests make exact assertions (assert result == expected)
- Eval cases score outputs against a threshold (score >= threshold)

When generating an eval suite (evals/<module>_eval.py):
- Use the score_output() helper to evaluate model responses

Eval suite template (requires eval-framework plugin):
```python
# evals/<module>_eval.py
from eval_framework import EvalCase, score_output

EVAL_CASES = [
    EvalCase(
        name="<scenario name>",
        input="<scenario input>",
        criteria=["<criterion 1>", "<criterion 2>"],
        threshold=0.8,
    ),
    # ... more cases
]
```
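The eval-framework plugin's internals aren't shown here, so the following is a minimal self-contained sketch of how cases like these could be scored against their thresholds. EvalCase, score_output, and run_eval below are stand-ins, not the plugin's real API:

```python
# Stand-in sketch of threshold-based eval scoring; the real
# eval-framework API may differ in names and signatures.
from dataclasses import dataclass, field


@dataclass
class EvalCase:
    name: str
    input: str
    criteria: list = field(default_factory=list)
    threshold: float = 0.8


def score_output(output: str, criteria: list) -> float:
    # Stand-in scorer: fraction of criteria mentioned in the output.
    # A real scorer would more likely use a model-based judge.
    hits = sum(1 for c in criteria if c.lower() in output.lower())
    return hits / len(criteria) if criteria else 0.0


def run_eval(case: EvalCase, output: str) -> bool:
    # A case passes when its score meets or exceeds its threshold.
    return score_output(output, case.criteria) >= case.threshold


case = EvalCase(
    name="mentions both topics",
    input="Summarize the release notes",
    criteria=["bugfix", "performance"],
    threshold=0.5,
)
print(run_eval(case, "This release includes a bugfix."))
```

The key contrast with a unit test is visible in run_eval: instead of asserting one exact output, each case accepts any response whose score clears the threshold.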
Only generate eval suites for skills, agents, or pipeline components -- not for pure utility functions (use unit tests for those).