This skill should be used when the user is writing, reviewing, or improving tests. Triggers on phrases like write tests, add tests, test naming, what should I test, edge cases, test structure, or assertion. Also triggers when the user shares a test and asks for review, asks about assertion best practices, or reports finding a bug and wants to know what else to test. Do NOT trigger for test runner configuration or CI/CD pipeline setup.
How to write tests that catch bugs, document behavior, and remain maintainable.
🚨 Test names describe outcomes, not actions. "returns empty array when input is null" not "test null input". The name IS the specification.
🚨 Assertions must match test titles. If the test claims to verify "different IDs", assert on the actual ID values—not just count or existence.
🚨 Assert specific values, not types. expect(result).toEqual(['First.', ' Second.']) not expect(result).toBeDefined(). Specific assertions catch specific bugs.
🚨 One concept per test. Each test verifies one behavior. If you need "and" in your test name, split it.
🚨 Bugs cluster together. When you find one bug, test related scenarios. The same misunderstanding often causes multiple failures.
Pattern: [outcome] when [condition]
returns empty array when input is null
throws ValidationError when email format invalid
calculates tax correctly for tax-exempt items
preserves original order when duplicates removed
test null input // What about null input?
should work // What does "work" mean?
handles edge cases // Which edge cases?
email validation test // What's being validated?
Your test name should read like a specification. If someone reads ONLY the test names, they should understand the complete behavior of the system.
See resources/assertion-examples.ts for complete examples, including weak (toBeDefined) vs strong (toEqual) assertions.
See resources/test-structure-examples.ts for complete examples of test structure.
When testing a function, systematically consider these edge cases based on input types.
""" "[], {}null inputundefined inputundefined vs missing keyThese test implicit assumptions in your domain:
When testing code that validates properties against type constraints (e.g., validating route: string in an interface):
Wrong-type literals:
number where string expected (route = 123)
boolean where string expected (route = true)
string where number expected (count = 'five')
string where boolean expected (enabled = 'yes')
Non-literal expressions:
template literal (route = `/path/${id}`)
variable reference (route = someVariable)
function call (route = getRoute())
property access (route = config.path)
Correct type:
matching string literal (route = '/orders')
falsy but valid literals ('', zero 0, false)
Why this matters: A common bug pattern is validating "is this a literal?" without checking "is this the RIGHT TYPE of literal?"
hasLiteralValue() returns true for 123, true, and 'string'
hasStringLiteralValue() returns true only for 'string'
When an interface specifies property: string, validation must reject numeric and boolean literals, not just non-literal expressions.
When you discover a bug, don't stop there. Explore related scenarios.
If your test name says "test" or "should work": STOP. What outcome are you actually verifying? Name it specifically.
If you're asserting toBeDefined() or toBeTruthy(): STOP. What value do you actually expect? Assert that instead.
If your assertion doesn't match your test title: STOP. Either fix the assertion or rename the test. They must agree.
If you're testing multiple concepts in one test: STOP. Split it. Future you debugging a failure will thank you.
If you found a bug and wrote one test: STOP. Bugs cluster. What related scenarios might have the same problem?
If you're skipping edge cases because "that won't happen": STOP. It will happen. In production. At 3 AM.
With TDD Process: This skill guides the RED phase—how to write the failing test well.
With Software Design Principles: Testable code follows design principles. Hard-to-test code often has design problems.