Generates test cases for a module or function following FIRST principles and Arrange-Act-Assert pattern. Detects test framework from project config. Use to bootstrap test coverage for existing code or as the Red phase of TDD.
From agent-triforce (install: `npx claudepluginhub artemiopadilla/agent-triforce --plugin agent-triforce`). This skill uses the workspace's default tool permissions.
Generate tests for: $ARGUMENTS
If no specific module or function is provided, ask the user which file or function to generate tests for.
Follow these steps:
SIGN IN:
1. Run the SIGN IN checklist from your agent file.
ANALYZE:
2. Read the target module and list its functions.
3. Skip private functions (names beginning with `_`, a single underscore); only public functions get generated tests.
SELECT TECHNIQUES:
4. For each public function, select test design technique(s) based on the function's characteristics:
| Signal in the code | Technique | What to generate |
|---|---|---|
| Numeric/date parameters, ranges, limits, thresholds | Boundary Value Analysis (BVA) | Tests at min, min+1, max-1, max, and one invalid boundary |
| Discrete valid categories (enums, roles, types, status) | Equivalence Partitioning (EP) | One test per valid partition + one per invalid partition |
| Multiple boolean conditions, complex if/elif, permission matrices | Decision Table | One test per unique condition combination |
| Lifecycle objects (status fields, workflow steps, FSMs) | State Transition | Each valid transition + key invalid transitions |
| Known failure patterns, historical bugs, unusual inputs | Error Guessing | Nulls, empty strings, Unicode, concurrent access, off-by-one |
Label each generated test with the technique it applies, for example:

```python
# Technique: BVA — testing boundaries of page_size parameter (1, 100, 0, 101)
```
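As a sketch of the BVA row above, assuming a hypothetical `list_items(page_size)` that accepts values 1–100 and raises `ValueError` outside that range (the function and its limits are illustrative, not part of this command):

```python
import pytest

# Hypothetical function under test: accepts page_size in [1, 100].
def list_items(page_size: int) -> list:
    if not 1 <= page_size <= 100:
        raise ValueError("page_size must be between 1 and 100")
    return ["item"] * page_size

# Technique: BVA -- min, min+1, max-1, max
@pytest.mark.parametrize("page_size", [1, 2, 99, 100])
def test_page_size_valid_boundaries(page_size):
    assert len(list_items(page_size)) == page_size

# Technique: BVA -- one invalid value on each side of the range
@pytest.mark.parametrize("page_size", [0, 101])
def test_page_size_invalid_boundaries(page_size):
    with pytest.raises(ValueError):
        list_items(page_size)
```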
DETECT FRAMEWORK:
5. Determine the test framework from project configuration:
- Python: `pyproject.toml` (`[tool.pytest]`), `pytest.ini`, or `setup.cfg` -> use pytest
- JavaScript/TypeScript: `package.json` listing vitest -> use Vitest; listing jest -> use Jest
GENERATE:
7. Determine the test file location following project conventions:
- Python: `tests/` mirroring the `src/` structure (e.g., `src/auth/token.py` -> `tests/auth/test_token.py`)
- JavaScript/TypeScript: `tests/` mirroring `src/`, or co-located `*.test.ts` files (match the existing pattern)
Name each test case `TC-{feature}-{NNN}` and link it to the acceptance criterion (AC) it verifies:
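The mirrored-path convention above can be expressed as a small helper (a sketch; `to_test_path` is a hypothetical name used only for illustration):

```python
from pathlib import Path

def to_test_path(source: str) -> str:
    """Map a source file to its mirrored test file,
    e.g. src/auth/token.py -> tests/auth/test_token.py."""
    relative = Path(source).relative_to("src")   # drop the leading src/
    test_name = f"test_{relative.name}"          # prefix the filename
    return (Path("tests") / relative.parent / test_name).as_posix()
```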
```python
def test_page_size_at_maximum():
    """TC-pagination-003: BVA max boundary for page_size.

    Verifies: pagination-AC-001

    GIVEN page_size is 100 (maximum allowed)
    WHEN the list endpoint is called
    THEN exactly 100 results are returned.
    """
```
Structure every test body with Arrange-Act-Assert:

```python
def test_function_does_something():
    # Arrange
    input_data = create_valid_input()
    # Act
    result = function_under_test(input_data)
    # Assert
    assert result == expected_outcome
```
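The techniques from step 4 also drive parametrization. As a sketch of the Decision Table row, each unique condition combination becomes one case; `can_edit` is a hypothetical permission check invented for this example:

```python
import pytest

# Hypothetical rule: editing requires an active account
# that is either the owner or an admin.
def can_edit(is_owner: bool, is_admin: bool, is_active: bool) -> bool:
    return is_active and (is_owner or is_admin)

# Technique: Decision Table -- one test per unique condition combination
@pytest.mark.parametrize(
    "is_owner, is_admin, is_active, expected",
    [
        (True,  False, True,  True),   # owner, active
        (False, True,  True,  True),   # admin, active
        (False, False, True,  False),  # neither role
        (True,  False, False, False),  # owner, but inactive
        (False, True,  False, False),  # admin, but inactive
        (False, False, False, False),  # no role, inactive
    ],
)
def test_can_edit_decision_table(is_owner, is_admin, is_active, expected):
    assert can_edit(is_owner, is_admin, is_active) is expected
```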
Prepend a provenance header to each generated test file:

```python
# Generated tests -- require human review before merge
# Generator: /generate-tests
# Source: {path to source file}
# Date: {YYYY-MM-DD}
# TODO: verify expected value when the correct output cannot be inferred from the spec
```

FRAMEWORK IDIOMS:
13. Use framework-specific idioms:
- pytest: Use fixtures, parametrize for multiple cases, pytest.raises for exceptions
- Vitest/Jest: Use describe/it blocks, beforeEach/afterEach, expect().toThrow()
- Match the project's existing test style if tests already exist
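A minimal sketch of the pytest idioms above, using a hypothetical `parse_token` function as the unit under test:

```python
import pytest

# Hypothetical function under test.
def parse_token(raw: str) -> dict:
    if not raw:
        raise ValueError("empty token")
    name, _, value = raw.partition("=")
    return {name: value}

# Fixture: shared, reusable test input
@pytest.fixture
def valid_token() -> str:
    return "user=alice"

def test_parse_token_returns_mapping(valid_token):
    assert parse_token(valid_token) == {"user": "alice"}

def test_parse_token_rejects_empty_input():
    # pytest.raises asserts that the expected exception is raised
    with pytest.raises(ValueError):
        parse_token("")
```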
TIME OUT -- Generated Tests Review (DO-CONFIRM):
SIGN OUT:
14. Report what was generated:
- Test file path
- Number of tests generated per function
- Private functions skipped, with a count
- Branches that need manual test authoring (if any)
- Command to run the generated tests (e.g., pytest tests/auth/test_token.py -v)
15. Run the SIGN OUT checklist from your agent file