ALWAYS invoke this skill before writing tests or when learning the testing approach.
<quick_start>
PREREQUISITE: Read the methodology reference before writing any test:
${SKILL_DIR}/references/methodology.md — 5-stage router, 5 factors, 7 exceptions, test double taxonomy. Then follow the spec-tree workflow below.
</quick_start>
<spec_tree_workflow>
<step name="load_context">Step 1: Load tree context
Check for <SPEC_TREE_FOUNDATION> and <SPEC_TREE_CONTEXT> markers. If absent, invoke /understanding and /contextualizing first.
This loads the spec tree's foundation and node context needed by the steps below.
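A minimal sketch of the marker check described above (the marker names come from this step; the transcript-scanning helper itself is hypothetical):

```python
def tree_context_loaded(conversation: str) -> bool:
    """Return True if both spec-tree context markers are present (sketch)."""
    return ("<SPEC_TREE_FOUNDATION>" in conversation
            and "<SPEC_TREE_CONTEXT>" in conversation)
```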
Step 2: Extract assertions from the spec
Parse the target spec node. Extract all typed assertions and their test links:
| Type | Pattern in spec | Test strategy |
|---|---|---|
| Scenario | Given ... when ... then ... ([test](...)) | Example-based |
| Mapping | {input} maps to {output} ([test](...)) | Parameterized |
| Conformance | {output} conforms to {standard} ([test](...)) | Tool validation |
| Property | {invariant} holds for all {domain} ([test](...)) | Property-based |
| Compliance | ALWAYS/NEVER: {rule} ([review]/[test](...)) | Review or test |
Record each assertion with its text, its type, and its test link (or the absence of one).
Step 3: Analyze evidence gaps
For each assertion:
| Status | Condition | Action |
|---|---|---|
| Covered | Test link exists and resolves to a file | Verify in Step 4 |
| Missing link | No ([test](...)) in the assertion | Must add test link |
| Broken link | Link present but file doesn't exist | Must create test file |
| No assertions | Spec has no typed assertions | Spec needs work first — do not write tests |
Report the evidence gap summary before proceeding.
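The status table above can be sketched as a small classifier (the helper name and regex are assumptions; the statuses match the table):

```python
import re
from pathlib import Path

TEST_LINK = re.compile(r"\(\[test\]\(([^)]+)\)\)")

def gap_status(assertion_line: str, spec_dir: Path) -> str:
    """Map one assertion line to a Step 3 status (sketch)."""
    m = TEST_LINK.search(assertion_line)
    if m is None:
        return "Missing link"
    # A link only counts as evidence if the file it points at exists.
    return "Covered" if (spec_dir / m.group(1)).exists() else "Broken link"
```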
</step> <step name="route_methodology">Step 4: Route each assertion through the methodology
For each assertion that needs a test, apply the 5-stage router from ${SKILL_DIR}/references/methodology.md:
Document the routing decision for each assertion.
</step> <step name="generate_scaffolds">Step 5: Generate test scaffolds
For each assertion needing a new test:
Place each test file in the node's tests/ directory, named test_{slug}_unit.py, test_{slug}_integration.py, etc. Delegate language-specific patterns to /testing-python or /testing-typescript.
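The naming convention can be sketched as a path generator (the function name and default levels are assumptions; the file-name pattern is the one above):

```python
def scaffold_paths(slug: str, levels: tuple[str, ...] = ("unit", "integration")) -> list[str]:
    """Derive test file paths for a node from its slug (sketch)."""
    return [f"tests/test_{slug}_{level}.py" for level in levels]
```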
Specified nodes: If the implementation module doesn't exist yet, test files will fail on import. This is expected — the test is a declaration of what the implementation must satisfy. Add the node's path to spx/EXCLUDE and run the project's sync command so the quality gate excludes these tests. Remove the entry when implementation begins. See ${SKILL_DIR}/../understanding/references/excluded-nodes.md.
Step 6: Update spec assertion links
After creating test files, update the spec to add ([test](tests/...)) links for each new assertion-test pair. Every assertion must link to at least one test file.
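A minimal sketch of the link update, assuming the ([test](...)) pattern used throughout this skill (the helper name is hypothetical):

```python
def link_assertion(assertion: str, test_path: str) -> str:
    """Append a ([test](...)) link unless the assertion already has one (sketch)."""
    if "([test](" in assertion:
        return assertion  # idempotent: never double-link
    return f"{assertion} ([test]({test_path}))"
```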
Step 7: Report evidence summary
Report which assertions have tests, which do not, and which are stale:
| # | Assertion | Type | Level | Test File | Status |
| - | --------- | -------- | ----- | --------- | ------- |
| 1 | {text} | Scenario | 1 | {file} | Covered |
| 2 | {text} | Property | 1 | — | Missing |
</step>
</spec_tree_workflow>
<cross_cutting_assertions>
When an assertion lives in an ancestor node (cross-cutting), determine where the test evidence should go:
If the behavior is observable in a single child node, place the test in that child's tests/ directory. If it spans multiple children, place the test in a shared tests/ directory at a higher level.
</cross_cutting_assertions>
<success_criteria>
Testing is complete when every typed assertion in the spec links to at least one test file, every link resolves to an existing file, and the evidence summary reports no Missing or Broken entries.
</success_criteria>