Creates and manages unit and integration tests by analyzing the codebase, auto-detecting test frameworks, and generating tests that follow project conventions.
```
npx claudepluginhub pssah4/digital-innovation-agents --plugin digital-innovation-agents
```

This skill uses the workspace's default tool permissions.
Generates unit, integration, component, and e2e test suites with mocking strategies, edge case coverage, descriptive naming, and CI integration patterns. Activates on 'write tests', 'unit tests', 'mocking' requests.
Provides test design patterns, coverage strategies (80-100% targets), types (unit/integration/E2E), organization, and best practices for comprehensive test suites. Use for new suites, coverage improvement, or test design.
Creates tests that fit seamlessly into the existing codebase. Detects the framework, patterns, and conventions automatically from the project.
/testing writes tests for a specific backlog item. Run the team-workflow check (full rules: skills/project-conventions/references/team-workflow.md):
Identify the active item from the prompt or via AskUserQuestion. Tests usually accompany a FEAT, FIX, or IMP that just got coded; continue on the same item branch.
Verify the branch matches feature/<item-id-lower>-<slug>. On a
wrong branch, AskUserQuestion to switch.
Skill-triggered GitHub integration (idempotent):

```
python3 tools/github-integration/flow.py create-issue --item <ID>
python3 tools/github-integration/flow.py open-draft-pr --item <ID>
```

At Handoff Ritual end, tag the phase:

```
python3 tools/github-integration/flow.py tag-phase --item <ID> --phase test
```
Write .git/dia-active-skill so subsequent invocations stay silent.
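A minimal sketch of that marker write, assuming the file simply records the active skill's name (the real format is defined by the skill suite, not shown here):

```typescript
import { writeFileSync } from "node:fs";

// Assumption: .git/dia-active-skill holds just the active skill's name.
writeFileSync(".git/dia-active-skill", "testing\n");
```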
New tests count as a doc/code change and must be bound to an existing artifact. Before the first test is written, a FEATURE, IMP, or FIX ID must be in scope.
Exception: read-only test analysis (reading the coverage report, identifying gaps, reading existing tests) does not need triage.
If the assignment cannot be derived from the user prompt, the skill asks one short question before the first new test (in the user's working language; the English wording below is a template):
"Does this test run belong to a FEATURE, an IMP, or a FIX? Please name the ID."
Backlog row and triage details:
skills/project-conventions/references/graph-invariants.md,
section "Artifact triage at entry point".
/testing shares the verify gate with /coding. No completion claim
without fresh verification evidence in the current message.
Hard threshold for "all green": defined in _devprocess/rules/technical.md (or in the Coverage section below). Phrases claiming completion are forbidden without fresh verification.
The skill executes the test command IN THIS MESSAGE before any completion claim. Cached output, stale logs, and "I ran it last session" are not evidence.
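A minimal sketch of what "fresh evidence" means in practice, assuming an npm-based project (the command and flags vary by framework):

```typescript
import { execSync } from "node:child_process";

// Run the suite in THIS message and quote its output; never a cached log.
const output = execSync("npm test -- --coverage", { encoding: "utf8" });
console.log(output);
```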
Before writing a single test, analyze the project:
1. Detect test framework:
- package.json -> jest/vitest/mocha? (scripts.test, devDependencies)
- pyproject.toml -> pytest? (tool.pytest)
- Cargo.toml -> Rust built-in?
- Existing test files -> which pattern?
2. Detect existing test structure:
- Where are tests? (tests/, __tests__/, src/**/*.test.ts, *.spec.ts?)
- Naming convention? (.test.ts, .spec.ts, _test.py?)
- Is there conftest.py / jest.config.ts / vitest.config.ts?
- Are there test utilities, fixtures, factories?
3. Adopt existing patterns:
- How are mocks created? (jest.mock, vi.mock, unittest.mock?)
- How is async handled?
- Which assertions are used?
- Are there shared test helpers?
4. What is NOT tested? (identify gaps)
Always follow existing patterns. Don't introduce new test frameworks or patterns unless the project has none yet.
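A hedged sketch of step 1 for JavaScript projects; detectJsFramework is an illustrative name, not part of the skill's API:

```typescript
import { readFileSync } from "node:fs";

// Guess the JS test framework from package.json (step 1 above).
function detectJsFramework(pkgPath = "package.json"): string | null {
  const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  for (const fw of ["vitest", "jest", "mocha"]) {
    if (deps[fw]) return fw; // declared dependency wins
    if (pkg.scripts?.test?.includes(fw)) return fw; // fall back to the test script
  }
  return null; // unknown -- inspect existing test files or other manifests
}
```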
```
          /\
         /E2E\          Few, slow, expensive
        /------\
       / Integr. \      Moderate count
      /------------\
     /  Unit Tests  \   Many, fast, cheap
    /________________\
```
Focus of this skill: Integration Tests (primary) and Unit Tests (either as TDD fallback or gap-filling -- see next section). E2E tests are a separate topic.
When /coding runs in TDD mode (see coding/SKILL.md Phase 3b), unit
tests for new modules already exist when this skill runs. In that case,
/testing focuses on three things, in this priority:
1. Integration tests -- tests that exercise multiple modules together (see the sketch after this list).
2. Edge-case gaps -- even after TDD, gaps can remain; /testing scans the TDD-generated test code and suggests missing cases.
3. Coverage report -- a report against the targets (85% line / 80% branch / 90% function). Gaps are listed but not auto-filled; the user decides whether trivial code actually needs testing.
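A hypothetical integration test for priority 1: two modules exercised together through their real wiring (ToolRegistry and Dispatcher are illustrative names, not from this project):

```typescript
describe("Dispatcher + ToolRegistry integration", () => {
  it("routes a dispatch call to the registered tool", async () => {
    const registry = new ToolRegistry();
    const tool = createMockTool({ name: "read-file" });
    registry.registerTool(tool);
    const dispatcher = new Dispatcher(registry);

    await dispatcher.dispatch("read-file", { path: "README.md" });

    // Only the tool boundary is mocked; the routing path is real.
    expect(tool.execute).toHaveBeenCalledWith({ path: "README.md" });
  });
});
```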
If /coding ran WITHOUT TDD mode (fallback), /testing takes over unit test creation as well (its historical role).
In fallback mode, /testing analyzes the new modules and creates unit
tests following the AAA pattern and FIRST principles, just like the
unit-test sections below.
Every test follows the AAA pattern (Arrange, Act, Assert):
```typescript
// Example (TypeScript/Jest -- adapt to project framework)
describe('ToolRegistry', () => {
  describe('registerTool', () => {
    it('should register a tool and make it retrievable by name', () => {
      // Arrange
      const registry = new ToolRegistry();
      const tool = createMockTool({ name: 'read-file' });

      // Act
      registry.registerTool(tool);

      // Assert
      expect(registry.getTool('read-file')).toBe(tool);
    });

    it('should throw when registering duplicate tool names', () => {
      // Arrange
      const registry = new ToolRegistry();
      const tool = createMockTool({ name: 'read-file' });
      registry.registerTool(tool);

      // Act & Assert
      expect(() => registry.registerTool(tool))
        .toThrow(/already registered/);
    });
  });
});
```
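The example relies on a createMockTool helper. A minimal sketch of such a test factory; the Tool shape is an assumption, so match the project's real interface:

```typescript
interface Tool {
  name: string;
  execute: (input: unknown) => Promise<unknown>;
}

// Test factory: sensible defaults, overridable per test.
function createMockTool(overrides: Partial<Tool> = {}): Tool {
  return {
    name: "mock-tool",
    execute: jest.fn().mockResolvedValue(undefined),
    ...overrides,
  };
}
```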
Read references/test-checklist.md for the complete checklist.
Short version:
Follow the existing project pattern. If none exists:
- Unit tests: {module}.test.ts or {module}.spec.ts
- Integration tests: {module}.integration.test.ts
- Test directory: tests/

/testing {file or module}
1. Analyze the file and its dependencies
2. Identify testable functions/methods
3. Recognize existing test patterns in the project
4. Create tests (AAA pattern, FIRST principles)
5. Run tests and verify
6. Check coverage of new tests
/testing
1. Read the feature spec (FEATURE-*.md) for Success Criteria
2. Identify all new/changed files
3. Create integration tests for module interactions
4. Fill unit-test gaps if any
5. Verify Success Criteria from the feature spec
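One way to make step 5 auditable is to name integration tests after the spec's Success Criteria; the FEATURE ID and criterion text below are hypothetical:

```typescript
// Each Success Criterion becomes one traceable test case.
describe("FEATURE-042: tool registry", () => {
  it("SC1: a registered tool is retrievable by name", () => {
    const registry = new ToolRegistry();
    const tool = createMockTool({ name: "read-file" });
    registry.registerTool(tool);
    expect(registry.getTool("read-file")).toBe(tool);
  });
});
```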
| Metric | Target | Minimum |
|---|---|---|
| Line Coverage | 85% | 70% |
| Branch Coverage | 80% | 65% |
| Function Coverage | 90% | 75% |
These are guidelines. Project-specific targets in CLAUDE.md or feature
specs take precedence.
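If the project uses Jest, these targets can be enforced in the config. A sketch; vitest offers an equivalent coverage.thresholds option:

```typescript
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Mirrors the target column above; the run fails below these values.
    global: { lines: 85, branches: 80, functions: 90 },
  },
};

export default config;
```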
Read references/test-anti-patterns.md for details.
Short version: trivial assertions like expect(1+1).toBe(2) help no one.
Before writing tests, ALWAYS run the project analysis described above.
When tests fail, a fix-loop starts. The user decides how to proceed.
```
=== Test Result ===
Passed: {N} tests
Failed: {N} tests
Coverage: {line}% / {branch}% / {function}%

Failed tests:
- {test name}: {short error description}
  Cause: code bug / wrong test expectation / missing implementation
  Fix effort: S/M/L
  File: {src/path/file.ts} or {tests/path/test.ts}

Coverage gaps:
- {src/path/file.ts}: {function} not tested
```
How should I proceed?
A) Fix all findings automatically
-> I fix everything, retest, repeat until all tests are green
B) Approve fixes one by one
-> I show each fix before implementation
C) Only adjust tests (the code is correct, the tests are wrong)
D) Abort -- I want to look at findings manually first
For each fix:
After all fixes: run the full test suite again.
```
=== Re-Test Result ===
Before: {N} failed
After: {N} failed

{If still failures: back to step 1}
{If all green:}
All tests passed! Coverage: {line}% / {branch}% / {function}%
```
The loop repeats until all tests are green or the user aborts.
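Schematically, the loop looks like this; illustrative pseudocode only, where askUser and applyFixes stand in for the interactive steps above:

```typescript
type Choice = "fix-all" | "one-by-one" | "tests-only" | "abort";

// Hypothetical helpers for the interactive parts of the loop.
declare function askUser(): Promise<Choice>;
declare function applyFixes(choice: Choice): Promise<void>;

async function fixLoop(runSuite: () => Promise<{ failed: number }>) {
  let result = await runSuite();
  while (result.failed > 0) {
    const choice = await askUser();        // options A-D from the prompt above
    if (choice === "abort") return result; // user inspects findings manually
    await applyFixes(choice);
    result = await runSuite();             // full re-test after every round
  }
  return result; // all green
}
```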
After a successful test run, follow the backlog-first writeback order:
- Update src/ARCHITECTURE.map and write the JSDoc header (per /coding rules) if code fixes were needed during the test run.
- Run /consistency-check mode A at the end of the skill phase. It catches orphan tests (no FEATURE/FIX/IMP backlog row), missing coverage entries, dashboard count mismatches, and dead links before the handoff. The Handoff Ritual reports the result.

/testing always runs this ritual at the end, regardless of how it was started (directly or via /dia-guide).
Produced / updated:
- tests/{paths}: {new or updated test files}
- Coverage report: {line}% / {branch}% / {function}%
- Fix-loop status: {N iterations, N fixes applied}
- _devprocess/requirements/features/FEATURE-*.md: {test-status updates}
- _devprocess/context/BACKLOG.md: {new coverage items added per BACKLOG-TEMPLATE.md, dashboard refreshed}
Append a new entry to _devprocess/context/HANDOFFS.md with:
Run the phase-end commit per skills/project-conventions/references/team-workflow.md
section "Phase-end commit (binding)". The block fires the binding
branch-and-item check, stages every artefact this phase produced
(test files, coverage configuration, FEATURE spec test-status
updates, BACKLOG row updates), commits with the canonical message,
sets the phase tag, and opens a draft PR if one does not exist yet.
Canonical commit message for TESTING:

```
test: <ITEM-ID> testing complete

<one-line summary: N tests added, coverage L%/B%/F%>

Refs: <ITEM-ID>
```
After the commit lands, run:

```
python3 tools/github-integration/flow.py tag-phase --item <ID> --phase test
```
Skip the commit silently if the working tree has no changes.
Ask the user:
"Tests are complete and all green. Coverage: {line}% / {branch}% / {function}%. Recommended next:
/security-audit.Shall I start
/security-auditnow, or would you like to review first?"
On agreement ("yes" / "go" / "next") or when running inside
/dia-guide:
-> Start /security-audit and pass the handoff context
On rejection ("no" / "stop" / "I want to check first"):
-> Pause and wait for user instruction
Keywords: Tests, unit tests, integration tests, test coverage, testing, TDD, coverage gaps, test pyramid, fix-loop, re-test, regression, handoff