`npx claudepluginhub raddue/crucible`

This skill uses the workspace's default tool permissions.
Reads the completed implementation and writes up to 5 tests designed to make it break. Targets edge cases, boundary conditions, and failure modes the implementer didn't anticipate.
Writes adversarial tests that stress failure paths for hardening error handling, stress-testing assumptions, validating boundaries, and hunting silent failures.
Hunts cross-component bugs in full feature implementations by dispatching 5 parallel adversarial test dimensions against complete diffs before quality gates.
Uncovers edge cases, test coverage gaps, and bugs via phased analysis of code and tests. Guides gap identification, iterative test writing, and bug documentation.
Announce at start: "I'm using the adversarial-tester skill to find weaknesses in this implementation."
Skill type: Rigid -- follow exactly, no shortcuts.
Model: Opus (adversarial reasoning about failure modes requires creative analytical thinking)
All subagent dispatches use disk-mediated dispatch. See shared/dispatch-convention.md for the full protocol.
| Agent | Question | Output | Scope |
|---|---|---|---|
| Red-team | "What's wrong with this artifact?" | Written findings (Fatal/Significant/Minor) | Attacks designs, plans, code quality |
| Test Gap Writer | "What known gaps need filling?" | Executable tests (expected to PASS) | Fills reviewer-identified holes |
| Adversarial Tester | "What runtime behavior will break?" | Executable tests (may PASS or FAIL) | Finds unknown weaknesses in behavior |
Read the full diff of the implementation changes. Identify:
Brainstorm 8-10 ways the implementation could break at runtime. Think like an attacker:
Rank each candidate by:
Select the top 5. If fewer than 5 candidates are meaningful, write fewer -- don't pad with trivial tests.
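The rank-and-select step above can be sketched as a simple likelihood-times-impact score. The 3/2/1 weights and the candidate fields below are illustrative assumptions, not something this skill defines:

```python
# Sketch of the rank-and-select step. The 3/2/1 scoring and the
# candidate fields are assumptions, not part of the skill itself.
SCORE = {"High": 3, "Medium": 2, "Low": 1}

def select_top(candidates, limit=5):
    """Rank failure-mode candidates by likelihood x impact, keep top `limit`."""
    ranked = sorted(
        candidates,
        key=lambda c: SCORE[c["likelihood"]] * SCORE[c["impact"]],
        reverse=True,  # highest score first; ties keep original order
    )
    return ranked[:limit]

candidates = [
    {"title": "Empty input", "likelihood": "High", "impact": "Medium"},
    {"title": "Unicode path", "likelihood": "Low", "impact": "Low"},
    {"title": "Concurrent writes", "likelihood": "Medium", "impact": "High"},
]
print([c["title"] for c in select_top(candidates)])
# -> ['Empty input', 'Concurrent writes', 'Unicode path']
```

With fewer than 5 meaningful candidates, `select_top` simply returns them all, matching the "don't pad with trivial tests" rule.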
For each selected failure mode, write one focused test that:
Run each test and record the result:
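As an illustration of the test-writing and test-running steps, a focused adversarial test might look like the sketch below. Here `parse_limit` is a hypothetical function standing in for the real implementation under test, and the boundaries it probes are assumptions:

```python
import unittest

# Hypothetical implementation under test -- stands in for real project code.
def parse_limit(raw):
    """Parse a user-supplied limit; intended to reject non-positive values."""
    value = int(raw)
    if value <= 0:
        raise ValueError("limit must be positive")
    return value

class TestParseLimitAdversarial(unittest.TestCase):
    """One focused test per failure mode."""

    def test_zero_is_rejected(self):
        # Attack vector: caller passes the boundary value 0.
        with self.assertRaises(ValueError):
            parse_limit("0")

    def test_whitespace_padded_input(self):
        # Attack vector: int() tolerates surrounding whitespace --
        # is that behavior intentional in the implementation?
        self.assertEqual(parse_limit(" 7 "), 7)
```

Each test targets exactly one failure mode, so a FAIL result maps directly to one entry in the report below.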
Output the ADVERSARIAL TEST REPORT (see Report Format below).
## ADVERSARIAL TEST REPORT
### Summary
- Failure modes identified: N
- Tests written: N
- Tests PASSING (implementation robust): N
- Tests FAILING (weaknesses found): N
- Tests ERROR (discarded): N
### Failure Mode 1: [Title]
- **Attack vector:** [how this breaks]
- **Likelihood:** High/Medium/Low
- **Impact:** High/Medium/Low
- **Test:** `TestClassName.TestMethodName`
- **Result:** PASS/FAIL
- **If FAIL -- fix guidance:** [what the implementer should change]
[repeat for each failure mode]
Must NOT do:
When used standalone, after running the tests:
When used within the build pipeline, the orchestrator handles outcome routing (see build skill Phase 3).
The orchestrator (not this skill) decides whether to skip. When used standalone, use your judgment:
When dispatched by the build pipeline:
Fix loop mechanics:
Orchestrator skip conditions:
- Diff touches only non-code files (.md, .json, .yaml, .uss, .uxml)

This skill produces adversarial tests. When used standalone, the tests themselves are the quality mechanism -- no additional quality gate needed. When used within the build pipeline, the orchestrator handles outcome routing.
- `crucible:build` (Phase 3, after test gap writer)
- `crucible:test-driven-development` patterns for test writing
- `crucible:code-review` (lightweight review if fix touches 3+ files)
- `break-it-prompt.md` (for subagent dispatch)