Debates PHPUnit test review findings peer-to-peer within an Agent Teams wave via SendMessage, refining each stance through challenges, concessions, and justifications before producing the final output.
npx claudepluginhub shopwarelabs/ai-coding-tools --plugin test-writing

This skill is limited to using the following tools: SendMessage and mcp__plugin_test-writing_test-rules__get_rules.
Peer-to-peer debate of review findings within a single Agent Teams wave. You debate directly with co-reviewers via SendMessage, then produce your final stance.
Provided in spawn prompt by team-lead:
- own_findings: your findings from Wave 0 (reviewing skill output)
- peer_findings: findings from co-reviewers on shared files
- co_reviewers: list of co-reviewer names and shared files

Load debate-rules.md.
For each file, compare your own findings against the peer findings and decide, for each finding, whether to agree, challenge, or concede:
For challenges: call mcp__plugin_test-writing_test-rules__get_rules(ids={rule_id}) to load the detection algorithm, then apply it against the code. If the peer turns out to be right, prepare a concession instead.
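As a rough sketch of this comparison step (the finding shape and helper name here are hypothetical illustrations, not the skill's real data format):

```python
def classify_findings(own, peers):
    """Partition findings on one shared file by debate stance.

    Each finding is assumed to be a dict like
    {"rule_id": "...", "line": 42, "msg": "..."} -- a simplification.
    """
    own_keys = {(f["rule_id"], f["line"]) for f in own}
    peer_keys = {(f["rule_id"], f["line"]) for f in peers}
    return {
        "agree": own_keys & peer_keys,    # both reviewers flagged it
        "debate": peer_keys - own_keys,   # peer-only: verify the rule, then challenge or concede
        "defend": own_keys - peer_keys,   # own-only: expect challenges in round 1
    }
```

Findings that land in the "debate" bucket are the ones worth the get_rules verification call; the rest need no tool round-trip.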
For each co-reviewer, send ONE message via SendMessage(to: "{co_reviewer_name}") covering all shared files. Use the debate message format from output-format.md:
After sending, wait for co-reviewer responses.
If you received challenges from co-reviewers in round 1:
Send ONE response per co-reviewer. Then proceed to Phase 3.
If no challenges were received, or all challenges are conceded, skip round 2.
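The round-2 condition above reduces to a simple check, sketched here in Python for illustration (challenge and concession identifiers are assumed shapes):

```python
def round_two_needed(challenges_received, conceded_ids):
    """Round 2 runs only if at least one incoming challenge was not conceded."""
    outstanding = [c for c in challenges_received if c not in conceded_ids]
    return len(outstanding) > 0
```

In other words: no challenges, or full concession, means you skip straight to the final stance.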
Produce a final stance for each file using the format from output-format.md.
This is your output. Return it to the lead.
If a co-reviewer does not respond to your round 1 message, produce final stance from your own analysis and whatever peer findings you received in the input. Do not block.
If mcp__plugin_test-writing_test-rules__get_rules is unavailable, you cannot verify detection algorithms. Concede peer findings you cannot verify and note the limitation in your final stance.
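A minimal sketch of this fallback policy; the get_rules and applies parameters stand in for the real MCP tool and detection check, which is an assumption for illustration:

```python
def stance_for_peer_finding(finding, get_rules=None, applies=None):
    """Decide a stance on a peer finding, conceding when verification is impossible."""
    if get_rules is None:
        # Tool unavailable: cannot verify, so concede and note the limitation.
        return ("concede", "get_rules unavailable; finding not verified")
    rule = get_rules(ids=finding["rule_id"])
    if applies(rule, finding):
        return ("concede", "the detection algorithm confirms the finding")
    return ("challenge", "the detection algorithm does not reproduce the finding")
```

The key design choice mirrors the rule above: when in doubt (or without tooling), err toward conceding and surface the limitation rather than blocking or challenging blind.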