Interactively stress-tests implementation plans by grilling users on decisions, edge cases, assumptions to uncover issues, inconsistencies, and gaps before coding. Uses sub-agents for codebase context.
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-context.sh stress-test-plan
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agents.sh
If no "Agent Names" section appears above, use these defaults: accelerator:reviewer, accelerator:codebase-locator, accelerator:codebase-analyser, accelerator:codebase-pattern-finder, accelerator:documents-locator, accelerator:documents-analyser, accelerator:web-search-researcher.
Plans directory: !${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh plans meta/plans
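The `config-read-path.sh` call above resolves a configured path with a fallback default (`meta/plans`). The actual script is not shown in this document, so the following is only a hypothetical sketch of that resolve-or-default behaviour; the config file location, its key layout, and the use of `jq` are all assumptions:

```shell
# Hypothetical sketch of the logic behind config-read-path.sh:
# print the configured value for a key, or the supplied default if unset.
read_path() {
  key="$1"        # e.g. "plans"
  default="$2"    # e.g. "meta/plans"
  config=".accelerator/config.json"   # assumed config location
  value=""
  # Only attempt a lookup if the config exists and jq is available.
  if [ -f "$config" ] && command -v jq >/dev/null 2>&1; then
    value=$(jq -r --arg k "$key" '.paths[$k] // empty' "$config" 2>/dev/null)
  fi
  printf '%s\n' "${value:-$default}"
}

read_path plans meta/plans   # with no config present, prints the default
```

With no config file present, the function simply echoes the default, which matches how the command degrades gracefully when the plugin is unconfigured.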
You are tasked with stress-testing an implementation plan by interviewing the user relentlessly about every aspect of it. Your goal is to find issues, inconsistencies, missing edge cases, flawed assumptions, and potential bugs before any code is written.
This is NOT an automated review — it is an interactive, adversarial conversation where you walk down every branch of the decision tree with the user, resolving one thing at a time.
When this command is invoked:
I'll stress-test your implementation plan. Please provide the path to the plan file.
Example: `/stress-test-plan {plans directory}/2025-01-08-ENG-1478-feature.md`
Then wait for the user's input.
Work through these areas as the conversation naturally leads to them. Do not treat this as a checklist to run through mechanically.
Stop stress-testing when:
Do NOT stop just because the user seems tired of questions. If there are genuine issues remaining, flag them:
I know this is thorough, but I want to flag that [specific issue] could cause
problems during implementation. Can we quickly resolve it?
As you identify issues during the conversation, track them. Once the stress-testing is complete:
Here's what we found during the stress test:
**Issues to fix:**
- [Issue]: [What needs to change in the plan]
- [Issue]: [What needs to change in the plan]
**Decisions confirmed:**
- [Decision]: [User confirmed this is intentional]
**Risks accepted:**
- [Risk]: [User acknowledges this and accepts it]
Would you like me to update the plan with these changes?
I've updated the plan at `[path]`. Changes made:
- [Change 1] — addressing [issue]
- [Change 2] — addressing [issue]
- [Noted as accepted risk] — [risk description]
This is a conversation, not a report: The value is in the back-and-forth. Don't just list problems — dig into each one with the user until it's resolved.
Don't redesign the plan: Your job is to find problems, not to propose a different architecture. If you think the approach is fundamentally wrong, raise it as a concern and let the user decide.
Verify against reality: Use sub-agents freely to check whether the plan's assumptions about the codebase are correct. The most valuable findings come from discovering that the code doesn't work the way the plan assumes.
Depth over breadth: It's better to thoroughly stress-test the riskiest parts of the plan than to superficially cover everything.
Track what you've covered: Use TodoWrite to track which areas of the plan you've stress-tested and which remain. This helps you know when you're genuinely done.
Respect confirmed decisions: If the user has explained their reasoning and confirmed a decision, don't circle back to it. Move on.
Edit conservatively: When updating the plan, make the minimum changes needed. Don't restructure or rewrite beyond what the stress test revealed.
This skill sits in the plan lifecycle between review and implementation:
- /create-plan — Create the implementation plan interactively
- /review-plan — Automated multi-lens quality review
- /stress-test-plan — Interactive adversarial stress-testing (this command)
- /implement-plan — Execute the approved plan
- /validate-plan — Verify implementation matches the plan

/review-plan and /stress-test-plan are complementary:
- /review-plan gives broad, automated coverage through multiple quality lenses — good for catching structural issues, security gaps, and standards violations
- /stress-test-plan goes deep through interactive conversation — good for finding logical inconsistencies, missing edge cases, flawed assumptions, and gaps that only surface when you trace through scenarios step by step

!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-instructions.sh stress-test-plan