From the accelerator plugin.
Interactively stress-test a work item by grilling the user on scope, assumptions, acceptance criteria, edge cases, and dependencies to surface issues, gaps, and flawed assumptions before implementation is planned.
Install: `npx claudepluginhub atomicinnovation/accelerator --plugin accelerator`

Argument: `[work item number or path]`

This skill is limited to using the following tools:
!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh`
Review a work item through multiple quality lenses and collaboratively iterate based on findings. Use when the user wants to evaluate a work item before implementation or escalation.
Generates requirements-quality checklists validating completeness, clarity, consistency, and measurability of specs, plans, and tasks. Useful for pre-implementation quality gates, audits, or domain-specific reviews (UX, API, security, performance).
Generates custom checklists to validate feature requirements for clarity, completeness, consistency, and coverage before implementation.
!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh`
!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-context.sh stress-test-work-item`
!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agents.sh`
If no "Agent Names" section appears above, use these defaults: accelerator:reviewer, accelerator:codebase-locator, accelerator:codebase-analyser, accelerator:codebase-pattern-finder, accelerator:documents-locator, accelerator:documents-analyser, accelerator:web-search-researcher.
Work items directory: !`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh work meta/work`
You are tasked with stress-testing a work item by interviewing the user relentlessly about every aspect of it. Your goal is to find issues, inconsistencies, missing edge cases, flawed assumptions, and vague acceptance criteria before implementation is planned.
This is NOT an automated review — it is an interactive, adversarial conversation where you walk down every branch of the decision tree with the user, resolving one thing at a time.
When this command is invoked:
If a work item path or number was provided, read the work item. The argument may be a full path (e.g. `meta/work/0042-user-auth.md`) or a bare work item number (e.g. `0042` or `42`, resolved against `{work_dir}`). If the work item has a `parent` field, read the parent work item too.

If no work item path or number was provided, respond with:
I'll stress-test your work item. Please provide the path or work item number.
Example: `/stress-test-work-item {work_dir}/0042-user-auth.md`
Or by number: `/stress-test-work-item 42`
Run `/list-work-items` to see available work items.
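As a sketch of the resolution step described above, a bare number like `42` could be matched against files named `0042-*.md` in the work items directory. The function below is a hypothetical illustration; the plugin's actual resolution script is not shown in this document, and the naming scheme is an assumption based on the `0042-user-auth.md` example.

```shell
# Hypothetical sketch: resolve an argument that is either a path to a work
# item file or a bare number (42 or 0042) against a work items directory.
resolve_work_item() {
  arg="$1"
  work_dir="${2:-meta/work}"

  # Already a path to an existing file: use it as-is.
  if [ -f "$arg" ]; then
    printf '%s\n' "$arg"
    return 0
  fi

  # Reject anything that is not a bare number.
  case "$arg" in
    *[!0-9]*|'') return 1 ;;
  esac

  # Strip leading zeros, then zero-pad to four digits (42 -> 0042).
  num=${arg#"${arg%%[1-9]*}"}
  [ -n "$num" ] || num=0
  padded=$(printf '%04d' "$num")

  # Return the first file named like 0042-*.md in the work directory.
  for match in "$work_dir/$padded"-*.md; do
    if [ -e "$match" ]; then
      printf '%s\n' "$match"
      return 0
    fi
  done
  return 1
}
```

Note the leading-zero strip: passing `0042` straight to `printf '%04d'` would risk octal interpretation in some shells, so the sketch normalises to a plain decimal first.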
Then wait for the user's input.
- One question at a time (or a small, tightly related cluster)
- Walk the decision tree depth-first
- Self-answer from the codebase when possible
- Be adversarial but constructive
Work through these areas as the conversation naturally leads to them. Do not treat this as a checklist to run through mechanically.
Stop stress-testing when:
Do NOT stop just because the user seems tired of questions. If there are genuine issues remaining, flag them explicitly before wrapping up.
As you identify issues during the conversation, track them. Once the stress-testing is complete:
Here's what we found during the stress test:
**Issues to fix:**
- [Issue]: [What needs to change in the work item]
**Decisions confirmed:**
- [Decision]: [User confirmed this is intentional]
**Risks accepted:**
- [Risk]: [User acknowledges this and accepts it]
Would you like me to update the work item with these changes?
If the user agrees, edit the work item:
Change neither the frontmatter fields (`work_item_id`, `title`, `date`, `author`, `type`, `status`, `priority`, `parent`, `tags`) nor the body's **Type**:, **Status**:, **Priority**:, or **Author**: labels; those transitions are `/update-work-item`'s concern. After editing, summarise the changes made.
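For illustration, a work item's frontmatter carrying the protected fields listed above might look like the following. All values here are hypothetical; the document does not show the actual schema.

```
---
work_item_id: "0042"
title: "User auth"
date: 2024-01-15
author: jane
type: feature
status: in-review
priority: high
parent: "0040"
tags: [auth, security]
---
```

An edit made by this command may touch the body sections that were agreed during the stress test, but every field in this block stays exactly as found.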
- **This is a conversation, not a report**: the value is in the back-and-forth. Don't just list problems — dig into each one with the user until it's resolved.
- **Don't redesign the work item**: your job is to find problems, not to propose a different architecture. If you think the approach is fundamentally wrong, raise it as a concern and let the user decide. Do not rewrite Requirements to reflect an alternative approach.
- **Verify against reality**: use codebase agents to check whether the work item's technical assumptions are correct. The most valuable findings come from discovering that the code doesn't work the way the work item assumes.
- **Depth over breadth**: it's better to thoroughly stress-test the riskiest parts of the work item than to superficially cover everything.
- **Respect confirmed decisions**: if the user has explained their reasoning and confirmed a decision, don't circle back to it. Move on.
- **Edit conservatively**: when updating the work item, make the minimum changes needed to address what was agreed.
This skill sits in the work item lifecycle between review and planning:
- `/create-work-item` or `/extract-work-items` — create the work item
- `/refine-work-item` — decompose and enrich
- `/review-work-item` — automated multi-lens quality review
- `/stress-test-work-item` — interactive adversarial examination (this command)
- `/create-plan` — plan implementation from an approved work item

`/review-work-item` and `/stress-test-work-item` are complementary:

- `/review-work-item` gives broad, automated coverage through multiple quality lenses — good for catching structural issues and standards violations
- `/stress-test-work-item` goes deep through interactive conversation — good for finding logical inconsistencies, missing edge cases, flawed assumptions, and gaps that only surface when you trace through scenarios step by step

!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-instructions.sh stress-test-work-item`