Review a work item through multiple quality lenses and collaboratively iterate based on findings. Use when the user wants to evaluate a work item before implementation or escalation.
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-context.sh review-work-item
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agents.sh
If no "Agent Names" section appears above, use these defaults: accelerator:reviewer, accelerator:codebase-locator, accelerator:codebase-analyser, accelerator:codebase-pattern-finder, accelerator:documents-locator, accelerator:documents-analyser, accelerator:web-search-researcher.
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-review.sh work-item
Work items directory: !${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh work meta/work
Work item reviews directory: !${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh review_work meta/reviews/work
You are tasked with reviewing a work item through quality lenses and then collaboratively iterating the work item based on findings.
When this command is invoked:
Check whether a work item path or ID was provided; if so, invoke the resolver:
${CLAUDE_PLUGIN_ROOT}/skills/work/scripts/work-item-resolve-id.sh <argument>
The resolver respects work.id_pattern and accepts paths, full IDs
(PROJ-0042), and bare numbers.
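For instance (the file name and ID pattern below are illustrative):

```bash
# Illustrative invocations: the resolver accepts all three forms
${CLAUDE_PLUGIN_ROOT}/skills/work/scripts/work-item-resolve-id.sh meta/work/0042-improve-search.md
${CLAUDE_PLUGIN_ROOT}/skills/work/scripts/work-item-resolve-id.sh PROJ-0042
${CLAUDE_PLUGIN_ROOT}/skills/work/scripts/work-item-resolve-id.sh 42
```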
If no work item path or number was provided, respond with:
I'll help you review a work item. Please provide:
1. The path to the work item file (e.g., `{work_dir}/0042-my-work-item.md`)
2. (Optional) A work item number shorthand (e.g., `/review-work-item 42`)
3. (Optional) Focus areas to emphasise (e.g., "focus on testability")
Tip: Use `/list-work-items` to find the work item you want to review.
Then wait for the user's input.
| Lens | Lens Skill | Focus |
|---|---|---|
| Clarity | clarity-lens | Unambiguous referents, internal consistency, jargon handling |
| Completeness | completeness-lens | Section presence, content density, type-appropriate content |
| Dependency | dependency-lens | Implied couplings not captured — blockers, consumers, external systems |
| Scope | scope-lens | Right-sized, single coherent unit of work; decomposition; orthogonality |
| Testability | testability-lens | Measurable criteria, verifiable outcomes, verification framing |
Note: completeness flags an absent Dependencies section; dependency flags an empty or underspecified section whose contents fail to name every coupling the work item implies.
- Read the work item file FULLY — never use limit/offset
- Parse the frontmatter to note type (bug, story, spike, epic, etc.) and status
- Read any documents referenced in the References section — these provide context the lenses may need; do not read source code
- Check for existing reviews: Glob for review documents matching {work_reviews_dir}/{work-item-stem}-review-*.md. If any are found, the new review goes into a new file with the next review number (e.g., -review-2.md); previous review files are never modified.
By default, run every lens registered in BUILTIN_WORK_ITEM_LENSES unless the
user has provided focus arguments or config restricts the selection. The five
work item lenses cover orthogonal concerns, so there is no relevance-based
auto-selection.
If the user provided focus arguments: restrict the active set to the lenses matching those focus areas.

If no focus arguments were provided, run all built-in work item lenses unless:
- a lens is named in disabled_lenses — remove it from the active set
- core_lenses has filtered this to a subset (see below)

When core_lenses is set in config, apply it as the minimum required set; add any remaining non-disabled lenses up to max_lenses. This means users who previously pinned core_lenses to the Phase 4 work item lenses (completeness, testability, clarity) will also receive scope and dependency on upgrade, unless they add those names to disabled_lenses or set max_lenses to their subset size.
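A minimal sketch of this selection rule, assuming the config values are available as shell variables (variable names mirror the config keys; the sketch itself is not part of the plugin):

```bash
# Sketch only: assumes core_lenses, disabled_lenses, max_lenses hold the config values
all_lenses="clarity completeness dependency scope testability"
active=""
for lens in $core_lenses $all_lenses; do                 # core set first, then the rest
  case " $disabled_lenses " in *" $lens "*) continue ;; esac   # drop disabled lenses
  case " $active " in *" $lens "*) continue ;; esac            # skip duplicates
  [ "$(echo $active | wc -w)" -ge "${max_lenses:-5}" ] && break  # respect max_lenses
  active="$active $lens"
done
echo "Active lenses:$active"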
Present the selection briefly — enumerate the chosen lenses with a one-line focus each — then wait for confirmation before spawning reviewers. The confirmation gate is preserved even though the default always selects every lens; the gate is useful when focus args or config have narrowed the set.
Example (default path, no focus args, no core_lenses restriction):
I'll review this work item through all work item lenses (clarity, completeness,
dependency, scope, testability). Shall I proceed?
Wait for confirmation before spawning reviewers.
For each selected lens, spawn the {reviewer agent} agent with a prompt that includes the paths to the lens skill and output format files. Do NOT read these files yourself — the agent reads them in its own context.
Compose each agent's prompt following this template:
You are reviewing a work item through the [lens name] lens.
## Context
The work item is at [path]. Read it fully.
Also read any source documents listed in the work item's References section.
## Analysis Strategy
1. Read your lens skill and output format files (see paths below)
2. Read the work item file fully
3. Read referenced documents from the work item's References section if present
4. Evaluate the work item through your lens, applying each key question
5. Reference specific work item sections in your findings using the `location`
field (e.g., "Acceptance Criteria", "Requirements", "Frontmatter: type")
IMPORTANT: Do not evaluate the codebase — work item content (and any documents
it explicitly references) is the sole artefact under review. Do not run
codebase exploration agents or read source files unless the work item's
References section explicitly links to them.
## Lens
Read the lens skill at the path listed in the Lens Catalogue table in the
review configuration above. If no review configuration is present, use:
${CLAUDE_PLUGIN_ROOT}/skills/review/lenses/[lens]-lens/SKILL.md
## Output Format
Read the output format at:
${CLAUDE_PLUGIN_ROOT}/skills/review/output-formats/work-item-review-output-format/SKILL.md
IMPORTANT: Return your analysis as a single JSON code block. Do not include
prose outside the JSON block.
Spawn all selected agents in parallel using the Task tool with
subagent_type: "!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agent-name.sh reviewer".
IMPORTANT: Wait for ALL review agents to complete before proceeding.
Handling malformed agent output:
If an agent's response is not a clean JSON block, apply this extraction strategy:
1. Look for a JSON code block fenced with triple backticks (optionally with a json language tag)
2. If found, extract and parse the content within the fences
3. If the extracted JSON is valid, use it normally
4. If no JSON code block is found, or the JSON within it is invalid, apply the fallback: treat the agent's entire output as a single finding with "suggestion" severity (marked synthetic: true), attributed to that agent's lens

Note: "suggestion" severity is used here (not "major" as in review-plan) so a single flaky agent cannot deterministically force a REVISE verdict when work_item_revise_severity is major or higher.
When falling back, warn the user that the agent's output could not be parsed and present the raw agent output so the user can see what the agent found. Include remediation guidance: "Try re-running with a narrower lens selection, or file a bug with the raw output above."
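A rough sketch of the extraction and fallback, assuming the agent's response text is held in a shell variable `$response` and `jq` is available (both assumptions for illustration, not plugin requirements):

```bash
# Take the text between code fences (grabs all fenced regions; fine for a sketch)
block=$(printf '%s\n' "$response" | awk '/^```/ { inside = !inside; next } inside')
if [ -n "$block" ] && printf '%s\n' "$block" | jq empty 2>/dev/null; then
  printf '%s\n' "$block"          # clean JSON: use it normally
else
  # Fallback: wrap the raw output as a single synthetic "suggestion" finding
  jq -n --arg body "$response" '{summary: "unparsed agent output", strengths: [],
    findings: [{severity: "suggestion", synthetic: true, body: $body}]}'
fi
```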
Once all reviews are complete:
Parse agent outputs: Extract the JSON block from each agent's response
(see the extraction strategy in Step 3). Collect the summary, strengths,
and findings arrays from each.
Aggregate across agents:
- Merge the findings arrays into a single list
- Merge the strengths arrays into a single list
- Collect the summary strings

Deduplicate findings: Where multiple agents flag the same section with similar concerns, consider merging — but only when the findings address the same underlying concern from different lens perspectives. When in doubt, keep findings separate.
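If each agent's parsed JSON were saved to its own file, the mechanical part of the aggregation above could look like this (file names are illustrative; `jq` is assumed):

```bash
# Flatten findings and strengths across all lens outputs; collect the summaries
jq -s '{findings: map(.findings[]), strengths: map(.strengths[]), summaries: map(.summary)}' \
  lens-*.json > aggregated.json
```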
When merging: keep the highest severity among the duplicates and note every lens that flagged the concern.
Prioritise findings: order them by severity — critical, then major, then minor, then suggestions — mirroring the summary format below.
Determine suggested verdict: If the review configuration provides verdict overrides above, apply those thresholds instead of the defaults below:
- If work_item_revise_severity is none, skip the severity-based REVISE rule (the major count rule still applies independently)
- "critical" findings exist → suggest REVISE
- "major" findings exist → suggest REVISE
- Only "minor" findings or suggestions → suggest COMMENT
- No findings → suggest APPROVE

Verdict meanings:
- APPROVE — work item is ready for implementation
- REVISE — work item needs changes before implementation
- COMMENT — observations only, work item is acceptable as-is

When presenting a COMMENT verdict with major findings, note: "Work item is acceptable but could be improved — see major findings below."
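Expressed against the aggregated file from the sketch above, the default thresholds come out roughly as follows (a sketch, not normative, and before any config override):

```bash
count() { jq "[.findings[] | select(.severity == \"$1\")] | length" aggregated.json; }
if [ "$(count critical)" -gt 0 ] || [ "$(count major)" -gt 0 ]; then
  verdict=REVISE     # a finding at or above the default revise threshold
elif [ "$(jq '.findings | length' aggregated.json)" -gt 0 ]; then
  verdict=COMMENT    # only minor findings and suggestions remain
else
  verdict=APPROVE    # no findings at all
fi
```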
Identify cross-cutting themes: Look for findings that appear across multiple lenses — issues flagged by 2+ agents reinforce each other and should be highlighted in the summary.
Compose the review summary:
## Work Item Review: [Work item Title]
**Verdict:** [APPROVE | REVISE | COMMENT]
[Combined assessment: synthesise each agent's summary into 2-3 sentences
covering the overall quality of the work item across all lenses]
### Cross-Cutting Themes
[Issues that multiple lenses identified — these deserve the most attention]
- **[Theme]** (flagged by: [lenses]) — [description]
### Findings
#### Critical
- 🔴 **[Lens]**: [title]
**Location**: [work item section]
[First 1-2 sentences of body as summary]
#### Major
- 🟡 **[Lens]**: [title]
**Location**: [work item section]
[First 1-2 sentences of body as summary]
#### Minor
- 🔵 **[Lens]**: [title]
**Location**: [work item section]
[First 1-2 sentences of body as summary]
#### Suggestions
- 🔵 **[Lens]**: [title]
**Location**: [work item section]
[First 1-2 sentences of body as summary]
### Strengths
- ✅ [Aggregated and deduplicated strengths from all agents]
### Recommended Changes
[Ordered list of specific, actionable changes to the work item, prioritised by
impact. Each should reference the finding(s) it addresses.]
1. **[Change description]** (addresses: [finding titles])
[Specific guidance on what to modify in the work item]
---
*Review generated by /review-work-item*
Write the review artifact to {work_reviews_dir}/:
Derive the review filename using the work item stem and the next available
review number. The work item stem is the basename of the work item path without
the .md extension. For example, if the work item is
{work_dir}/0042-improve-search.md and no prior reviews exist,
the review filename is
{work_reviews_dir}/0042-improve-search-review-1.md.
To determine the next review number:
mkdir -p {work_reviews_dir}
# Glob for existing reviews of this work item
last=$(ls {work_reviews_dir}/{work-item-stem}-review-*.md 2>/dev/null \
  | sed -E 's/.*-review-([0-9]+)\.md$/\1/' | sort -n | tail -1)
# Extract the highest number, increment by 1. If none exist, use 1.
next=$(( ${last:-0} + 1 ))
Extract the work item's stable 4-digit identifier from its filename using
${CLAUDE_PLUGIN_ROOT}/skills/work/scripts/work-item-read-field.sh {path} number
(or parse the 4-digit prefix from the filename directly).
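The direct-parse fallback can be as simple as (path illustrative):

```bash
# Fallback: take the leading 4-digit prefix from the filename
basename "{work_dir}/0042-improve-search.md" | grep -oE '^[0-9]{4}'   # → 0042
```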
Write the review document with YAML frontmatter followed by the review summary composed in Step 4.7. Include the per-lens results as a final section:
---
date: "{ISO timestamp}"
type: work-item-review
skill: review-work-item
target: "{work_dir}/{work-item-stem}.md"
work_item_id: "{4-digit number, e.g. 0042}"
review_number: {N}
verdict: {APPROVE | REVISE | COMMENT}
lenses: [{list of lenses used}]
review_pass: 1
status: complete
---
{The full review summary from Step 4.7}
## Per-Lens Results
### {Lens 1 Name}
**Summary**: {agent summary}
**Strengths**:
{agent strengths}
**Findings**:
{agent findings — each with severity, confidence, location, and body}
### {Lens 2 Name}
...
The work_item_id field stores the work item's stable 4-digit identifier,
providing resilience against work item renames. target remains as the path
used at review time.
Present the composed review summary from Step 4.7 to the user.
After presenting, offer the user control before proceeding to iteration:
The review is complete. Verdict: [verdict]
Would you like to:
1. Proceed to address findings? (I'll help edit the work item)
2. Change the verdict? (currently: [verdict])
3. Discuss any specific findings in more detail?
4. Re-run specific lenses with adjusted focus?
After presenting the review:
Discuss findings with the user:
Edit the work item based on agreed changes:
- Do not change the status field — that is a separate workflow decision

Summarise changes made:
I've made the following changes to the work item:
- [Change 1] — addressing [finding]
- [Change 2] — addressing [finding]
- [Skipped] — [finding discussed and decided not to address, with reason]
After edits are complete:
The work item has been updated. Would you like me to run another review pass to
verify the changes address the findings? This will re-run the relevant lenses
to check for any remaining issues.
If the user accepts, re-run the relevant lenses and present:
## Re-Review: [Work item Title]
**Verdict:** [APPROVE | REVISE | COMMENT]
### Previously Identified Issues
- [emoji] **[Lens]**: [title] — Resolved / Partially resolved / Still present
### New Issues Introduced
- [emoji] **[Lens]**: [title] — [brief description]
### Assessment
[Whether the work item is now ready for implementation or needs further iteration]
After composing the re-review summary, update the review artifact as a single write operation:
- Append the re-review section to {work_reviews_dir}/{work-item-stem}-review-{N}.md — do not create a new -review-{N+1}.md file instead of appending in place
- Update the frontmatter fields verdict, review_pass, and date — preserving all other fields and body content verbatim (a substitution sketch follows the template below)
- The document reads chronologically: initial review, per-lens results, then re-review sections in order. The frontmatter always reflects the latest verdict and pass count:
## Re-Review (Pass {N}) — {date}
**Verdict:** {verdict}
### Previously Identified Issues
- {emoji} **{Lens}**: {title} — {Resolved | Partially resolved | Still present}
### New Issues Introduced
- {emoji} **{Lens}**: {title} — {brief description}
### Assessment
{Whether the work item is now ready for implementation or needs further iteration}
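One way to refresh only those frontmatter fields while leaving the rest of the file verbatim is a targeted substitution (a sketch assuming GNU sed, single-line YAML keys, and shell variables holding the new values):

```bash
# Refresh only verdict, review_pass, and date; everything else stays verbatim
sed -i -E \
  -e "s/^verdict: .*/verdict: ${verdict}/" \
  -e "s/^review_pass: .*/review_pass: ${pass}/" \
  -e "s/^date: .*/date: \"${timestamp}\"/" \
  "{work_reviews_dir}/{work-item-stem}-review-{N}.md"
```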
If the user declines or the re-review shows all clear, the review is complete.
- Read the work item fully before doing anything else
- Spawn agents in parallel — the work item lenses are independent and should run concurrently for efficiency
- Synthesise, don't concatenate — your value is in compiling a balanced view across lenses, identifying themes, and prioritising actionable recommendations
- Do not modify the work item's status field — a REVISE verdict does not automatically change the work item's status; that transition belongs to a separate workflow decision by the team
- Do not run codebase exploration agents — the reviewer agents stay inside the work item and any documents it explicitly references; source code is out of scope for work item review
- Be balanced — highlight strengths alongside concerns
- Prioritise by impact — structural issues that would block implementation matter more than surface-level polish
- Handle malformed agent output gracefully — use the suggestion severity fallback (not major) so a single flaky agent does not force a REVISE verdict
- Use emoji severity prefixes consistently — 🔴 critical, 🟡 major, 🔵 minor/suggestion, ✅ strengths. IMPORTANT: Use the actual Unicode emoji characters (🔴 🟡 🔵 ✅), NOT text shortcodes.
- Write the review artifact to {work_reviews_dir}/ so the review is visible to the team
- Do not modify the status field during review

Work item review sits in the work item lifecycle between authoring and implementation:
- /create-work-item or /extract-work-items — Author or capture the work item
- /list-work-items — Discover work items available for review
- /review-work-item — Review and iterate work item quality (this command)
- /update-work-item — Apply status transitions after review decisions
- /create-plan — Create an implementation plan from an approved work item

!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-instructions.sh review-work-item