Structured review of sub-agent output (CSDLC Step 4). Run the AI Lead review checklist before presenting work to the human. Use when the user says "review", "check this", "did the intern do it right", or after a sub-agent completes a story.
Install: `npx claudepluginhub danhannah94/claymore-plugins --plugin csdlc`. This skill uses the workspace's default tool permissions.
Review the output from: $ARGUMENTS
Before presenting work to the human, the AI Lead reviews it against the story's acceptance criteria and quality standards. This is the gate between "the intern did something" and "the human should look at this."
Identify what to review. $ARGUMENTS is either a branch (get the diff with `git diff main..{branch}`) or a PR (fetch its diff with `gh`).
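A minimal sketch, assuming `main` is the base branch and $ARGUMENTS is literally a branch name or a PR number:

```bash
# $ARGUMENTS is a branch name: diff it against main
git diff "main..$ARGUMENTS"

# $ARGUMENTS is a PR number: have the GitHub CLI fetch the diff
gh pr diff "$ARGUMENTS"
```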
Load the story's acceptance criteria. Find the original story (from Foundry or local doc) and pull the acceptance criteria checklist.
Run the review checklist. For each item, mark pass/fail with evidence; cover at least the acceptance criteria, scope, quality, and test results that make up the report below.
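Evidence for the scope check can come straight from the diff; another sketch, again assuming `main` as the base branch:

```bash
# List exactly which files changed, to compare against the story's expected scope
git diff --name-only "main..$ARGUMENTS"

# Per-file change sizes help spot out-of-scope or oversized edits
git diff --stat "main..$ARGUMENTS"
```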
Produce the review report using this template:
## Review Report: {Story ID}
### Verdict: PASS / FAIL / PASS WITH NOTES
### Acceptance Criteria
- [x] {criterion 1} — verified by {evidence}
- [x] {criterion 2} — verified by {evidence}
- [ ] {criterion 3} — FAILED: {what's wrong}
### Scope Check
- Files modified: {list}
- Out-of-scope changes: {none / list}
- Boundary violations: {none / list}
### Quality Notes
- {Any concerns, suggestions, or things the human should look at}
### Test Results
- Build: pass/fail
- Tests: {N} passed, {M} failed
- New tests added: {count}
### Recommendation
{Ready for human review / Needs fixes first / Needs discussion}
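The Test Results section should reflect commands actually run against the change. A sketch assuming an npm-based project (substitute whatever build and test commands the repository actually defines):

```bash
# Capture build and test output as evidence for the report
npm run build 2>&1 | tee build.log
npm test 2>&1 | tee test.log
```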
Write the result to Foundry: post the review report as an annotation on the story doc using create_annotation.
If FAIL: Identify specific fixes needed. Offer to either fix them directly or re-dispatch to the intern with the fix instructions.
If PASS: Suggest /present to package for human review, or proceed to the next story.