Structured critique of research, plan, and brainstorm documents for completeness, gaps, and quality.
From the desplega-ai/ai-toolbox marketplace (plugin: desplega). This skill uses the workspace's default tool permissions.
You are performing a structured critique of a document (research, plan, or brainstorm) to identify gaps, weaknesses, and quality issues.
These instructions establish a working agreement between you and the user. The key principles are:
- **AskUserQuestion is your primary communication tool** - Whenever you need to ask the user anything (clarifications, preferences, decisions), use the AskUserQuestion tool. Don't output questions as plain text - always use the structured tool so the user can respond efficiently.
- **Establish preferences upfront** - Ask about user preferences at the start of the workflow, not at the end when the user may want to move on.
- **Autonomy mode guides interaction level** - The user's chosen autonomy level determines how often you check in, but AskUserQuestion remains the mechanism for all questions.
Before starting review (unless autonomy is Autopilot), establish these preferences:
**Output Mode** - Use AskUserQuestion with:
| Question | Options |
|---|---|
| "How should I present the review findings?" | 1. Append errata section to the document (Recommended), 2. Auto-apply fixes to the document, 3. Write a separate review file to thoughts/*/reviews/ |
**File Review Preference** - Check if the file-review plugin is available (look for `file-review:file-review` in available commands).
If file-review plugin is installed, use AskUserQuestion with:
| Question | Options |
|---|---|
| "Would you like to use file-review for inline feedback after the automated review?" | 1. Yes, open file-review after review (Recommended), 2. No, the automated review is sufficient |
Store these preferences and act on them during the review process.
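The stored preferences can be modeled as a small record. This is only an illustrative sketch; the field and value names are assumptions, not part of the skill's specification:

```python
from dataclasses import dataclass

@dataclass
class ReviewPreferences:
    # One of: "append_errata", "auto_apply", "separate_file" (names illustrative)
    output_mode: str = "append_errata"
    # Only offered when the file-review plugin is installed
    use_file_review: bool = False

# Example: the user chose a separate review file plus inline file-review
prefs = ReviewPreferences(output_mode="separate_file", use_file_review=True)
```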
This skill activates when:
- `/review` command

At the start of the review, adapt your interaction level based on the autonomy mode:
| Mode | Behavior |
|---|---|
| Autopilot | Run full review, auto-fix minor issues, present summary at end |
| Critical (Default) | Ask about Critical/Important findings, auto-fix Minor ones |
| Verbose | Walk through each finding, confirm before any changes |
The autonomy mode is passed by the invoking command. If not specified, default to Critical.
Read the input document fully. Determine the document type from its path and content structure:
| Path contains | Type |
|---|---|
/research/ | Research document |
/plans/ | Plan |
/brainstorms/ | Brainstorm |
/qa/ | QA report |
If the type is ambiguous, infer from content structure or use AskUserQuestion to clarify.
Verify required sections exist based on document type:
Research documents:
Plans:
Brainstorms:
QA reports:
Apply type-specific quality criteria:
Research documents:
Plans:
Brainstorms:
QA reports:
Look for what's missing or assumed:
Categorize all findings into three severity levels:
| Severity | Meaning | Action |
|---|---|---|
| Critical | Blocks correctness or completeness — must be addressed | Discuss with user |
| Important | Significant gap or weakness — should be addressed | Discuss with user (or auto-fix in Autopilot) |
| Minor | Formatting, typos, small inconsistencies | Auto-fix unless Verbose mode |
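The severity routing can be sketched as a single decision function. The behaviors come from the severity and autonomy tables above; the function name and return labels are illustrative:

```python
def route_finding(severity: str, autonomy: str = "Critical") -> str:
    """Decide what to do with one finding, per the severity and autonomy tables."""
    if severity == "Minor":
        # Minor issues are auto-fixed unless the user wants to confirm everything
        return "confirm" if autonomy == "Verbose" else "auto-fix"
    if autonomy == "Autopilot":
        # Important findings may be auto-fixed; Critical ones go to the summary
        return "auto-fix" if severity == "Important" else "errata"
    # Critical and Important findings are raised with the user
    return "discuss"
```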
Present a summary as text output with findings grouped by severity.
Based on output mode preference:
If "Append errata":
Append a `## Review Errata` section at the end of the document with:
## Review Errata
_Reviewed: YYYY-MM-DD by [reviewer]_
### Critical
- [ ] [Finding description and recommended action]
### Important
- [ ] [Finding description and recommended action]
### Resolved
- [x] [Minor issue] — auto-fixed
If "Auto-apply": first use AskUserQuestion with:
| Question | Options |
|---|---|
| "There are [N] Critical findings. Would you like me to auto-apply fixes for those too?" | 1. Yes, apply Critical fixes too, 2. No, leave Critical items as errata for me to address |
Then append a `## Review Errata` section summarizing all changes:
## Review Errata
_Reviewed: YYYY-MM-DD by [reviewer]_
### Applied
- [x] [Finding description] — auto-applied
- [x] [Finding description] — auto-applied
### Remaining (if any Critical items were not auto-applied)
- [ ] [Critical finding description and recommended action]
If "Separate file":
Write the review to `thoughts/*/reviews/YYYY-MM-DD-review-of-<original-slug>.md`.

After the review is complete, determine the document type (from frontmatter, file path, or content) and propose the appropriate next step.
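The separate-file naming convention can be sketched as below. The slug helper is an illustrative assumption; only the `YYYY-MM-DD-review-of-<original-slug>.md` shape comes from this skill:

```python
import datetime
import re

def review_filename(original_path: str, today: datetime.date) -> str:
    """Build the YYYY-MM-DD-review-of-<original-slug>.md filename."""
    # Take the file's base name without its .md extension
    stem = original_path.rsplit("/", 1)[-1].removesuffix(".md")
    # Collapse anything that is not alphanumeric into single hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", stem.lower()).strip("-")
    return f"{today:%Y-%m-%d}-review-of-{slug}.md"
```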
Use AskUserQuestion with context-dependent options:
If reviewing a brainstorm document:
| Question | Options |
|---|---|
| "Review complete. What's next for this brainstorm?" | 1. Start research (→ /research), 2. Create a plan directly (→ /create-plan), 3. Done for now |
If reviewing a research document:
| Question | Options |
|---|---|
| "Review complete. What's next for this research?" | 1. Create a plan (→ /create-plan), 2. Done for now |
If reviewing a plan document:
| Question | Options |
|---|---|
| "Review complete. What's next for this plan?" | 1. Start implementation (→ /implement-plan), 2. Done for now |
If reviewing a post-implementation verification:
| Question | Options |
|---|---|
| "Review complete. What's next?" | 1. Done — mark as complete, 2. Address remaining items |
If reviewing a QA report:
| Question | Options |
|---|---|
| "Review complete. What's next for this QA report?" | 1. Run post-QA verification (→ /verify-plan), 2. Address issues found, 3. Done |
If document type is unclear, ask a generic question:
| Question | Options |
|---|---|
| "Review complete. Would you like to proceed to the next workflow step?" | 1. Yes, suggest next step, 2. Done for now |
OPTIONAL SUB-SKILL: If significant insights, patterns, gotchas, or decisions emerged during this workflow, consider using desplega:learning to capture them via /learning capture. Focus on learnings that would help someone else in a future session.
CRITICAL: The reviewer identifies issues — the reviewer does NOT rewrite the document. Present findings and let the original author address them. Exceptions:
If the file-review plugin is available and the user selected "Yes" during User Preferences setup:
- Run `/file-review:file-review <path>` for inline human comments
- Then use the `file-review:process-review` skill