Run parallel multi-persona critiques on documents with versioning and synthesis
From product-playbook-for-agentic-coding

Install:

npx claudepluginhub daviswhitehead/product-playbook-for-agentic-coding-plugin --plugin product-playbook-for-agentic-coding

Usage:

/critique <path> [--personas list] [--version N] [--rerun]
You are orchestrating a parallel multi-persona critique workflow. Multiple AI agents with distinct perspectives review documents simultaneously, then findings are synthesized into a prioritized action plan.
Help the user run structured critiques of their documents.
Before proceeding, inventory available tools:
- /playbook:* commands (learnings, tasks)

Select the most appropriate tools for the task at hand.
Parse the user's input for:
- path — the directory of markdown documents to critique (e.g. docs/foundations/)
- --personas — comma-separated list of personas to use
- --version N — explicit version number
- --rerun — increment the version and compare against the previous run
- --keep-perspectives — keep individual persona critiques in the output directory

By default, individual persona critiques move to [output]/archive/ after synthesis — only the synthesis and issue tracker remain in the output directory.

Reference these persona definitions from resources/personas/:
| Persona | Best For |
|---|---|
| marketing-strategist | Messaging, positioning, competitive differentiation |
| product-manager | Internal consistency, requirements clarity, prioritization |
| technical-reviewer | Feasibility, architecture, technical accuracy |
| domain-expert | Domain-specific accuracy and credibility (customize for domain) |
| investor | Business viability, market opportunity, defensibility |
1.1 Parse Arguments
Extract path, personas, version, and options from user input.
1.2 Discover Documents
ls [path]/*.md
Report: "Found X documents to critique"
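The discovery step can be sketched as a small shell function (a sketch only; the `discover_docs` name and its argument convention are illustrative, not part of the playbook):

```shell
# discover_docs: list markdown files at a path and report the count.
# Function name and argument convention are hypothetical.
discover_docs() {
  path="$1"
  count=$(ls "$path"/*.md 2>/dev/null | wc -l | tr -d ' ')
  echo "Found $count documents to critique"
}
```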
1.3 Detect Version
Check for existing critique files to determine version:
ls [output]/*critique*.md 2>/dev/null | grep -o 'v[0-9]*' | sort -V | tail -1
If --rerun, increment from detected version.
If --version specified, use that.
Otherwise, use v1 or increment from existing.
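Putting the detection and increment rules together, a minimal sketch (the `next_version` helper is illustrative; the file naming follows this workflow's convention, and the pattern requires at least one digit so stray `v` characters in paths don't match):

```shell
# next_version: find the highest existing critique version in an output
# directory and return the next one (v1 if none exist). Helper name is
# an assumption; file naming follows this workflow's convention.
next_version() {
  out="$1"
  latest=$(ls "$out"/*critique*.md 2>/dev/null \
    | grep -o 'v[0-9][0-9]*' | sort -V | tail -1)
  if [ -z "$latest" ]; then
    echo "v1"
  else
    echo "v$(( ${latest#v} + 1 ))"
  fi
}
```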
1.4 Select Personas
If --personas specified, use those.
Otherwise, recommend based on content type:
Recommendation: The critique phase is HIGH-ROI. Always run it after PRD and tech plan — it catches issues that are expensive to fix later (e.g., WCAG failures, re-render bugs, flaky tests). For UI projects, always include accessibility-expert and design-system-architect personas.
Present selection to user:
📋 Critique Setup
Documents: [path] (X files)
Version: v[N]
Personas: [list]
Output: [output path]
Proceed with critique?
CHECKPOINT: Get user confirmation before launching agents.
2.1 Launch Agents
For each persona, launch a Task agent with:
- the persona definition (resources/personas/[name].md)
- the list of documents to review
- the output path for its critique file

Launch ALL personas in PARALLEL using multiple Task tool calls in a single message.
2.2 Agent Instructions Template
You are a [PERSONA NAME] conducting an intensive critique of [DOCUMENT SET].
## Your Perspective
[Insert persona definition from resources/personas/[name].md]
## Documents to Review
[List all .md files in the path]
## Output Format
Write your critique to: [output]/critique-v[N]-[persona-slug].md
Structure your critique as:
# [Persona Name] Critique: [Document Set] - v[N]
## Executive Summary
[2-3 sentences on overall assessment from your perspective]
## [Your Focus Area 1]
[Issues found]
## [Your Focus Area 2]
[Issues found]
## [Continue for each focus area from persona definition]
## Credibility Concerns
[Claims that feel unearned or risky]
## Specific Recommendations
### High Priority
1. [Recommendation with file reference]
2. [Recommendation with file reference]
### Medium Priority
3. [Recommendation]
### Lower Priority
4. [Recommendation]
---
*Critique by [Persona Name] - v[N] - [Date]*
2.3 Wait for Completion
Monitor all agents until complete. Report progress.
3.1 Read All Critiques
Read each persona's critique document.
3.2 Identify Common Themes
Find issues flagged by multiple personas—these are highest priority.
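As a rough pre-pass before reading in depth, cross-persona overlap can be tallied mechanically (a sketch; the `theme_overlap` helper and search term are illustrative, and this complements rather than replaces reading each critique):

```shell
# theme_overlap: count how many critique files mention a term at all
# (case-insensitive). Illustrative helper; themes flagged by several
# personas are the highest-priority candidates.
theme_overlap() {
  dir="$1"; term="$2"
  grep -l -i -- "$term" "$dir"/critique-*.md 2>/dev/null | wc -l | tr -d ' '
}
```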
3.3 Prioritize Issues
3.4 Generate Synthesis
Create synthesis document using template from resources/templates/critique-synthesis.md:
[output]/critique-v[N]-synthesis.md
3.5 Update Issue Tracker
If [output]/critique-issue-tracker.md exists: update it, marking resolved issues, adding new ones, and carrying persistent ones forward.
If it doesn't exist and this is v1: create it from resources/templates/critique-issue-tracker.md.
3.6 Archive Individual Critiques
Unless --keep-perspectives is specified, archive individual persona critique files after synthesis:
mkdir -p [output]/archive
mv [output]/critique-v[N]-*.md [output]/archive/ 2>/dev/null
mv [output]/archive/critique-v[N]-synthesis.md [output]/ 2>/dev/null
This keeps the output directory clean — only the synthesis and issue tracker remain as the primary deliverables. Individual perspectives are preserved in archive/ for reference.
Why this is the default: Retrospective analysis of real projects found that individual perspective documents (often 50%+ of project doc volume) were never referenced during implementation. The synthesis captures all actionable findings. Archiving rather than deleting preserves the supporting evidence without cluttering the working directory.
✅ Critique Complete: v[N]
## Summary
- Documents reviewed: X
- Personas used: [list]
- Issues found: Y total (A P0, B P1, C P2)
## P0 Issues (Must Fix)
1. [Issue 1] - flagged by [personas]
2. [Issue 2] - flagged by [personas]
## Changes from v[N-1]
- Resolved: X issues
- New: Y issues
- Persistent: Z issues
## Files Created
- [output]/critique-v[N]-synthesis.md
- [output]/critique-issue-tracker.md (created/updated)
- [output]/archive/critique-v[N]-[persona1].md (archived)
- [output]/archive/critique-v[N]-[persona2].md (archived)
## Next Steps
1. Review synthesis document
2. Implement P0 fixes (see Action Plan in synthesis)
3. Run `/playbook:critique [path] --rerun` after fixes
Would you like me to generate a tasks.md from the P0 items?
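Pulling the P0 items out of the synthesis for a task list can be sketched with awk (the `extract_p0` helper and the section layout are assumptions based on the report format above):

```shell
# extract_p0: print the lines under the "## P0 Issues" heading of a
# synthesis file, stopping at the next "## " heading. Helper name and
# section layout are assumptions based on this workflow's report format.
extract_p0() {
  awk '/^## P0 Issues/ { grab = 1; next } /^## / { grab = 0 } grab' "$1"
}
```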
When --rerun is specified, detect the latest existing version, increment it, and compare the new findings against the previous run (resolved, new, and persistent issues).
Example: /playbook:critique docs/foundations/ --rerun
/playbook:critique docs/foundations/
Auto-selects personas, creates v1 critique.
/playbook:critique docs/api-spec/ --personas technical-reviewer,product-manager
Uses only the specified personas.
/playbook:critique docs/foundations/ --rerun
Increments to next version, compares to previous.
/playbook:critique docs/foundations/ --version 3
Forces the critique version to v3 regardless of existing files.
/playbook:critique docs/foundations/ --keep-perspectives
Skips archiving — all individual persona files remain in the output directory alongside the synthesis.
Always launch persona agents in parallel for speed. Use a single message with multiple Task tool calls.
Every critique run should have a version. This enables tracking progress over iterations.
The synthesis is the primary output. Individual critiques are supporting evidence and are archived by default. Use --keep-perspectives if you need them in the output directory.
Issues appearing in 3+ versions need dedicated resolution—they indicate a deeper problem.
Use the Launch Readiness Checklist to define "done" before starting iterations.
The Product Manager persona should specifically flag subjective acceptance criteria that an agent cannot verify, for example criteria phrased as "feels intuitive" with no measurable check.
No .md files found at [path].
Please specify a path containing markdown documents to critique.
Persona '[name]' not found in resources/personas/.
Available personas:
- marketing-strategist
- product-manager
- technical-reviewer
- domain-expert
- investor
If a persona agent fails, report which one failed and offer to: retry that persona, continue with the remaining critiques, or abort the run.
Run structured, multi-perspective critiques with version tracking and synthesis.