By punt-labs
Follow Amazon's Working Backwards process to generate, revise, and compile PR/FAQ documents as professional LaTeX PDFs for product idea evaluation. Run AI-powered review meetings with personas, incorporate feedback, perform peer reviews, streamline content, research evidence, and render go/no-go decisions.
npx claudepluginhub punt-labs/claude-plugins --plugin prfaq
Generate a stage-colored badge and embed it in your README
Export the PR/FAQ as a Word document (.docx) via pandoc — no TeX installation required
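A minimal sketch of the conversion this performs, assuming the draft lives at prfaq.tex with its bibliography in references.bib (both filenames illustrative; the exact flags the plugin passes may differ):

    pandoc prfaq.tex --citeproc --bibliography=references.bib -o prfaq.docx

Pandoc parses the LaTeX source itself, which is why the .docx path works on machines without a TeX distribution.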
Generate an external press release from the PR/FAQ and CHANGELOG for a specific release
Tell us how the prfaq plugin is working for you (anonymous 1-5 rating)
Incorporate feedback into PR/FAQ and redraft affected sections
Import an existing document and launch the full /prfaq workflow with extracted content
Run an autonomous PR/FAQ review meeting where four personas debate and reach consensus without user intervention
Play back a completed meeting summary as a voiced debate between personas
Run a simulated PR/FAQ review meeting with agentic personas
Research evidence for PR/FAQ claims and generate biblatex citations
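Returned citations follow the standard biblatex shape; a placeholder example (every field value here is hypothetical), ready to append to the project's .bib file:

    @online{devsurvey2025,
      title   = {Placeholder: survey of developer product-skills training},
      author  = {{Example Research Group}},
      year    = {2025},
      url     = {https://example.com/developer-survey},
      urldate = {2025-01-01},
    }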
Peer review a PR/FAQ document for quality and decision readiness
Tighten a PR/FAQ by removing redundancy, weasel words, and bloat
Assess whether a PR/FAQ should move forward with a structured go/no-go decision
Interprets directional feedback on a PR/FAQ document, traces cascading effects across all affected sections, and surgically redrafts content while maintaining document integrity. Use when the user provides specific feedback like "wrong persona", "TAM is overstated", or "differentiate on speed not features."
Examples:
Context: User provides feedback after reviewing their PR/FAQ.
User: "/prfaq:feedback the TAM is not focused on persona X, but Y"
Assistant: "I'll use the feedback agent to trace the impact of this persona change across the document." (A persona change cascades through the press release, FAQs, risk assessment, and feature appendix.)
Context: User receives stakeholder feedback.
User: "/prfaq:feedback the competitive positioning is too weak — we differentiate on speed, not features"
Assistant: "I'll trace how that positioning change affects the press release and FAQs." (A positioning change affects the lede, external FAQ Q2, the competitive landscape FAQ, and value risk.)
Dana — Builder-Visionary persona for /prfaq:meeting. Evaluates ambition risk and the cost of not building. Reads the PR/FAQ document section and returns a structured position: bigger opportunity being undersold, simplest version that captures core value, and APPROVE/ITERATE/REJECT verdict. Loads pr-structure.md and four-risks.md reference guides.
Examples:
Context: The meeting skill is debating a feature appendix where most features are in Won't Do.
Assistant: "Launching Dana to challenge whether the scope is too conservative." (Dana pushes back on risk aversion and looks for the elegant simplification.)
Context: The meeting skill is evaluating a competitive landscape FAQ that emphasizes risks.
Assistant: "Launching Dana to identify the bigger opportunity the competitive analysis is underselling." (Dana sees competitive gaps as opportunities, not just threats.)
Priya — Target Customer persona for /prfaq:meeting. Evaluates value risk through the lens of customer reality. Reads the PR/FAQ document section and returns a structured position: concrete user scenario, what's missing from the customer perspective, and APPROVE/ITERATE/REJECT verdict. Loads ux-bar-raiser.md and common-mistakes.md reference guides.
Examples:
Context: The meeting skill is debating a Customer Evidence FAQ that cites industry reports but no interviews.
Assistant: "Launching Priya to evaluate whether the customer evidence resonates with real user behavior." (Priya grounds the discussion in what a real customer would actually do.)
Context: The meeting skill is evaluating a problem statement about developer productivity.
Assistant: "Launching Priya to react to the problem statement as the target customer." (Priya collapses abstractions into concrete daily experience.)
Wei — Principal Engineer persona for /prfaq:meeting. Evaluates feasibility risk and technical honesty. Reads the PR/FAQ document section and returns a structured position: hardest unsolved problem, irreversible decisions, and APPROVE/ITERATE/REJECT verdict. Loads principal-engineer.md and four-risks.md reference guides.
Examples:
Context: The meeting skill is debating a TAM calculation that assumes viral distribution.
Assistant: "Launching Wei to evaluate the feasibility claims in the TAM section." (Wei focuses on whether the viral coefficient claim is technically grounded.)
Context: The meeting skill is evaluating a Getting Started section with a 3-step onboarding.
Assistant: "Launching Wei to assess whether the claimed onboarding simplicity is technically achievable." (Wei checks if the onboarding hides infrastructure complexity from the user.)
Alex — Skeptical Executive persona for /prfaq:meeting. Evaluates value risk and strategic fit through a devil's advocate lens. Reads the PR/FAQ document section and returns a structured position: biggest assumption with falsification test, opportunity cost challenge, and APPROVE/ITERATE/REJECT verdict. Loads decision-quality.md and common-mistakes.md reference guides.
Examples:
Context: The meeting skill is debating a TAM calculation claiming 500K potential users.
Assistant: "Launching Alex to challenge the TAM assumptions and opportunity cost." (Alex asks "500K who could or who would?" and compares to other uses of the team's time.)
Context: The meeting skill is evaluating a risk assessment where all risks are rated Low.
Assistant: "Launching Alex to challenge the uniformly optimistic risk ratings." (Alex treats uniform optimism as a red flag for suppressed dissent.)
Critical peer reviewer for PR/FAQ documents. Reviews .tex drafts against Working Backwards principles, the Kahneman decision quality framework, and project reference guides. Flags unsupported claims, cognitive biases, ambiguous language, risk rating inconsistencies, and citation gaps. Use proactively after drafting a PR/FAQ (Phase 3b) or when the user asks to review a PR/FAQ document. Also invoked by the /prfaq review command.
Examples:
Context: The main prfaq skill has finished drafting the FAQ and risk assessment.
Assistant: "The draft is complete. Let me invoke the peer reviewer to evaluate it before compilation." (Auto-invoked after Phase 3b in the skill workflow.)
Context: User has manually edited their prfaq.tex and wants feedback.
User: "Can you review my PR/FAQ?"
Assistant: "I'll use the peer-reviewer agent to evaluate your document." (Standalone invocation via natural language or /prfaq review command.)
Research librarian for PR/FAQ documents. Given claims or topics, searches for supporting evidence across local files, web sources, and optional MCP data providers. Returns structured biblatex citations ready to append to a .bib file. Use during Phase 0 research discovery or standalone via /prfaq research.
Examples:
Context: The main prfaq skill is starting Phase 0 and found research files.
Assistant: "Let me invoke the researcher agent to find evidence for the key claims." (Auto-invoked during Phase 0 of the skill workflow.)
Context: User wants to find evidence for a specific claim in their PR/FAQ.
User: "Find me evidence that developers lack product training"
Assistant: "I'll use the researcher agent to search for supporting data." (Standalone invocation via natural language or /prfaq research command.)
Scalpel editor for PR/FAQ documents. Removes redundancy across sections, eliminates weasel words and hollow adjectives, applies the "so what" test to every sentence, and compresses inflated phrases. Reduces document length by 10–20% while increasing focus. Does not touch evidence, citations, customer quotes, risk assessments, numbers, or structural elements.
Examples:
Context: User has finished iterating on their PR/FAQ and wants to tighten it.
User: "/prfaq:streamline"
Assistant: "I'll use the streamliner agent to tighten the document." (Invoked after iteration is complete, before final review or sharing.)
Context: User wants to cut a specific section that feels bloated.
User: "/prfaq:streamline the TAM section is too wordy"
Assistant: "I'll focus the streamliner on the TAM FAQ." (Can target specific sections when the user identifies bloat.)
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and a hooks skeleton.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Comprehensive .NET development skills for modern C#, ASP.NET, MAUI, Blazor, Aspire, EF Core, Native AOT, testing, security, performance optimization, CI/CD, and cloud-native applications
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification