This skill should be used when the user asks to "write a PR/FAQ", "prfaq", "working backwards", "product discovery", "evaluate a product idea", "press release FAQ", "test product value", "revise prfaq", "update prfaq", "add research to prfaq", "add FAQs", "run a meeting", "review meeting", "hive meeting", "autonomous meeting", "consensus meeting", "stress test my prfaq", "go/no-go decision", "should we build this", "vote on prfaq", or wants to use the Amazon Working Backwards process to evaluate whether a product or feature is worth building.
Generates professional PR/FAQ documents using the Amazon Working Backwards process to evaluate product ideas and make go/no-go decisions.
npx claudepluginhub punt-labs/prfaq

This skill is limited to using the following tools:

- references/common-mistakes.md
- references/decision-quality.md
- references/faq-structure.md
- references/four-risks.md
- references/meeting-guide.md
- references/pr-structure.md
- references/precise-writing.md
- references/principal-engineer.md
- references/unit-economics.md
- references/ux-bar-raiser.md

Guide the user through the Amazon Working Backwards process to produce a professional PR/FAQ document. The output is a LaTeX file that compiles to a polished PDF suitable for executive review and product decision-making. The process forces clarity about customer value, surfaces risks early, and creates a shared artifact for go/no-go decisions.
Before starting the full workflow, check if a prfaq.tex file already exists in the project root (or the path the user specifies). If it does, enter revise mode instead of starting from scratch.
Read the existing document. Parse the .tex file to understand what's already written — the press release, FAQs, and risk assessment.
Ask what to revise. Present the user with the sections found and ask what they want to improve. Common revision goals include tightening the press release, adding FAQs, updating the risk assessment, or folding in new research.
Edit surgically. Only modify the sections the user wants changed. Read the existing .tex file, make targeted edits, and preserve everything else. Do not regenerate the entire document.
Recompile and review. After edits, run the compile script and present the changes for review. If the compile script reports overfull hbox warnings, fix them before presenting (see Phase 4 for common fixes). Offer to re-run the peer reviewer (Phase 3c) on the revised document. Offer further iteration.
If the user explicitly asks to start a new PR/FAQ (not revise), proceed to the full workflow below even if a document exists.
When working with an existing document (revise mode), read the .tex file and extract:
- \prfaqstage{value}. Valid stages: hypothesis, validated, growth. If absent, assume hypothesis.
- \prfaqversion{major}{minor}. If absent, add \prfaqversion{1}{0} to the preamble.

Stage affects evidence expectations throughout the workflow:
- [CITATION NEEDED] markers are acknowledged gaps, not failures.

The press release customer quote is fictional at all stages — it is an aspirational portrait of the customer experience, not a real testimonial. At hypothesis stage, it needs only to be plausible. At validated stage, it should be informed by interview insights. At growth stage, it should be informed by real usage patterns. The evidence for customer demand lives in the FAQ section, never in the press release quote. See pr-structure.md for details.
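Assuming the template defines both metadata commands, the preamble of a fresh hypothesis-stage document would carry something like:

```latex
% Metadata read back in revise mode
\prfaqstage{hypothesis}   % hypothesis | validated | growth
\prfaqversion{1}{0}       % initial generation is always v1.0
```

Revise mode bumps the version and may promote the stage; both values live only in the preamble.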
Before asking the user any questions, look for primary research that should ground the entire document.
Scan for research materials. Use Glob to check for ./research/**/* in the project root. If the directory exists, list the files found (PDFs, markdown, text, etc.).
If research files are found:
- Read each file (.md, .txt, .pdf; skip binary files that aren't PDFs).
- Summarize for the user: "I found research materials in ./research/. Here's what they contain..." Ask which are relevant to this PR/FAQ.

If no research directory exists:
- Tell the user they can add research files to ./research/ or paste key findings directly.

Invoke the researcher agent. Use the Task tool with subagent_type: "prfaq:researcher". Pass the user's product description and any specific claims or topics to investigate. The researcher autonomously discovers ./research/ files (including its own prior results), searches the web for claims not already cached, and queries any available MCP data providers (quarry-mcp, financial data servers, etc.). Results are persisted to ./research/ for future runs.
The researcher returns three sections:
After receiving the results:

- Write a .bib file (same directory and basename as the .tex file — e.g., prfaq.tex → prfaq.bib) with the bibliography entries the researcher returned.
- Offer /prfaq research to verify specific claims or find additional evidence.
- Carry forward. Compile all discovered research into a working context that you reference throughout Phases 1–5, threading research evidence into the discovery questions, press release claims, and FAQ evidence.
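A minimal sketch of one entry the researcher might emit into prfaq.bib (the key, fields, and URL are illustrative, not a real source; \url assumes the template's bibliography setup loads a url-capable package):

```bibtex
@misc{smith2024devtools,
  author       = {Smith, Jane},
  title        = {State of Developer Tools Survey},
  year         = {2024},
  howpublished = {\url{https://example.com/survey}},
  note         = {Accessed 2025-01-15}
}
```

The key (smith2024devtools) is what \cite{key} references from the press release and FAQs.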
Before writing anything, gather the inputs that make a PR/FAQ credible. If Phase 0 found research, use it to sharpen these questions — ask about specific customer segments, pain points, or competitors surfaced in the research rather than asking generic questions.
Ask the user these questions (adapt based on what they've already shared and what research revealed):
Stage — What stage is this product at? Ask via AskUserQuestion with three options: hypothesis, validated, or growth.
Set \prfaqstage{value} in the .tex preamble based on the answer (always lowercase: hypothesis, validated, or growth). This calibrates evidence expectations for peer review and meetings.
Customer — Who is the specific target customer? What is their role, context, and daily reality?
Problem — What problem does this customer have today? How do they currently cope? What makes existing solutions inadequate?
Solution — What is the product or feature? How does it work at a high level?
Differentiation — Why is this better than what exists? What is the unique insight or approach?
Market — How large is the opportunity? What evidence exists for demand?
Risks — What could go wrong? What assumptions are untested?
Do not proceed until you have clear answers for at least stage, customer, problem, and solution. The other inputs can be developed during drafting.
Read the LaTeX template from ${CLAUDE_PLUGIN_ROOT}/assets/prfaq-template.tex. Read the PR section guide from ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/pr-structure.md.
Write each section of the press release using the user's discovery answers:
Use a real person for the spokesperson quote (e.g., the marketplace.json owner name), not a made-up name. The spokesperson is a real person at the company speaking about their vision.

Write the LaTeX content into a .tex file in the user's project directory (default: prfaq.tex in the project root, or a path the user specifies). Replace all placeholder text in the template with generated content. Uncomment the \addbibresource line in the preamble and set it to the .bib filename. Set \prfaqversion{1}{0} — initial generation is always v1.0.
When writing any factual claim — market sizes, statistics, customer behaviors, competitor capabilities, framework attributions — use \cite{key} referencing the corresponding .bib entry. If a claim has no source, write [CITATION NEEDED] as a visible marker and flag it for the user during review.
When the press release makes a judgment call — a claim about the market, a design choice, a positioning decision — cross-reference the FAQ that explains the reasoning: (see \faqref{faq:slug}). This renders as a clickable "FAQ 7" link — the number tells the reader exactly which question to find. The corresponding FAQ should unpack the judgment: why we believe this, what evidence supports it, and what the risk is if we're wrong.
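A sketch of how a sourced claim and a judgment-call cross-reference combine in the press release body (the citation key and FAQ slug here are illustrative):

```latex
Teams report losing hours each week to manual status
updates~\cite{smith2024devtools}. We compile to a PDF rather than
stopping at markdown because the artifact is meant to be shared
and decided on, not just read (see \faqref{faq:why-latex}).
```

The claim carries a citation; the positioning decision carries a \faqref pointing at the FAQ that defends it.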
After writing, share each section with the user for review. Ask for corrections before proceeding.
Read ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/common-mistakes.md and check the draft against known anti-patterns. Flag any issues.
Read the FAQ section guide from ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/faq-structure.md. Read the four risks framework from ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/four-risks.md.
Generate two categories of FAQs:

- External FAQs (customer-facing)
- Internal FAQs (business-facing), organized by the four risks: Value, Usability, Feasibility, Viability
Internal FAQs are the evidence-heavy part of the document. Every factual assertion should use \cite{key}.
If a .bib entry doesn't exist for a claim, create one. If no source exists at all, mark [CITATION NEEDED].
Label each FAQ pair that explains a judgment call or provides supporting evidence for a press release claim: \label{faq:slug} immediately after \begin{faqpair}{Question}. Use descriptive slugs: faq:customer-evidence, faq:tam, faq:competitors, faq:why-latex. The press release should already reference these labels via \faqref{faq:slug}.
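Putting the pieces together, a labeled internal FAQ pair (assuming the faqpair environment the template provides) might look like:

```latex
\begin{faqpair}{What evidence shows customers have this problem?}\label{faq:customer-evidence}
  In discovery interviews, most participants described manual
  workarounds~\cite{smith2024devtools}. Willingness to pay is
  untested: [CITATION NEEDED].
\end{faqpair}
```

The label sits immediately after \begin{faqpair}{...} so \faqref{faq:customer-evidence} resolves to this pair's number.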
After the FAQs, fill in the four risks assessment (Value, Usability, Feasibility, Viability) based on everything gathered so far.
After the risk assessment, write the Feature Appendix — a scope boundary that sorts every feature into three categories, the last being Won't Do.
Each feature entry uses the \featureitem{Name}{Rationale} command inside an enumerate environment. Features are numbered continuously (F1, F2, F3...) across all three categories. Add \label{feat:slug} after each \featureitem to make it referenceable. Use \featureref{feat:slug} from other sections (press release, FAQs) to create clickable "Feature 3" links — the same pattern as \faqref for FAQ pairs. The Won't Do rationale should explain why not — is it out of scope, a distraction, technically infeasible, or a deliberate positioning choice?
Example:
\begin{enumerate}[nosep,leftmargin=2.5em]
\featureitem{Discovery workflow}{structured questions that guide the user}\label{feat:discovery}
\featureitem{PDF compilation}{shareable artifact, not a disposable brainstorm}\label{feat:latex}
\end{enumerate}
Append the FAQ, risk assessment, and feature appendix sections to the .tex file. Share with the user for review.
Invoke the peer-reviewer agent to critically evaluate the completed draft. Use the Task tool with subagent_type: "prfaq:peer-reviewer", passing the path to the .tex file.
The peer reviewer reads the draft, loads all reference guides (including the Kahneman decision quality framework), checks available evidence in ./research/ and via web search, and returns structured feedback: critical issues, warnings, strengths, and recommendations. The reviewer reads \prfaqstage{} from the document and calibrates its evidence expectations accordingly — see the Stage Calibration sections in each reference guide.
Resolution loop:
Present the feedback, apply the agreed fixes to the .tex file, and re-run the peer reviewer.

If the assessment is REJECT with critical issues, do not proceed to compilation until the critical issues are resolved or the user explicitly overrides.
Run the compile script to produce the PDF:
bash ${CLAUDE_PLUGIN_ROOT}/scripts/compile_prfaq.sh <path-to-tex-file>
If compilation fails, read the LaTeX log, fix the issue, and recompile.
The compile script reports overfull hbox warnings (content extending beyond page margins). If any are reported, fix them before proceeding — these produce visible layout defects in the PDF. Common fixes:
- Long \texttt{} strings: move to a display line with {\small\texttt{...}}
- Several \texttt{} commands close together: add a natural-language phrase between them

Recompile until zero overfull hbox warnings remain. Report the output PDF path to the user.
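One way to confirm the log is clean after a recompile, assuming the default jobname so the log lands next to the .tex file:

```shell
# Count remaining overfull hbox warnings; 0 means the layout is clean.
# `|| true` keeps the pipeline alive when grep finds no matches.
grep -c 'Overfull \\hbox' prfaq.log || true
```
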
Evaluate the completed PR/FAQ against the review criteria in ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/four-risks.md.
Present the assessment honestly. Identify the weakest sections and suggest specific improvements. Offer to iterate on any section.
- prfaq.tex (or user-specified path) in the project directory
- A compiled PDF alongside the .tex file

/prfaq:import — Import an existing document (markdown, text, PDF) and launch the full /prfaq workflow with extracted content as a head start. Parses the source, maps ideas to discovery inputs, confirms with the user, then runs the same generation pipeline (research, draft, review, compile). The source accelerates the conversation — it does not bypass it.
/prfaq:externalize — Generate an external press release from the PR/FAQ and CHANGELOG for a specific release. Detects whether this is a first release, major update, or minor/patch and adapts tone, structure, and length accordingly. Scopes content to what actually shipped (CHANGELOG entries + Feature Appendix shipped items). Customer quotes are flagged for replacement with real testimonials.
/prfaq:badge — Embed a stage-colored shields.io badge in the project's README. The badge shows "Working Backwards | stage" and links to the compiled PDF. Colors: hypothesis (grey), validated (blue), growth (green). Updates in place when the stage changes.
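Assuming the standard shields.io static-badge URL scheme (label-message-color, with underscores rendered as spaces), the README snippet for a hypothesis-stage project might look like this (the PDF path is a placeholder):

```markdown
[![Working Backwards](https://img.shields.io/badge/Working_Backwards-hypothesis-lightgrey)](./prfaq.pdf)
```

Updating the stage means rewriting the message and color segments of the URL in place.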
After completing a PR/FAQ, the user can stress-test it with these commands:
/prfaq:review — Static peer review. Returns a structured report (PASS/ITERATE/REJECT) with critical issues, warnings, and recommendations. Good for identifying problems.
/prfaq:meeting — Interactive review meeting. Four personas (principal engineer, target customer, skeptical executive, builder-visionary) debate the document's hot spots. The user makes explicit tradeoff decisions at each disagreement. Good for forcing decisions. Output is a decisions log with specific revision directives.
/prfaq:meeting-hive — Autonomous consensus meeting. Same four personas, but they debate and reach consensus without user moderation. Uses one-way/two-way door framework to weight caution vs. action. Only escalates to the user on persistent splits over irreversible decisions. Requires Agent Teams (enabled via .claude/settings.json).
/prfaq:feedback — Directed iteration. Takes a specific feedback directive (from the user or from a meeting's revision queue), traces cascading effects, and surgically redrafts affected sections.
/prfaq:streamline — Scalpel editor. Removes redundancy across sections, eliminates weasel words and hollow adjectives, compresses inflated phrases, and applies the "so what" test. Targets 10–20% length reduction without losing meaning. Best used after iteration is complete, before sharing.
/prfaq:vote — Go/no-go decision. Reads the document's own evidence (risk ratings, FAQs, citations, feature scope) and assesses three gates: (1) customer problem worth solving, (2) differentiated solution, (3) should we do this now. Binary verdict — GO or NO-GO — with evidence trail. Gate 1 is a hard prerequisite. Supports multi-document portfolio comparison.
/prfaq:feedback-to-us — Anonymous satisfaction feedback. Quick 1-5 rating with optional comment to help improve the plugin.
Typical flow: /prfaq (or /prfaq:import) → /prfaq:badge → /prfaq:review → /prfaq:meeting (or :meeting-hive) → /prfaq:feedback (repeat) → /prfaq:streamline (when ready to share) → /prfaq:vote (go/no-go decision) → /prfaq:externalize (when ready to announce).
Detailed guidance for each phase is in the reference files:
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/pr-structure.md — Section-by-section press release guide
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/faq-structure.md — FAQ section guide (external + internal)
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/four-risks.md — Cagan four risks framework, review criteria, decision outcomes
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/common-mistakes.md — Anti-patterns and failure modes
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/decision-quality.md — Kahneman decision quality checklist for peer review
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/meeting-guide.md — Meeting orchestration: personas, debate synthesis, decision flow
- ${CLAUDE_PLUGIN_ROOT}/skills/prfaq/references/precise-writing.md — Precise writing rules for streamlining: redundancy, weasel words, "so what" test