From strategy-consultant
Synthesize research findings into a coherent storyline that answers the client question. Use when someone asks to "synthesize these findings", "build the storyline", "what's the narrative", "put this together into a story", "structure the argument", "what should we tell the client", or when the analytical workflow moves from sense-checked research to constructing the client-facing argument. Also trigger when someone has a collection of findings and needs help turning them into a logical, persuasive narrative.
`npx claudepluginhub chipalexandru/strategy-consultant`

This skill uses the workspace's default tool permissions.
Synthesis is not summarizing. Summarizing says "here is what the data shows." Synthesis says "here is what it means for the client's decision — and here is what they should do."
Take the validated, sense-checked research findings and organize them into the storyline that most directly and persuasively answers the client question.
Do NOT force findings into a predetermined framework. The storyline structure should emerge from the logic of the answer itself; different client questions call for different structures.
Choose the structure that makes the argument clearest for THIS specific question. If the natural logic of the answer suggests a structure, use it. If not, default to: lead with the answer, support with evidence, address the counter-argument.
Before writing a single word, re-read the Precision Anchor from the problem-definition phase. Copy it to the top of your working notes. Then work through the checks that follow:
Altitude check on the governing message: Place the Precision Anchor success metric next to the governing message. Does the answer operate at the same level of specificity? If the success metric says "specific screen counts by store size" and the governing message says "retailers are deploying thousands of screens," the altitude is wrong. Rewrite the governing message to either (a) answer at the right altitude if the evidence supports it, or (b) explicitly state the altitude gap: "Public data provides network-level benchmarks but not per-store deployment specs by store size; expert interviews are needed to build DFI's archetypes."
A governing message that operates at the wrong altitude will produce a report that sounds authoritative but doesn't actually help the client make their decision.
Re-read the Deliverable Blueprint from the Precision Anchor. If the blueprint describes a document where the value is coverage and completeness, the governing message should frame what the coverage reveals, not force a single-action recommendation. The storyline structure should take the blueprint's structure as its starting point.
Write the answer to the client question in one sentence. This is the governing message of the entire storyline. If you cannot write it in one sentence, the thinking is not sharp enough yet.
Test the answer:
Distill the evidence into the 2-4 arguments that most powerfully support the answer. Each argument should:
Prioritize against the Deliverable Blueprint. Findings that fill a dimension the client needs belong in the main storyline even if they don't individually drive a single recommendation. Findings that don't serve the blueprint's structure go to the Value-Add Review (Step 2.5) before being sent to the appendix — some may deserve their own section.
The precision architecture (Steps 0-2) ensures the client's question gets answered. This step ensures the deliverable doesn't stop there.
Review the full research base — analyst memos, validated findings, expert interview extractions — for insights that fall outside the Precision Anchor but would materially elevate the deliverable.
The test for inclusion: Would the client, after reading the deliverable, say "I'm glad they included this even though I didn't ask for it"? If yes, it belongs in the report as a supplementary section, not in the appendix.
What to look for: Do not prescribe specific categories — the research itself reveals what is valuable. Scan the full research base (including [ADJACENT] findings from the validator, expert content that didn't map to a sub-question, and deprioritized research threads) and ask: "Is there anything here that a knowledgeable practitioner would consider important context, even though the client didn't specifically request it?"
How to include it: Value-add sections sit after the core arguments but before the counter-arguments. Label them to signal supplementary value (e.g., "Practical Considerations," "Implementation Context," "Additional Capabilities"). Each section earns its place by being directly useful, not by being comprehensive.
What still goes to the appendix: Findings that are intellectually interesting but not practically useful. The distinction is client utility.
For each argument, organize the supporting evidence:
Insight-before-data rule: Each argument/section must open with the KEY CONCEPTUAL INSIGHT — the "why" that explains the data pattern — before presenting structured data (tables, benchmarks, scenarios).
When an expert provided a conceptual frame for the topic, that frame should be the section opener: a 1-2 sentence statement of the principle, with attribution, that tells the reader WHY the data looks the way it does. The structured data then follows as evidence supporting the frame.
This applies regardless of the domain.
A section that opens with data tells the reader WHAT. A section that opens with the conceptual insight tells them WHY — which is what they need to make decisions. Data without a frame is a spreadsheet; data with a frame is an argument.
- Lead with the strongest, most credible data point
- Layer in corroborating evidence from different sources
- Acknowledge limitations or caveats honestly (this builds credibility)
- Include the "compared to what?" benchmark for every number
For every key data point, enforce the "Data → So What → Now What" pattern: state the data point, what it means for the client's decision, and what the client should do about it.
Use source-type-aware confidence language: state expert-confirmed data directly; hedge public research with "according to [source]"; explicitly label consultant-constructed assumptions as "for illustrative purposes".
Expert-anchor principle for quantitative claims: When an expert provides a conditional quantitative claim (a specific number with stated conditions), use the expert's figure as the HEADLINE ANCHOR. Present plugin-generated scenarios, sensitivity ranges, or modeled estimates as CONTEXT AROUND the expert's number — not as a replacement for it.
Structure: "[Expert's number] is achievable when [expert's stated conditions] [Expert, CS-2]. Sensitivity analysis: under [alternative assumptions], the range extends to [modeled range]."
The expert's conditional number is a CS-2 data point from a practitioner with direct experience. The plugin's scenario model is constructed from assumptions that may not themselves be sourced. A constructed number should never displace a sourced number in the headline — it should contextualize it.
This applies to any quantitative claim where both an expert figure and a plugin-modeled figure exist: payback periods, market sizes, growth rates, cost estimates, adoption timelines, margin targets, etc.
When the expert's conditions are unlikely to be met in the client's specific context, the report should say so explicitly — "Expert estimates [X] under [conditions]; the client's situation differs in [specific ways], suggesting [adjusted range]" — rather than silently substituting the adjusted range as the headline.
Source material cross-reference: When a finding supports or challenges content from a client-provided document, cite the structural coordinate from the Source Material Extraction Log (e.g., "this contradicts the positioning on DFI Slide 16" or "this validates the approach shown in Kroger deck, p20-21"). The deliverable should make visible where the analysis confirms, extends, or challenges what the client already has. A client who provided a 30-slide deck expects the analysis to engage with specific slides, not just the themes.
The storyline must pre-empt objections from THREE sources:
For each counter-argument, show that:
A storyline that addresses only the most obvious objection while ignoring client-raised concerns or operational risks will lose credibility.
Write the sequence of headlines that would appear on each page of a client document. Read them in order — they should form a coherent, logical argument without any supporting text.
Test the headline sequence:
For each headline, note which specific research findings support it. This creates traceability from the final storyline back to the evidence base, and reveals any headlines that lack adequate support.
After mapping all evidence, review the storyline as an integrated argument:
Review the storyline for any illustrative calculations, projections, or scenario models. For each:
## Governing Message
[One sentence: the answer to the client question]
## Storyline Structure
[Brief explanation of why this structure fits this question]
## The Argument
### [Headline 1]
Supporting evidence: [specific findings with sources]
Key data point: [the anchor number or fact]
### [Headline 2]
Supporting evidence: [specific findings with sources]
Key data point: [the anchor number or fact]
### [Headline 3]
Supporting evidence: [specific findings with sources]
Key data point: [the anchor number or fact]
### Addressing the Counter-Argument
[The objection, why the evidence still supports the recommendation, and the mitigation plan]
### Value-Add Sections (if any)
[Insights from the research base that go beyond the Precision Anchor but materially elevate the deliverable. For each: the insight, why it's valuable, and supporting evidence.]
## Headline Sequence (read-through test)
1. [Headline 1]
2. [Headline 2]
3. [Headline 3]
4. ...
## Precision Anchor Alignment
[Copy the Precision Anchor here. Then state: "The governing message answers the Precision Anchor question [DIRECTLY / PARTIALLY / WITH QUALIFICATION]." If partially or with qualification, explain the gap and what would be needed to close it.]
## Evidence Map
| Headline | Key Evidence | Source | CS Score |
|----------|-------------|--------|----------|
## Appendix Candidates
[Findings that went through the Value-Add Review but did not qualify for inclusion; available if the client asks]
Present the synthesis to the user. This is the blueprint that the client-report skill will turn into a polished deliverable.