Run a full strategy consulting analytical engagement from problem definition through to a client-ready report. Use when someone asks to "run a full analysis", "do a consulting engagement on", "analyze this end to end", "help me with a client engagement", "do the full workflow", or presents a business problem that needs the complete Define → Research → Sense-Check → Synthesize → Deliver treatment. This is the orchestrator skill — it coordinates all other skills in the plugin to produce a comprehensive deliverable.
npx claudepluginhub chipalexandru/strategy-consultant

This skill uses the workspace's default tool permissions.
Orchestrate a complete consulting-grade analytical engagement. This skill coordinates the other skills in sequence, producing an executive-grade client report from a starting business problem.
Delivery only. Produce results directly. Flag weak logic inline rather than asking Socratic questions. The user wants outputs, not coaching.
Invoke the problem-definition skill. Work with the user to sharpen their business question into a decision-oriented problem statement with clear scope and boundaries.
Do not proceed until the user confirms the problem statement.
Assess whether the problem benefits from a formal hypothesis tree; if it does, invoke the hypothesis-tree skill.
If used, present the hypothesis tree to the user and get confirmation on research priorities before proceeding.
Before launching research, ask the user about available data sources using the AskUserQuestion tool. This step determines the research tier mix and shapes the entire research strategy.
The following three questions are REQUIRED and must each appear as a separate, clearly worded question in the AskUserQuestion call. You may add additional context-specific questions (e.g., geography focus, deliverable format), but these three must not be omitted, merged, or rephrased beyond recognition. The AskUserQuestion tool accepts a maximum of 4 questions per call — if you need more than one additional question beyond the three required, use a second call.
Internal/client data (REQUIRED): Based on the problem statement, suggest 2-3 specific types of internal data that would be most valuable for THIS question and ask whether they are available. Examples: internal strategy documents, store layouts, financial models, operational metrics, customer data. Tailor the examples to the specific engagement — do not use generic placeholders.
Expert interviews (REQUIRED): Ask explicitly: "Do you have access to expert interviews — either existing transcripts/notes from industry practitioners, or planned interviews we should prepare guides for?" This question must be asked separately from the internal data question because expert data follows a different processing path (Phase 3.5, expert-interview skill). Do NOT merge this with the internal data question.
Other external reference material (REQUIRED): Ask: "Do you have any other reference material that should inform this research — for example, an industry report, market study, competitor analysis, or third-party dataset?" This captures data sources that are neither internal client data nor expert interviews but still provide valuable context beyond what public web research can find.
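The three required questions can be sketched as a single call payload, shown here as a Python list. The field names (question, header, options, multiSelect) follow the AskUserQuestion tool's general shape but should be treated as assumptions, and the option labels are purely illustrative; tailor the internal-data examples to the actual engagement.

```python
# Sketch of the three REQUIRED Data Source Inquiry questions as an
# AskUserQuestion payload. Field names and option labels are illustrative.
data_source_questions = [
    {
        "question": ("Which internal/client data are available for this "
                     "engagement, e.g. store layouts, financial models, "
                     "operational metrics? (Tailor examples to the problem.)"),
        "header": "Internal data",
        "options": [
            {"label": "Available", "description": "User can share internal/client data"},
            {"label": "Not available", "description": "Rely on public research"},
        ],
        "multiSelect": False,
    },
    {
        "question": ("Do you have access to expert interviews, either existing "
                     "transcripts/notes from industry practitioners, or planned "
                     "interviews we should prepare guides for?"),
        "header": "Expert input",
        "options": [
            {"label": "Yes", "description": "Activates Phase 3.5 (expert-interview skill)"},
            {"label": "No", "description": "Skip Phase 3.5"},
        ],
        "multiSelect": False,
    },
    {
        "question": ("Do you have any other reference material that should inform "
                     "this research, for example an industry report, market study, "
                     "competitor analysis, or third-party dataset?"),
        "header": "Other material",
        "options": [
            {"label": "Yes", "description": "Feeds the research brief as supplementary context"},
            {"label": "No", "description": "No supplementary material"},
        ],
        "multiSelect": False,
    },
]

# The tool accepts at most 4 questions per call, so at most one
# context-specific question can ride along with the three required ones.
assert len(data_source_questions) <= 4
```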
Why these three questions matter: Each question covers a distinct data tier that follows a different processing path. Internal data (Q1) is analyzed alongside public research in Phase 3. Expert interviews (Q2) activate Phase 3.5 and the expert-interview skill for claim extraction and CS scoring. Other reference material (Q3) feeds into the research brief as supplementary context for the analysts. Together, the answers determine whether the engagement runs as Scenario A (public only), B (internal data only), C (internal + public), D (public + expert), or E (all sources).
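The scenario mapping can be sketched as a small function. This is a sketch under one assumption: "other reference material" supplements any scenario rather than defining one, so only the public, internal, and expert tiers drive the letter.

```python
def engagement_scenario(public: bool, internal: bool, expert: bool) -> str:
    """Map available data tiers to the engagement scenario (A-E).

    Sketch only: scenario E ("all sources") is triggered here by
    public + internal + expert together.
    """
    if public and internal and expert:
        return "E"  # all sources
    if public and expert:
        return "D"  # public + expert
    if public and internal:
        return "C"  # internal + public
    if internal:
        return "B"  # internal data only
    if public:
        return "A"  # public only
    raise ValueError("at least one data tier must be available")
```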
Record all answers and carry them into the research brief.
After the Data Source Inquiry, use the AskUserQuestion tool to confirm the deliverable format. This question must be asked BEFORE research begins so the synthesis and delivery phases know what to produce.
Important: The format choice does NOT affect research quality or depth. Research is always driven by the Precision Anchor and Coverage Dimensions — the agents investigate the question thoroughly regardless of output format. The format choice only determines how findings are presented in the final deliverable.
Question: "The default deliverable is a concise executive Word document (4-6 pages, bullet-driven, with sourced recommendations and Research Notes). Would you prefer a different format?"
Options: Executive brief (Word, 4-6 pages, the default), Comprehensive report (Word), Excel workbook (.xlsx), or PowerPoint deck (.pptx).
If the user requests a format not listed above, confirm that you can produce it and adapt the delivery phase accordingly.
Carry the format choice forward to the synthesis phase and the client-report/delivery phase. The Deliverable Blueprint from problem-definition should be updated to reflect the confirmed format.
Include the Deliverable Blueprint from the Precision Anchor in the brief/input for this phase.
Invoke the research skill. This dispatches three agents: two analysts who research in parallel and a separate validator who checks their output.
The research brief must include the data availability summary from Phase 2.5. If internal data was provided, analyze it alongside public research. If the user indicated expert interviews are planned, identify the key uncertainties where expert input would be most valuable.
After the validator completes, the research skill will present an executive summary and a Deep Research assessment identifying under-explored sub-dimensions. The user can choose to dispatch a third "deep dive" agent targeting those specific gaps, or proceed with the existing research base.
If the user indicated during the Data Source Inquiry that expert interviews would be available, invoke the expert-interview skill. This phase comes AFTER public research because expert input is most valuable once public research has established the baseline and exposed the key uncertainties.
Follow the sub-steps prescribed by the expert-interview skill.
If expert interviews reveal significant conflicts with public research or surface entirely new information, consider a targeted follow-up research pass before proceeding to sense-check.
Invoke the sense-check skill. This is NOT optional and cannot be replaced with an informal assessment. The sense-check skill prescribes a 7-step process that must produce a written Sense-Check Report containing, at minimum: the claim inventory, triangulation, the steel-man of the counter-argument, the math checks, and the precision verdict.
A DRIFTED verdict is a hard stop — loop back to Phase 3 for targeted research. A PARTIAL verdict must be carried forward into the synthesis as an explicit qualification.
Present the sense-check report to the user (Checkpoint 3).
COMMON FAILURE MODE: The agent reads the sense-check skill but decides the research is "strong enough" and skips producing the actual report. This defeats the purpose. The discipline of writing each section forces rigor that mental shortcuts do not provide. Always produce the written report.
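The verdict handling above reduces to a small dispatch. The name of the passing verdict is not specified in this section, so in this sketch anything other than DRIFTED or PARTIAL is treated as a pass:

```python
def handle_sense_check_verdict(verdict: str) -> str:
    """Return the next action for a sense-check precision verdict (sketch)."""
    if verdict == "DRIFTED":
        # Hard stop: do not continue to synthesis.
        return "loop back to Phase 3 for targeted research"
    if verdict == "PARTIAL":
        # Proceed, but the qualification must appear in the synthesis.
        return "proceed, carrying an explicit qualification into synthesis"
    # Any other verdict is treated as passing in this sketch.
    return "proceed to synthesis"
```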
Include the Deliverable Blueprint from the Precision Anchor in the brief/input for this phase.
Invoke the synthesis skill. This step transforms the evidence into an argument. It is NOT the same as organizing findings by topic — it answers the question "so what should the client do?"
The synthesis skill must produce a written Storyline Document containing, at minimum: the governing message, the headline sequence, and the evidence map.
Present the storyline to the user (Checkpoint 4).
COMMON FAILURE MODE: The agent skips synthesis and goes directly from research to the report, organizing findings by topic rather than by argument. The result is a summary, not a synthesis. The test: if you remove all the evidence and read only the headlines, does a decision-oriented argument emerge? If not, the synthesis step was skipped.
Include the Deliverable Blueprint from the Precision Anchor in the brief/input for this phase, along with the confirmed deliverable format from Phase 2.7.
Format-specific delivery:
Executive brief or Comprehensive report (.docx): Invoke the client-report skill. Do NOT write the document directly using the docx skill alone — the client-report skill contains writing standards, banned words, structure requirements, and a mandatory quality review process. For executive briefs, enforce a strict 4-6 page limit (see client-report skill, Executive Brief Mode).
Excel (.xlsx): Use the xlsx skill. Let the Deliverable Blueprint and the user's stated needs guide the structure. Do not over-engineer — apply the default xlsx skill and ask the user for any structural preferences (e.g., how to organize rows/columns) if not already clear from the Deliverable Blueprint.
PowerPoint (.pptx): Use the pptx skill. Let the synthesis storyline guide the slide structure. Do not over-engineer — apply the default pptx skill and ask the user for any structural preferences if not already clear.
Research Notes / source traceability requirement: Regardless of format, source traceability must be included — adapted to the format. For Word documents: the Research Notes section as specified in the client-report skill. For Excel: a Sources sheet listing every reference cited in the matrix cells. For PowerPoint: a Sources appendix slide. The traceability requirement is non-negotiable; the specific format adapts.
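The per-format traceability rule can be captured as a lookup; the file-extension keys are assumptions, and an unrecognized format must fail loudly rather than silently skip traceability:

```python
# Traceability artifact required per deliverable format.
TRACEABILITY_ARTIFACT = {
    ".docx": "Research Notes section (as specified in the client-report skill)",
    ".xlsx": "Sources sheet listing every reference cited in the matrix cells",
    ".pptx": "Sources appendix slide",
}

def traceability_for(extension: str) -> str:
    """Return the required traceability artifact, or raise if none is defined."""
    artifact = TRACEABILITY_ARTIFACT.get(extension)
    if artifact is None:
        raise ValueError(
            f"no traceability mapping for {extension!r}; "
            "define an equivalent artifact before delivery"
        )
    return artifact
```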
The report MUST include the Research Notes section, the counter-argument section, and the other elements required by the client-report skill.
The client-report quality review MUST include all mandatory checks (5a through 5l).
COMMON FAILURE MODE: The agent bypasses the client-report skill and writes the document directly, producing a well-formatted document that lacks the Research Notes section, the counter-argument section, and the quality review. The client-report skill is not just about formatting — it enforces analytical rigor in the final deliverable.
The agent that wrote the report cannot objectively audit it. Dispatch a separate validation agent to read the generated .docx alongside the upstream artifacts and produce a gap report.
Why this exists: The client-report skill specifies mandatory quality checks (5a–5l), but when the same agent that wrote the report also runs the checks, it has sunk-cost bias toward what it already produced. The research phase already solves this problem — two analysts write, a separate validator checks. This phase extends that pattern to delivery.
Process:
Dispatch the research-validator agent (or a general-purpose agent) with the following inputs: the generated .docx (extracted to text) and the upstream artifacts (including research-validated.md), specifically the Research Notes / Source Registry section.

The validator agent must check source-count parity: count the numbered entries [N] in the research-validated file's Source Registry section, and count the entries matching [N] in the extracted .docx text. Report both numbers explicitly: "Source Registry: {X} entries. Report Research Notes: {Y} entries." If X ≠ Y, the report FAILS — do not evaluate other checks. The most common failure is a prose summary replacing the numbered list; if the .docx Research Notes section contains zero [N] entries, flag immediately.

The validator produces a Deliverable Gap Report with a PASS/FAIL verdict.
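The parity check can be sketched with a regex count. The pattern assumes each entry starts a line with its bracketed number (e.g. "[12] Source..."), which may need adjusting to the actual Research Notes layout:

```python
import re

def count_numbered_entries(text: str) -> int:
    """Count lines that begin with a bracketed entry number like "[12]"."""
    return len(re.findall(r"^\[\d+\]", text, flags=re.MULTILINE))

def parity_line(registry_text: str, report_text: str) -> str:
    """Report both counts explicitly and a PASS/FAIL verdict (sketch)."""
    x = count_numbered_entries(registry_text)
    y = count_numbered_entries(report_text)
    # Zero entries in the report means a prose summary replaced the list: FAIL.
    verdict = "PASS" if x == y and x > 0 else "FAIL"
    return f"Source Registry: {x} entries. Report Research Notes: {y} entries. {verdict}"
```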
If FAIL: fix the gaps in the document before presenting to the user. Then re-run the validator to confirm.
COMMON FAILURE MODE: The agent decides the report "looks good enough" and skips this phase. This is the phase that would have caught the most common delivery failure — omitted Research Notes, missing sources, dropped client questions. It adds one agent call. Do not skip it.
Present the final validated document to the user.
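The fix-and-re-run loop can be sketched as follows; run_validator and fix_gaps are hypothetical callables standing in for the actual agent dispatches:

```python
def validate_until_pass(run_validator, fix_gaps, max_rounds: int = 3) -> dict:
    """Re-run the deliverable validator after each round of fixes (sketch).

    run_validator() -> gap report dict with a "verdict" key ("PASS"/"FAIL");
    fix_gaps(report) applies fixes for the reported gaps.
    """
    report = run_validator()
    for _ in range(max_rounds):
        if report["verdict"] == "PASS":
            return report
        fix_gaps(report)
        report = run_validator()  # re-run to confirm the fixes
    raise RuntimeError("deliverable still failing validation; escalate to the user")
```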
The following checkpoints require presenting output to the user and waiting for their response before proceeding. These are not optional efficiency trade-offs — they are quality gates that catch errors, incorporate user knowledge, and prevent wasted work.
CHECKPOINT 1 — Problem Statement (Phase 1) Present the problem statement, Precision Anchor (including the Deliverable Blueprint), AND the Client Question Checklist. Wait for user confirmation. DO NOT proceed to research until the user confirms.
CHECKPOINT 2 — Research Package (Phase 3) After the validator completes, present the validated research package to the user. Include: key findings summary, evidence strength assessment, and information gaps. DO NOT proceed to sense-check until the user responds.
CHECKPOINT 3 — Sense-Check Report (Phase 4) Produce the full sense-check report (claim inventory, triangulation, steel-man, math checks, precision verdict). Present to the user. DO NOT proceed to synthesis until the user has seen the sense-check results.
CHECKPOINT 4 — Storyline (Phase 5) Present the synthesis output (governing message, headline sequence, evidence map) to the user. Ask: "Does this storyline answer your question? Any adjustments before I write the report?" DO NOT proceed to the report until the user approves the storyline.
Be explicit about evidence quality. Never present a weak finding as a strong one. Use confidence levels and ranges.
Be explicit about information boundaries. Clearly flag what public research can answer vs. what requires client data or expert interviews. Recommend specific data requests and expert profiles where gaps exist.
Let the user drive pace. Present output at each phase and wait for confirmation before proceeding. The user may want to inject additional context, redirect the analysis, or skip phases.
Never pad. Every sentence in every output must earn its place against the Deliverable Blueprint. What counts as earning its place depends on what the client needs: driving a recommendation, filling a coverage dimension, providing a reference example the client will return to.
problem-definition → [hypothesis-tree] (optional) → data-source-inquiry (AskUserQuestion) → deliverable-format (AskUserQuestion) → research (3 agents) → [deep-research] (user-initiated) → [expert-interview] (if interviews) → sense-check → synthesis → deliver (format-aware) → deliverable-validation (separate agent)
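The phase sequence above can be sketched as a plan builder, with boolean flags mirroring the optional phases:

```python
def build_pipeline(hypothesis_tree: bool, deep_research: bool, expert_interviews: bool):
    """Assemble the engagement's phase sequence, including optional phases (sketch)."""
    phases = ["problem-definition"]
    if hypothesis_tree:
        phases.append("hypothesis-tree")       # optional
    phases += ["data-source-inquiry",          # AskUserQuestion
               "deliverable-format",           # AskUserQuestion
               "research"]                     # 3 agents
    if deep_research:
        phases.append("deep-research")         # user-initiated
    if expert_interviews:
        phases.append("expert-interview")      # if interviews available
    phases += ["sense-check", "synthesis",
               "deliver",                      # format-aware
               "deliverable-validation"]       # separate agent
    return phases
```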