Produces concise, decision-ready executive summaries from report data. Activates when the user wants a TLDR, management summary, key takeaways, or top-level overview — including 'give me the highlights' or 'what are the main findings?' Covers metric highlighting, recommendation prioritization, and risk/opportunity framing.
From founder-os. Install: `npx claudepluginhub thecloudtips/founder-os --plugin founder-os`. This skill uses the workspace's default tool permissions.
Produce a concise, self-contained executive summary for every generated report. Target the summary at busy decision-makers who may read only this section and never open the full report body. Distill the entire report into a maximum of 400 words that answer three questions: what was analyzed, what was found, and what to do about it. Every claim must be backed by a specific number or metric drawn from the report body. Never include a finding in the summary that does not appear in the full report.
Follow this five-part structure for every executive summary. Maintain the order strictly -- readers expect a predictable flow from context through action.
Open with a Situation Statement: what was analyzed and why. State the report's scope, time period, and triggering event. Avoid vague openers like "This report covers various metrics." Always name the specific subject, time window, and motivation.
Example: "This report analyzes Q3 2025 sales performance across all four regional teams, triggered by the 15% revenue shortfall flagged in the September flash report."
Present Key Findings: the most important discoveries ranked by business impact, placing the single most important finding first. Each bullet must contain at least one specific number or metric. Use parallel grammatical structure across all bullets.
Format each bullet as: [Topic]: [Finding with metric] ([comparison context])
Example: "Churn: 3.4% in Q3 (above the 2.5% threshold for 2 consecutive months)"
Limit to 5 bullets maximum. When the report contains more findings, select the 5 with the greatest business impact.
Follow with Critical Insights that answer the "so what?" question. Explain what the findings mean in aggregate -- the narrative connecting data points into a coherent story. State the implication plainly without hedging. Avoid restating findings; synthesize across them to surface the underlying pattern or root cause.
Example: "The combination of rising churn and strong acquisition suggests the product attracts new users but fails to retain them past the 90-day mark, pointing to an onboarding gap."
Provide Recommendations: concrete, prioritized actions flowing logically from findings and insights. Each recommendation must include: what to do, expected impact, and suggested timeline. Use the prioritization framework defined below.
Format: [Priority level] - [Action]: [Expected impact]. Timeline: [Suggested timeframe].
Example: "P1 - Launch retention campaigns for 60-90 day accounts: protect the $1.2M in annual revenue at risk from churn. Timeline: 1-4 weeks."
Close with Immediate Actions only when steps are required before the next reporting cycle. List 1-3 time-sensitive items with owners and deadlines. Omit the section entirely if all actions are captured in Recommendations.
Enforce a hard cap of 400 words for the entire executive summary. When the draft exceeds 400 words, cut Key Findings first (reduce to 3 bullets), then trim Recommendations (reduce to 3), then tighten the Situation Statement. Never cut Critical Insights.
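The trimming order above can be sketched as a simple procedure. This is an illustrative sketch, not part of the skill: the section names and the dict-of-lines representation are assumptions.

```python
def enforce_word_cap(summary, cap=400):
    """Trim a draft summary to the word cap in the prescribed order:
    Key Findings first, then Recommendations, then the Situation
    Statement. Critical Insights are never cut.

    `summary` is assumed to map section name -> list of lines.
    """
    def word_count(s):
        return sum(len(line.split()) for lines in s.values() for line in lines)

    # Step 1: reduce Key Findings to 3 bullets.
    if word_count(summary) > cap:
        summary["key_findings"] = summary["key_findings"][:3]
    # Step 2: reduce Recommendations to 3 items.
    if word_count(summary) > cap:
        summary["recommendations"] = summary["recommendations"][:3]
    # Step 3: tighten the Situation Statement (here: keep only its first sentence).
    if word_count(summary) > cap:
        first = summary["situation"][0].split(". ")[0]
        summary["situation"] = [first if first.endswith(".") else first + "."]
    return summary
```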
Place the single most important finding first under Key Findings. Select the one with the largest financial impact or the one most likely to change a decision.
Every Key Findings bullet must contain a specific number, percentage, dollar amount, or measurable metric. Never write qualitative-only bullets like "Sales performance was mixed." Rewrite as "Sales declined 7% to $2.8M."
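The metric requirement can be checked mechanically. The regex below is an illustrative sketch, not an exhaustive test (it accepts any digit, including ones like "Q3"):

```python
import re

# Matches a percentage, dollar amount, or bare number, with optional
# thousands separators, decimals, and magnitude suffixes like M/K/B.
METRIC = re.compile(r"\$?\d[\d,]*(\.\d+)?\s*[%MKB]?")

def has_metric(bullet: str) -> bool:
    """Return True if the bullet contains at least one specific number."""
    return bool(METRIC.search(bullet))
```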
Follow "Situation, Finding, Implication, Recommendation" (SFIR) flow throughout. Each section builds on the previous one. Findings connect to Situation, Insights reference Findings, Recommendations address Insights.
Wrap key numbers in bold: "Revenue grew **23%** to **$4.2M**." Bold both the percentage change and the absolute value when both appear.
Always show direction of change with explicit language -- "up", "down", "grew", "declined" -- rather than relying on signs alone.
Never present a metric in isolation. Always provide at least one comparison: prior period, plan or target, or benchmark. Lead with the most relevant comparison; include a secondary in parentheses when available.
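A metric line with primary and secondary comparisons might be assembled like this (hypothetical helper, shown only to make the format concrete):

```python
def format_metric(topic, value, primary, secondary=None):
    """Render '[Topic]: [value] (vs. primary; secondary)' per the comparison rule."""
    context = f"vs. {primary}" + (f"; {secondary}" if secondary else "")
    return f"{topic}: {value} ({context})"
```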
Quick wins place first: they leverage existing tools, require no additional headcount, and deliver results within one reporting cycle. Label "P1." Timeline: 1-4 weeks. Include an expected impact estimate.
Strategic initiatives place second: they require cross-team collaboration, may need budget approval, and involve process or system changes. Label "P2." Timeline: 4-12 weeks.
Incremental improvements place third: worth capturing but not urgent. Batch together when possible. Label "P3."
Do not include high-effort, low-impact items as formal recommendations. Mention them only when the report data specifically highlights them, and state explicitly: "Not recommended at this time due to unfavorable effort-to-impact ratio."
Present negative findings as risks paired with mitigation strategies. Structure as: "[Finding] creates a risk of [consequence]. Mitigate by [action]."
Example: "The 3.4% churn rate creates a risk of $1.2M annual revenue loss. Mitigate by launching retention campaigns for 60-90 day accounts."
Present positive findings as opportunities paired with capture strategies. Structure as: "[Finding] creates an opportunity to [outcome]. Capture by [action]."
Apply traffic light status to each major finding for at-a-glance assessment: [GREEN] on or better than target, [YELLOW] slightly off target or trending in the wrong direction, [RED] materially off target and requiring action.
Apply inline: "Churn: 3.4% [RED] -- above 2.5% threshold for 2 consecutive months."
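Status assignment can be sketched as a tolerance-band rule. The 10% band and the directionality flag below are assumptions for illustration; real thresholds are report-specific, as in the churn example above.

```python
def traffic_light(value, target, tolerance=0.10, higher_is_better=True):
    """Assign GREEN/YELLOW/RED to a metric against its target.

    Assumed rule: on the right side of target is GREEN, within
    `tolerance` (fractional) on the wrong side is YELLOW, beyond
    the band is RED.
    """
    gap = (value - target) / target
    if not higher_is_better:
        gap = -gap          # flip sign for metrics where lower is better
    if gap >= 0:
        return "GREEN"
    if gap >= -tolerance:
        return "YELLOW"
    return "RED"
```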
Run every executive summary through this checklist before finalizing. Every item must pass: the summary is 400 words or fewer; the five sections appear in order; every Key Findings bullet contains a specific metric with at least one comparison; the single most important finding is first; Critical Insights synthesize rather than restate; every recommendation carries a priority label, expected impact, and timeline; and every claim in the summary appears in the full report.
When findings point in opposite directions (e.g., revenue growing but margins shrinking), do not paper over the contradiction. Present both explicitly and dedicate Critical Insights to explaining the tension: "Revenue grew 18% but gross margin declined 4 points, suggesting growth is being purchased through discounting." Never average contradictory metrics into a single "neutral" assessment.
When data is insufficient for confident recommendations, state so explicitly. Downgrade affected recommendations to P3 with qualifier: "Preliminary recommendation pending additional data." Include a data-gathering action as a P1: "Collect 2 additional months of cohort data before committing to retention program design."
When the report contains no quantitative data, adapt Key Findings to use evidence-count bullets: "4 of 6 stakeholders cited onboarding friction as the primary pain point." Quantify qualitative data wherever possible -- convert themes to frequency counts, rank by mention count. Note in the Situation Statement that findings represent themes rather than statistical measurements.
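Converting themes to frequency counts, as described, can be as simple as tallying mentions. The interview data below is hypothetical, chosen to reproduce the example bullet:

```python
from collections import Counter

# Each stakeholder interview yields a list of cited pain-point themes.
interviews = [
    ["onboarding friction", "pricing"],
    ["onboarding friction"],
    ["reporting gaps", "onboarding friction"],
    ["onboarding friction", "pricing"],
    ["reporting gaps"],
    ["pricing"],
]

# Tally mentions across interviews, then rank by count.
counts = Counter(theme for themes in interviews for theme in themes)
top_theme, mentions = counts.most_common(1)[0]
bullet = f"{mentions} of {len(interviews)} stakeholders cited {top_theme} as a pain point."
```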
When the report focuses on a single KPI, collapse Key Findings into a breakdown of that metric: present 3-5 facets (time periods, segments, cohorts, contributing factors) instead of separate findings. Maintain the full Recommendations structure -- multiple actions can still influence a single outcome.