32-namespace AI automation ecosystem for SMB founders. Email triage, meeting prep, report generation, CRM sync, and 28 more tools — all powered by Claude Code.
npx claudepluginhub thecloudtips/founder-os --plugin founder-os

Extract action items from a file and create Notion tasks
Extract action items from pasted text and create Notion tasks
Generate a structured daily briefing from calendar, email, tasks, and Slack data
Review and update today's daily briefing with new information received since it was generated
Generate a concise 1-page client brief for meeting preparation
Load complete client context from all connected sources into a unified dossier
Research multiple competitors and build a structured comparison matrix
Research a competitor via web search and produce a structured competitive intelligence report
Analyze a contract to extract key terms, detect risks, and produce a structured report
Analyze a contract and compare its terms against standard freelancer/agency benchmarks
Load CRM context for a client: company profile, contacts, recent activities, and deals
Sync email threads to CRM Pro Communications database with client matching and AI summaries
Sync calendar meetings to CRM Pro Communications database with client matching and AI summaries
Answer questions from Google Drive documents with citations
Suggest folder structure and organizational improvements (recommend-only)
Search Google Drive for files with preview snippets
Generate summary of a Google Drive document
Generate a comprehensive expense report aggregating P11 Invoice Processor data and local receipts for any date range
Quick expense summary for a date range — totals, category breakdown, top vendors, tax deductible amount
Scan Gmail sent folder for emails awaiting response and track follow-ups
Draft a follow-up nudge email and create a Gmail draft
Create Google Calendar reminders for pending follow-ups
Quick status check for a single goal or all goals (read-only, no Notion writes)
Close or archive a goal with completion summary
Create a new goal with optional milestones and deadline
Generate a goal progress dashboard with RAG status, blockers, and Gantt timeline
Update goal progress, complete milestones, add notes, or change status
Generate a detailed health report for a single client with metric breakdown and recommended actions
Scan all CRM clients and compute health scores with RAG dashboard
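The health-score commands above imply a mapping from per-client metrics to a numeric score and then to a RAG (Red/Amber/Green) status. A minimal sketch, assuming hypothetical metric names, weights, and thresholds; the plugin's actual scoring model may differ:

```python
# Hypothetical client health scoring — metric names, weights, and RAG
# thresholds are illustrative assumptions, not the plugin's actual model.
def health_score(metrics: dict) -> int:
    """Weighted 0-100 score from per-metric values (each 0-100)."""
    weights = {"engagement": 0.4, "payments": 0.3, "sentiment": 0.3}
    return round(sum(weights[k] * metrics.get(k, 0) for k in weights))

def rag_status(score: int) -> str:
    """Map a 0-100 health score to Red/Amber/Green."""
    if score >= 70:
        return "Green"
    if score >= 40:
        return "Amber"
    return "Red"

client = {"engagement": 80, "payments": 90, "sentiment": 60}
score = health_score(client)  # 0.4*80 + 0.3*90 + 0.3*60 = 77
status = rag_status(score)
```

Scanning all clients is then just this pair applied per CRM record, with the results grouped by status for the dashboard.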
Create Gmail drafts from approved Notion entries
Triage your inbox with AI categorization and prioritization
Promote a learned pattern to permanently approved status
View and modify Adaptive Intelligence Engine configuration
View self-healing event log, error frequency analysis, and fallback acceptance rates
View and explore learned patterns from the Adaptive Intelligence Engine
Clear learned patterns for a specific plugin or all plugins
Dashboard view of the Adaptive Intelligence Engine — shows hooks activity, learned patterns, self-healing events, and workflow chains
Process all invoice files in a folder, extracting data and generating a batch summary table. Supports two modes: fast single-agent batch extraction (default) or full 5-agent pipeline with Notion recording for every invoice (--team flag).
Process a single invoice file, extracting vendor, amount, date, and line items. Supports two modes: fast single-agent extraction (default) or full 5-agent pipeline with Notion recording (--team flag).
Search the knowledge base and answer a question with sourced citations and confidence rating
Browse the knowledge base to find documents matching a topic with ranked previews
Discover and catalog all knowledge sources from Notion and Google Drive into a searchable index with content classification, freshness tracking, and keyword extraction
Capture a learning insight with auto-tagging and related insights
Search and browse past learnings by topic, keyword, or date
Generate a weekly learning synthesis with themes, connections, and streak tracking
Extract key points from a document or pasted text and generate a LinkedIn post
Generate a LinkedIn post from a topic with framework selection, audience targeting, and founder voice
Generate multiple variations of a LinkedIn post with different hooks, frameworks, and tones
Analyze a meeting transcript to extract summaries, decisions, follow-ups, and topics
Gather meeting transcripts from multiple sources and extract intelligence (summaries, decisions, follow-ups, topics)
Delete a memory and revert any behavior it triggered
View what the system has learned — memories, patterns, and active adaptations
Force sync between local memory store and Notion [FOS] Memory database
Explicitly teach the system a fact, preference, or rule
Quick morning check-in showing top priorities, today's schedule, and urgent counts across all sources
Full morning sync — gather from all sources, synthesize priorities, save to Notion, and display chat summary
Write full newsletter draft from outline in founder voice
Full pipeline — research, outline, and draft a newsletter on any topic
Create newsletter structure from research findings
Deep research across web, GitHub, Reddit, Quora, and official blogs/changelogs for newsletter topics
Create a Notion page or database from a natural language description
Search Notion and query databases using natural language questions
Deploy a pre-built Notion database template or list available templates
Update a Notion page's properties or append content using natural language
Generate a deep meeting prep document with attendee context, open items, and framework-based talking points for a specific calendar event
Generate meeting prep documents for all of today's calendar meetings in sequence
Add a new prompt to the library with quality checks
Retrieve a prompt with interactive variable substitution
List and search prompts from your team library
Improve an existing prompt using AI optimization
Share a prompt with your team
Generate a professional client proposal with 7 sections and 3 pricing packages
Generate a proposal from an existing brief file or Notion page
Generate a report from a predefined template with structured sections
Generate a report from data sources with AI analysis and formatting
Generate a structured weekly review from tasks, meetings, and emails
Configure hourly rate and custom time estimates for savings calculations
Generate a multi-period ROI report showing time savings trends and annualized projections across months
Quick view of time savings across all active Founder OS plugins for a recent period
Generate a weekly time savings report showing hours and dollars saved by Founder OS plugins
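The savings reports above reduce to hours saved multiplied by the configured hourly rate, then projected out for annualized figures. A sketch under assumed inputs; the plugin names, per-plugin hours, and weekly-to-yearly projection are illustrative:

```python
# Hypothetical time-savings rollup — plugin names and hour figures are
# illustrative; the hourly rate would come from the configure command.
def savings_report(hours_by_plugin: dict, hourly_rate: float) -> dict:
    total_hours = sum(hours_by_plugin.values())
    return {
        "total_hours": total_hours,
        "dollars_saved": round(total_hours * hourly_rate, 2),
        # Assumes the input covers one week; 52 weeks -> annualized.
        "annualized": round(total_hours * hourly_rate * 52, 2),
    }

report = savings_report(
    {"inbox": 3.5, "briefing": 2.0, "invoices": 1.5},
    hourly_rate=100.0,
)
```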
Create all Founder OS HQ Notion databases programmatically
Run health checks on the Founder OS installation
Quick personal Slack catch-up showing only your @mentions and action items
Scan Slack channels and produce a structured digest with decisions, action items, key threads, and @mentions
Load a pre-written project brief from a local file or Notion page, then generate a Statement of Work with 3 scope options (Conservative, Balanced, Ambitious). This is the "bring your own brief" mode — it skips the interactive discovery interview used by `/founder-os:sow:generate`.
Generate a client-ready Statement of Work with three named scope packages. Operate in one of two modes depending on arguments.
Generate a Mermaid flowchart from a workflow description or existing SOP
Transform a workflow description into a structured 7-section SOP with Mermaid diagram
Create a new workflow YAML file from a template or interactive builder
Modify an existing workflow's steps, schedule, or configuration
List available workflow files and their metadata
Execute a YAML-defined workflow by running all steps in dependency order
Set up persistent scheduling for a workflow using session or OS-level cron
View execution history and status of workflow runs
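Running steps "in dependency order" (the run command above) is a topological sort over the step graph. A minimal sketch with a hypothetical step schema; the plugin's actual workflow YAML layout may differ:

```python
from graphlib import TopologicalSorter

# Hypothetical workflow definition — step names and the "needs" key are
# illustrative assumptions, not the plugin's actual YAML schema.
workflow = {
    "fetch_email":    {"needs": []},
    "fetch_calendar": {"needs": []},
    "synthesize":     {"needs": ["fetch_email", "fetch_calendar"]},
    "publish":        {"needs": ["synthesize"]},
}

# Map each step to its set of predecessors, then resolve execution order.
graph = {step: set(spec["needs"]) for step, spec in workflow.items()}
order = list(TopologicalSorter(graph).static_order())
# Every step now appears after all of its dependencies.
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is the failure mode a workflow runner would surface to the user.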
Use this agent as the lead in the Daily Briefing Generator parallel-gathering pipeline. It merges all gatherer outputs into a structured daily briefing, creates a Notion page, and records the briefing in the tracking database. <example> Context: All gatherer agents have completed and their results are collected for synthesis. user: "/daily:briefing --team" assistant: "All data sources gathered. Briefing lead is now assembling today's daily briefing..." <commentary> The briefing-lead runs after all gatherers complete. It receives their combined outputs and builds the 5-section briefing page in Notion. </commentary> </example> <example> Context: Some gatherers failed but minimum threshold met (2 of 4). user: "/daily:briefing --team" assistant: "Calendar and Gmail data gathered. Slack and Notion timed out. Briefing lead assembling partial briefing..." <commentary> The briefing-lead handles partial data gracefully. It marks unavailable sections and proceeds with available data. </commentary> </example>
Use this agent as a gatherer in the Daily Briefing Generator parallel-gathering pipeline. It retrieves today's calendar events and generates meeting preparation notes with attendee context, open items, and prep recommendations. <example> Context: The /briefing:generate command dispatches all gatherer agents simultaneously to build the daily briefing. user: "/briefing:generate" assistant: "Generating daily briefing. Calendar agent is fetching today's meetings and preparing context notes..." <commentary> The calendar-agent is a required gatherer. It pulls today's events from Google Calendar, classifies each meeting, scores importance, and enriches with attendee context from Gmail and Notion when available. </commentary> </example> <example> Context: User requests team mode briefing with full parallel pipeline. user: "/briefing:generate --team" assistant: "Launching parallel gathering pipeline. Calendar agent gathering today's schedule in parallel with other agents..." <commentary> In team mode, calendar-agent runs in parallel with gmail-agent, notion-agent, and slack-agent. Its structured JSON output feeds into the briefing-lead for synthesis into the final daily briefing. </commentary> </example>
Use this agent as a gatherer in the Daily Briefing Generator parallel-gathering pipeline. It scans unread emails, prioritizes them using the urgent/important matrix, and extracts highlights for the daily briefing. <example> Context: The /briefing:generate command dispatches all gatherer agents simultaneously to build the daily briefing. user: "/briefing:generate" assistant: "Generating daily briefing. Gmail agent is scanning unread emails and identifying priorities..." <commentary> The gmail-agent is a required gatherer. It applies the email-prioritization skill to score and extract email highlights using the urgent/important matrix. </commentary> </example> <example> Context: User requests briefing with custom lookback window. user: "/briefing:generate --hours=24" assistant: "Gmail agent scanning emails from the last 24 hours..." <commentary> The --hours flag controls how far back the gmail-agent looks for unread emails. Default is 12 hours. </commentary> </example>
Use this agent as a gatherer in the Daily Briefing Generator parallel-gathering pipeline. It pulls tasks due today and overdue items from Notion databases, prioritizes them, and groups them by project. <example> Context: The /briefing:generate command dispatches all gatherer agents simultaneously to build the daily briefing. user: "/briefing:generate" assistant: "Generating daily briefing. Notion agent is pulling today's tasks and checking for overdue items..." <commentary> The notion-agent is a required gatherer. It uses the task-curation skill to find, filter, and organize tasks from Notion databases, providing the task workload section of the daily briefing. </commentary> </example> <example> Context: User generates a briefing and has multiple Notion task databases across projects. user: "/briefing:generate" assistant: "Generating daily briefing. Notion agent is discovering task databases and filtering by due date..." <commentary> The notion-agent uses dynamic database discovery to search across all Notion task databases the user has. It never hardcodes database IDs, so it works with any Notion workspace structure. </commentary> </example> <example> Context: User generates a briefing but the Notion MCP server is unavailable or authentication is not configured. user: "/briefing:generate" assistant: "Generating daily briefing. Notion agent reports Notion is not configured -- proceeding with other sources..." <commentary> When Notion MCP is unavailable the notion-agent returns status: unavailable so the briefing-lead can note the gap without blocking the pipeline. </commentary> </example>
Use this agent as an optional gatherer in the Daily Briefing Generator parallel-gathering pipeline. It fetches overnight Slack mentions, DMs, and channel highlights for the daily briefing. Activated only when the Slack MCP server is configured. <example> Context: The /briefing:generate command dispatches all gatherer agents simultaneously to build the daily briefing. user: "/briefing:generate" assistant: "Generating daily briefing. Slack agent is scanning overnight mentions and DMs..." <commentary> The slack-agent runs in parallel with calendar, gmail, and notion gatherers. It fetches Slack activity from the last 12 hours. If the Slack MCP server is unavailable or authentication is not configured, it returns status: unavailable and the pipeline continues without Slack data. </commentary> </example> <example> Context: User generates a briefing but does not have Slack MCP configured. user: "/briefing:generate" assistant: "Generating daily briefing. Slack agent reports Slack is not configured -- proceeding with other sources..." <commentary> The slack-agent is optional (marked in teams/config.json). It degrades gracefully, returning an unavailable status so the briefing-lead can note the gap without blocking the pipeline. </commentary> </example>
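The graceful degradation described in these gatherer agents — each source returns status: unavailable on failure, and the lead proceeds if a minimum count succeeds — can be sketched as follows. The 2-of-4 threshold comes from the briefing-lead example above; the result field names are assumptions:

```python
# Sketch of the briefing-lead's minimum-threshold merge. The 2-of-4
# threshold is taken from the example above; result shapes are assumed.
MIN_SOURCES = 2

def merge_gatherers(results: dict) -> dict:
    available = {k: v for k, v in results.items() if v.get("status") == "ok"}
    if len(available) < MIN_SOURCES:
        raise RuntimeError("Minimum data threshold not met")
    # Unavailable sources are noted in the output, not treated as fatal.
    missing = sorted(set(results) - set(available))
    return {"sections": available, "unavailable": missing}

briefing = merge_gatherers({
    "calendar": {"status": "ok", "events": 3},
    "gmail":    {"status": "ok", "highlights": 5},
    "slack":    {"status": "unavailable"},
    "notion":   {"status": "unavailable"},
})
```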
Use this agent as a gatherer in the Client Context Loader parallel-gathering pipeline. It retrieves meeting history and upcoming meetings with the client from Google Calendar. <example> Context: The /client:load command dispatches all gatherer agents to build a client dossier. user: "/client:load Acme Corp" assistant: "Loading client context. Calendar agent is searching for past and upcoming meetings with Acme Corp..." <commentary> The calendar-agent is an optional gatherer. If the gws CLI is not available, it returns status: unavailable and the pipeline continues without calendar data. </commentary> </example>
Use this agent as the lead in the Client Context Loader parallel-gathering pipeline. It merges all gatherer outputs into a unified client dossier, caches results in Notion, and writes enrichments back to CRM Pro databases. <example> Context: All gatherer agents have completed (or timed out) and their results are collected for synthesis. user: "/client:load Acme Corp" assistant: "All data sources gathered. Context lead is now synthesizing the unified client dossier for Acme Corp..." <commentary> The context-lead runs after all gatherers complete. It receives their combined outputs and builds the seven-section dossier, caches it, and writes enrichment data back to CRM. </commentary> </example> <example> Context: User requests a brief and the cached dossier is still fresh (within 24h TTL). user: "/client:brief Acme Corp" assistant: "Found a cached dossier for Acme Corp (generated 3 hours ago). Generating executive brief from cached data..." <commentary> The context-lead checks the cache first. If a fresh dossier exists (within TTL), it skips re-gathering and generates the brief directly. </commentary> </example>
Use this agent as a gatherer in the Client Context Loader parallel-gathering pipeline. It pulls structured client data from the Notion CRM Pro databases (Companies, Contacts, Deals, Communications). <example> Context: The /client:load command has been triggered with a client name, and the parallel gathering pipeline is dispatching all gatherer agents simultaneously. user: "/client:load Acme Corp" assistant: "Loading client context for Acme Corp. Dispatching CRM agent to pull Notion CRM data..." <commentary> The crm-agent is always dispatched as part of the parallel gathering group. It searches CRM Pro databases for the client record and follows relations to contacts, deals, and communications. </commentary> </example> <example> Context: User wants a client brief and the pipeline needs CRM data to build the profile section. user: "/client:brief Acme Corp" assistant: "Generating client brief. CRM agent is pulling company profile and deal status from Notion..." <commentary> The crm-agent provides the foundational profile data that the context-lead uses to build the dossier's Profile section. </commentary> </example>
Use this agent as a gatherer in the Client Context Loader parallel-gathering pipeline. It finds and catalogs client-related documents in Google Drive. <example> Context: The /client:load command dispatches all gatherer agents to build a client dossier. user: "/client:load Acme Corp" assistant: "Loading client context. Docs agent is searching Google Drive for documents related to Acme Corp..." <commentary> The docs-agent is an optional gatherer. If the gws CLI is not available, it returns status: unavailable and the pipeline continues without document data. </commentary> </example>
Use this agent as a gatherer in the Client Context Loader parallel-gathering pipeline. It gathers email communication history with the client from Gmail. <example> Context: The /client:load command dispatches all gatherer agents simultaneously to build a client dossier. user: "/client:load Acme Corp" assistant: "Loading client context. Email agent is searching Gmail for communication history with Acme Corp..." <commentary> The email-agent runs in parallel with other gatherers. It searches Gmail for threads involving the client's contacts and calculates communication statistics. </commentary> </example> <example> Context: User wants a quick client brief and the pipeline needs email sentiment data. user: "/client:brief Acme Corp" assistant: "Building client brief. Email agent is analyzing recent email exchanges for sentiment and response patterns..." <commentary> Email data feeds into both the Recent Activity and Sentiment sections of the final dossier. </commentary> </example>
Use this agent as a gatherer in the Client Context Loader parallel-gathering pipeline. It pulls meeting notes, decisions, and open items from Notion pages and the CRM Communications database. <example> Context: The /client:load command dispatches all gatherer agents to build a client dossier. user: "/client:load Acme Corp" assistant: "Loading client context. Notes agent is searching Notion for meeting notes and decisions about Acme Corp..." <commentary> The notes-agent searches both free-form Notion pages and the structured Communications database to compile decisions and open items. </commentary> </example>
Use this agent as step 2 of 4 in the Inbox Zero pipeline, after triage-agent completes. Extracts action items from categorized emails and creates Notion tasks. <example> Context: Triage agent has finished categorizing emails, pipeline moves to action extraction user: "Pipeline step 2: extract action items from triaged emails" assistant: "Launching action-agent to extract tasks from action_required emails and create Notion entries." <commentary> Action agent receives triage output and processes action_required and needs_response emails. </commentary> </example> <example> Context: User triggered full pipeline, triage is complete with 12 action_required emails user: "/inbox:triage --team" assistant: "Triage complete. Action agent now extracting tasks from 12 action-required emails." <commentary> Automatically triggered as pipeline step 2 after triage completes. </commentary> </example>
Use this agent as step 4 of 4 (final step) in the Inbox Zero pipeline, after response-agent completes. Recommends emails for archiving and generates the pipeline report. Does NOT auto-archive. <example> Context: Response agent has finished drafting replies, pipeline moves to archive recommendations user: "Pipeline step 4: finalize and recommend archiving" assistant: "Launching archive-agent to label emails, recommend archive candidates, and generate the pipeline report." <commentary> Archive agent is the final pipeline step. It recommends archiving but does not execute it automatically. </commentary> </example> <example> Context: Full pipeline running, response drafting is complete user: "/inbox:triage --team" assistant: "Drafts saved to Notion. Archive agent now generating final report with archive recommendations." <commentary> Automatically triggered as the last pipeline step, producing the final user-facing report. </commentary> </example>
Use this agent as step 3 of 4 in the Inbox Zero pipeline, after action-agent completes. Drafts email responses and saves them to Notion for user review. <example> Context: Action agent has finished extracting tasks, pipeline moves to response drafting user: "Pipeline step 3: draft responses for emails needing replies" assistant: "Launching response-agent to draft replies for 10 emails flagged as needs_response." <commentary> Response agent receives action-enriched email data and drafts responses for needs_response emails. </commentary> </example> <example> Context: Full pipeline running, action extraction complete user: "/inbox:triage --team" assistant: "Actions extracted. Response agent now drafting replies for emails that need responses." <commentary> Automatically triggered as pipeline step 3, receiving the full dataset from action-agent. </commentary> </example>
Use this agent when the inbox triage pipeline is activated with --team mode, as step 1 of 4. This agent categorizes and prioritizes incoming emails. <example> Context: User runs /inbox:triage --team to process their inbox with the full pipeline user: "/inbox:triage --team --hours=24" assistant: "Starting the Inbox Zero pipeline. Launching triage-agent to categorize and prioritize your emails from the last 24 hours." <commentary> The triage agent is always the first step in the pipeline, triggered by --team flag on /inbox:triage. </commentary> </example> <example> Context: User wants a detailed inbox analysis with all pipeline agents user: "/inbox:triage --team --hours=48 --max=200" assistant: "Processing 48 hours of email. Triage agent will categorize up to 200 emails before passing to action extraction." <commentary> Triage agent handles the initial categorization regardless of time window or max email parameters. </commentary> </example>
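The triage step scores emails with the urgent/important matrix mentioned above. A minimal sketch — the category names echo those used in the pipeline (action_required, needs_response), but the quadrant-to-category mapping is an illustrative assumption:

```python
# Eisenhower (urgent/important) matrix sketch for email triage.
# The quadrant-to-category mapping is an assumption, not the plugin's
# exact rule set.
def triage(email: dict) -> str:
    urgent = email.get("urgent", False)
    important = email.get("important", False)
    if urgent and important:
        return "action_required"      # do now: feeds pipeline step 2
    if important:
        return "needs_response"       # schedule: feeds pipeline step 3
    if urgent:
        return "delegate_or_quick_reply"
    return "archive_candidate"        # feeds step 4's recommendations
```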
Step 4 of 5 in the invoice processing pipeline. Evaluates categorized invoices for anomalies including duplicate invoice numbers, high amounts above policy thresholds, first-time vendors, date anomalies, and low categorization confidence. Creates approval requests in Notion for invoices requiring human review. Routes each invoice to auto_approved, needs_review, requires_approval, or rejected status.
Step 3 of 5 in the invoice processing pipeline. Assigns standard expense categories to each invoice line item and determines the primary expense category, tax deductibility, and budget code for the overall invoice. Uses vendor name, description, and amount signals. Called after validation to prepare categorized data for approval routing.
Step 1 of 5 in the invoice processing pipeline. Reads invoice files from the filesystem (PDF, JPG, JPEG, PNG, TIFF), applies OCR where needed, and extracts structured data including vendor details, invoice metadata, line items, and financial totals. Called when an invoice file needs to be parsed into machine-readable JSON.
Step 5 of 5 in the invoice processing pipeline. Records the fully processed invoice — including extraction data, validation results, expense categories, and approval status — in the Notion accounting database. Creates the invoice record for all statuses including rejected invoices (for audit trail). Returns the final per-item result to the batch aggregator.
Step 2 of 5 in the invoice processing pipeline. Verifies the mathematical correctness, date validity, and completeness of extracted invoice data. Runs checks on line item totals, subtotal/tax/total consistency, date logic, and required field presence. Auto-corrects minor rounding errors. Called after extraction to ensure data quality before categorization.
Use this agent as a gatherer in the Meeting Prep Autopilot parallel-gathering pipeline. It fetches a specific calendar event by event_id (or lists today's remaining events for selection), extracts full event details, classifies the meeting type, scores importance, and returns structured JSON for the prep-lead to synthesize into a comprehensive meeting prep dossier. <example> Context: The /prep command dispatches all gatherer agents simultaneously to build a meeting prep dossier for a specific event. user: "/prep --event=abc123xyz" assistant: "Preparing meeting context. Calendar agent is fetching event details and classifying the meeting..." <commentary> The calendar-agent is a required gatherer. It pulls the target event from Google Calendar by event_id, extracts attendee list with RSVP status, classifies the meeting type using the meeting-context skill framework, scores importance via weighted factors, and returns structured JSON for the prep-lead. </commentary> </example> <example> Context: User requests meeting prep without specifying an event_id. The calendar-agent lists today's remaining events for selection. user: "/prep" assistant: "No event specified. Calendar agent is listing today's remaining meetings for you to choose from..." <commentary> When no event_id is provided, the calendar-agent switches to discovery mode: it fetches all remaining events for today, applies filtering rules, and returns an array so the user (or prep-lead) can select which meeting to prep for. </commentary> </example> <example> Context: The /prep-today command dispatches gatherers to prep all remaining meetings for the day. user: "/prep-today" assistant: "Launching parallel gathering pipeline for all remaining meetings today. Calendar agent gathering full schedule..." <commentary> In prep-today mode, the calendar-agent returns an array of all remaining events (filtered per skill rules). The prep-lead iterates over each meeting, using the enrichment from gmail-agent and notion-agent to build individual prep dossiers. </commentary> </example>
Use this agent as an optional gatherer in the Meeting Prep Autopilot parallel-gathering pipeline. It searches Google Drive for documents relevant to the upcoming meeting -- proposals, shared decks, contracts, and past deliverables -- so the prep-lead can reference them in the meeting prep document. Activated only when the gws CLI is available with Drive access. <example> Context: The /meeting:prep command dispatches all gatherer agents simultaneously to build a meeting prep doc. user: "/meeting:prep 'Q2 Planning with Acme Corp'" assistant: "Preparing meeting brief. Drive agent is searching for relevant documents..." <commentary> The drive-agent runs in parallel with calendar, gmail, and notion gatherers. It searches Google Drive for files related to the meeting title and attendees via the gws CLI. If gws is not available, it returns status: unavailable and the pipeline continues without Drive data. </commentary> </example> <example> Context: User generates a meeting prep but does not have the gws CLI installed. user: "/meeting:prep 'Weekly sync with Bolt Industries'" assistant: "Preparing meeting brief. Drive agent reports Google Drive is not configured -- proceeding with other sources..." <commentary> The drive-agent is optional (marked in teams/config.json). It degrades gracefully, returning an unavailable status so the prep-lead can note the gap without blocking the pipeline. </commentary> </example>
Use this agent as a gatherer in the Meeting Prep Autopilot parallel-gathering pipeline. It searches Gmail for email history with each meeting attendee to surface communication context, unanswered threads, recent topics, and sentiment indicators for the prep-lead to synthesize into the meeting dossier. <example> Context: The /prep command dispatches all gatherer agents simultaneously. The gmail-agent receives the attendee list from calendar-agent output via shared context. user: "/prep 'Q1 Review with Acme Corp'" assistant: "Preparing meeting context. Gmail agent is searching email history for each attendee..." <commentary> The gmail-agent runs in parallel with notion-agent and drive-agent. It cross-references each attendee's email address against Gmail threads within the lookback window (default 90 days) and returns per-attendee email context. </commentary> </example> <example> Context: User requests meeting prep with a custom lookback window. user: "/prep 'Kickoff call' --hours=720" assistant: "Gmail agent scanning email history for attendees over the last 30 days..." <commentary> The --hours flag controls how far back the gmail-agent searches for email threads with attendees. Default is 2160 hours (90 days). </commentary> </example>
Use this agent as a gatherer in the Meeting Prep Autopilot parallel-gathering pipeline. It pulls CRM contact data, past meeting notes, and open action items from Notion databases to provide relationship context for each meeting attendee. <example> Context: The /prep command dispatches all gatherer agents simultaneously. The notion-agent searches CRM and meeting notes for each attendee. user: "/prep --event=abc123" assistant: "Preparing meeting context. Notion agent is looking up attendee CRM profiles and past meeting notes..." <commentary> The notion-agent is a required gatherer. It uses the meeting-context skill (Steps 3-4) to search CRM Contacts, retrieve relationship data, cross-reference past meeting notes, and compile open action items for each attendee. </commentary> </example> <example> Context: User preps for an external client meeting. The notion-agent finds CRM records and prior meeting history. user: "/prep --event=abc123" assistant: "Preparing meeting context. Notion agent found CRM profiles for 2 attendees, 4 past meeting notes, and 3 open action items..." <commentary> For external-client meetings the notion-agent provides maximum enrichment: full CRM profiles including deals, contact types, relationship status, and all prior meeting notes mentioning the attendees or their company. </commentary> </example> <example> Context: User generates meeting prep but the Notion MCP server is unavailable or authentication is not configured. user: "/prep --event=abc123" assistant: "Preparing meeting context. Notion agent reports Notion is not configured -- proceeding with other sources..." <commentary> When Notion MCP is unavailable the notion-agent returns status: unavailable so the prep-lead can note the gap without blocking the pipeline. </commentary> </example>
Use this agent as the lead in the Meeting Prep Autopilot parallel-gathering pipeline. It merges all gatherer outputs into a comprehensive meeting prep document with framework-based talking points, creates a Notion page, records the prep in the tracking database, and returns the Notion page URL. <example> Context: All gatherer agents have completed and their results are collected for synthesis. user: "/meeting:prep abc123 --team" assistant: "All data sources gathered. Prep lead assembling meeting prep document with SPIN talking points for external client meeting..." <commentary> The prep-lead runs after all gatherers complete. It receives their combined outputs, classifies the meeting type, selects the appropriate talking-points framework (SPIN for external-client), compiles deduplicated open items, assembles the full prep document, and publishes it to Notion. </commentary> </example> <example> Context: Some gatherers failed but minimum threshold met (calendar-agent + gmail-agent succeeded, notion-agent and drive-agent failed). user: "/meeting:prep abc123 --team" assistant: "Calendar and Gmail data gathered. Notion and Drive unavailable. Prep lead assembling partial meeting prep document..." <commentary> The prep-lead handles partial data gracefully. It requires calendar-agent plus at least one of gmail-agent or notion-agent. Missing sources are noted in the output with actionable guidance, and the prep document proceeds with available data. </commentary> </example> <example> Context: Only calendar-agent succeeded. All other gatherers failed. user: "/meeting:prep abc123 --team" assistant: "Only calendar data available -- minimum threshold not met. Returning error with troubleshooting guidance." <commentary> The prep-lead enforces the minimum data threshold. Calendar-agent alone is insufficient because enrichment (email history or CRM context) is required for a useful prep document. The agent returns an error with specific troubleshooting steps. </commentary> </example>
Use this agent as step 2 of 5 in the Report Generator pipeline, after research-agent completes. Processes and analyzes gathered data to identify trends, patterns, and key findings. <example> Context: Research agent has finished extracting data from 3 sources with 450 total records, pipeline moves to analysis user: "Pipeline step 2: analyze the extracted datasets for trends and insights" assistant: "Launching analysis-agent to process 3 datasets (450 records). Running data quality assessment, descriptive statistics, trend detection, and comparative analysis." <commentary> Analysis agent receives the research-agent output JSON containing datasets with typed columns and normalized records. It validates data quality first, then runs the full analytical sequence before passing structured findings to the writing-agent. </commentary> </example> <example> Context: User triggered full report pipeline, research-agent extracted a single CSV with monthly revenue data spanning 24 months user: "/report:generate --team --source=revenue-2024.csv" assistant: "Research complete. Analysis agent now processing 24 rows of monthly revenue data -- computing statistics, detecting trends, and identifying outliers." <commentary> Automatically triggered as pipeline step 2 after research-agent completes. With time-series data present, the agent runs the full trend analysis suite including period-over-period comparisons, growth rates, and moving averages. </commentary> </example>
Use this agent as step 4 of 5 in the Report Generator pipeline, after writing-agent completes. Adds Mermaid charts, formats tables, and writes the final output file. <example> Context: Writing agent has finished generating report prose, pipeline moves to formatting and chart insertion user: "Pipeline step 4: format the report and add charts" assistant: "Launching formatting-agent to replace chart markers with Mermaid diagrams, format tables, and write the final report file." <commentary> The formatting agent receives the writing agent's report markdown and chart markers, then enhances the document with visualizations and polished formatting before writing the output file. </commentary> </example> <example> Context: Full pipeline running in --team mode, writing stage complete with 4 chart markers identified user: "/report:generate --team --spec='Q4 Revenue Summary'" assistant: "Report prose complete. Formatting agent now inserting 4 Mermaid charts, formatting tables, and writing the final output file." <commentary> Automatically triggered as pipeline step 4 after writing-agent completes. Receives the full dataset including original extracted data for chart generation and the report markdown with embedded chart markers. </commentary> </example>
Use this agent as step 5 of 5 (final step) in the Report Generator pipeline, after formatting-agent completes. Reviews the report for accuracy and consistency. RECOMMEND-ONLY — never auto-fixes or modifies the report. <example> Context: User runs /report:generate --team and the pipeline reaches QA as the final step user: "/report:generate --team --spec='Q4 Revenue Summary' --sources=revenue.csv,clients.json" assistant: "Pipeline step 5/5: Launching qa-agent to review the formatted report. Cross-referencing all numbers against source data, checking internal consistency, and validating executive summary quality. QA is recommend-only — flagging issues without modifying the report." <commentary> The QA agent is always the last step in the pipeline. It receives both the final formatted report and the original source data from the research agent so it can cross-reference every metric. It never modifies the report directly — it only flags issues and recommendations. </commentary> </example> <example> Context: Formatting agent has completed a report with Mermaid charts and the pipeline hands off to QA user: "/report:generate --team --spec='Client Health Report' --sources=metrics.csv --notion-db='CRM Pro' --template=executive-summary" assistant: "Final pipeline step: qa-agent reviewing the Client Health Report. Validating Mermaid chart syntax, verifying all metrics match source data, and checking executive summary against the QA checklist. Will log report metadata to Notion if available." <commentary> The QA agent checks Mermaid chart syntax, validates table formatting, and reviews the executive summary against the QA checklist from the executive-summary skill. If Notion MCP is configured, it logs report metadata to the "[FOS] Reports" database (or "Founder OS HQ - Reports", then "Report Generator - Reports") with Type="Business Report". </commentary> </example>
Use this agent as step 1 of 5 in the Report Generator pipeline when --team mode is activated. Gathers and extracts data from all specified sources. <example> Context: User runs /report:generate --team to build a report with the full pipeline user: "/report:generate --team --spec='Q4 Revenue Summary' --sources=revenue.csv,clients.json" assistant: "Starting the Report Generator pipeline. Launching research-agent to gather and extract data from revenue.csv and clients.json." <commentary> The research agent is always the first step in the pipeline, triggered by --team flag on /report:generate. It ingests all specified data sources before any analysis begins. </commentary> </example> <example> Context: User wants a report pulling data from Notion and local files with the full pipeline user: "/report:generate --team --spec='Client Health Report' --sources=metrics.csv --notion-db='CRM Pro'" assistant: "Launching research-agent to extract data from metrics.csv and query the CRM Pro Notion database. Will normalize all sources into a unified dataset for analysis." <commentary> The research agent handles both local file extraction and external sources (Notion MCP, Google Drive via gws CLI), degrading gracefully when optional sources are unavailable. </commentary> </example>
Use this agent as step 3 of 5 in the Report Generator pipeline, after analysis-agent completes. Generates report prose with executive summary from analytical findings. <example> Context: Analysis agent has finished processing data, pipeline moves to report writing user: "Pipeline step 3: write report prose from analytical findings" assistant: "Launching writing-agent to transform analysis results into polished report prose with executive summary." <commentary> Writing agent receives structured analysis output and generates a full Markdown report with narrative sections and chart placement markers. </commentary> </example> <example> Context: Full pipeline running with template, analysis complete with 5 key findings user: "/report:generate --team --template=quarterly-review" assistant: "Analysis complete. Writing agent now generating report prose following the quarterly-review template structure." <commentary> Automatically triggered as pipeline step 3 after analysis-agent completes, mapping findings to the provided template sections. </commentary> </example>
Phase 2 analysis agent in the SOW Generator competing-hypotheses pipeline. Receives all three scope proposals from Phase 1 and evaluates each on cost structure, margin health, budget fit, competitive pricing, and value-for-money. Called after all scope agents complete. Outputs pricing evaluation JSON to the SOW Lead.
Phase 2 analysis agent in the SOW Generator competing-hypotheses pipeline. Receives all three scope proposals from Phase 1 and evaluates each on delivery risk, financial risk, scope creep risk, dependency risk, and client relationship risk. Runs in parallel with pricing-agent. Called after all scope agents complete.
Phase 1 hypothesis proposer in the SOW Generator competing-hypotheses pipeline. Receives a project brief and independently proposes a conservative SOW scope that minimizes risk and maximizes delivery certainty. Runs in parallel with scope-agent-b and scope-agent-c. Called when a project brief is ready for scope interpretation.
Phase 1 hypothesis proposer in the SOW Generator competing-hypotheses pipeline. Receives a project brief and independently proposes a balanced SOW scope that optimizes value-to-effort ratio, including core requirements plus high-impact additions. Runs in parallel with scope-agent-a and scope-agent-c.
Phase 1 hypothesis proposer in the SOW Generator competing-hypotheses pipeline. Receives a project brief and independently proposes an ambitious SOW scope that maximizes client impact and value, including all requirements plus proactive additions. Runs in parallel with scope-agent-a and scope-agent-b.
Phase 3 synthesis lead in the SOW Generator competing-hypotheses pipeline. Receives all scope proposals from Phase 1 and risk/pricing evaluations from Phase 2. Builds the scoring matrix, determines the recommended option, writes the final client-ready SOW document in Markdown with three named packages, comparison table, and recommendation. Called after all Phase 1 and Phase 2 agents complete.
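The Phase 3 scoring matrix described above can be sketched as a weighted evaluation over the three Phase 1 proposals. The criteria names, weights, and 1-5 scoring scale here are illustrative assumptions; the SOW Lead's actual rubric is not specified in the description:

```python
# Illustrative criteria and weights -- not the SOW Lead's actual rubric.
CRITERIA = {"margin": 0.30, "budget_fit": 0.25, "risk": 0.25, "value": 0.20}

def recommend(proposals):
    """proposals: {option_name: {criterion: 1-5 score}}.
    Risk scores are assumed pre-inverted so higher always means better.
    Returns (recommended option, full scoring matrix)."""
    matrix = {name: round(sum(CRITERIA[c] * scores[c] for c in CRITERIA), 2)
              for name, scores in proposals.items()}
    return max(matrix, key=matrix.get), matrix
```

A conservative proposal that scores well on budget fit and risk can beat an ambitious one despite a lower value score, which is exactly the trade-off the comparison table surfaces for the client.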
Converts meeting transcripts, email threads, and documents into structured Notion tasks. Activates when the user pastes text and wants action items extracted, asks 'what tasks are in this?', or needs to parse any content for to-dos, assignments, and commitments. Handles verb detection, owner inference, deadline parsing, and duplicate checking against HQ Tasks.
Synthesizes multi-source data into a structured daily briefing Notion page. Activates when the user wants a morning summary, daily overview, today's briefing, or asks 'what's on my plate today?' Covers schedule, priority emails, tasks, Slack highlights, and quick stats — handles partial data gracefully when sources are unavailable.
Scans unread emails and produces prioritized highlights using the Eisenhower urgent/important matrix. Activates for inbox highlights, email summaries, morning email scans, or any request to understand what's important in the inbox right now. Scores by sender importance, keywords, action flags, and recency — surfaces up to 10 highlights grouped by quadrant.
Analyzes today's calendar events and generates meeting prep notes with attendee context, pending items, and communication history. Activates for meeting preparation, calendar review, daily meeting summaries, or 'who am I meeting with today?' Classifies meetings by type, scores importance, and cross-references attendees against Gmail and Notion CRM.
Filters and prioritizes Notion tasks by due date for daily briefing inclusion. Activates when the user wants to see tasks due today, overdue items, daily task lists, or asks 'what do I need to do today?' Auto-discovers task databases, groups by project, flags overdue items, and handles multiple databases gracefully.
Aggregates client data from Notion CRM, Gmail, Calendar, and Drive into a comprehensive dossier. Activates when the user wants to load client context, pull client data, build a dossier, or asks 'tell me everything about [client].' Covers multi-source data gathering, deduplication, completeness scoring, and fuzzy matching across data sources.
Produces executive relationship summaries with health scores, sentiment analysis, and engagement metrics. Activates when the user wants a client brief, relationship assessment, engagement check, or asks 'how's our relationship with [client]?' Covers sentiment scoring, risk flagging, health formula calculation, and executive brief formatting.
Researches individual competitors via web search to gather pricing, features, positioning, and reviews. Activates when the user wants to research a competitor, gather intel on a company, or asks 'what does [company] offer?' Covers multi-source intelligence gathering across official sites, review platforms, and community discussions.
Synthesizes competitive research into strategic insights, SWOT analyses, and positioning recommendations. Activates when the user wants a competitive analysis, market positioning review, competitor comparison matrix, or asks 'how do we stack up against the competition?' Identifies market gaps and strategic opportunities.
Extracts structured terms from legal contracts across 7 categories (Payment, Duration, IP, Confidentiality, Liability, Termination, Warranty). Activates when the user has a contract to analyze, wants to extract terms, review an agreement, or asks 'what does this contract say?' Supports PDF, DOCX, MD, and TXT formats with auto contract-type detection.
Evaluates contracts for harmful, unusual, or one-sided clauses using Red/Yellow/Green risk classification. Activates when the user wants to find contract red flags, assess legal risks, check for unfair terms, or asks 'is this contract safe to sign?' Provides plain-English explanations and concrete mitigation suggestions for freelancers and agency founders.
Creates and manages activity records in the CRM Communications database with AI-generated summaries. Activates when the user wants to log an email or meeting to CRM, record a client interaction, or asks 'save this to the client record.' Handles idempotent operations, deduplication, and AI summarization.
Matches email addresses and meeting attendees to CRM contacts using progressive matching. Activates when the user needs to identify which client an email belongs to, resolve a contact, or look up a client from an email address. Uses a 5-step algorithm from exact email match to fuzzy name matching with confidence scoring.
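The 5-step progression from exact email match to fuzzy name match can be sketched as follows. Only the endpoints are stated in the description; the intermediate steps (alias stripping, company-domain matching, exact-name matching) and all confidence values are illustrative assumptions:

```python
from difflib import SequenceMatcher

def match_contact(email, display_name, contacts):
    """Progressive matcher sketch: returns (contact, confidence) or (None, 0.0).
    Step order past step 1 and all confidence values are assumptions."""
    email = email.lower().strip()
    local, _, domain = email.partition("@")
    # Step 1: exact email match -> highest confidence
    for c in contacts:
        if c["email"].lower() == email:
            return c, 1.0
    # Step 2 (assumption): plus-alias stripped, e.g. ada+news@acme.com -> ada@acme.com
    base = local.split("+")[0] + "@" + domain
    for c in contacts:
        if c["email"].lower() == base:
            return c, 0.95
    # Step 3 (assumption): same company domain, skipping free-mail providers
    if domain not in {"gmail.com", "outlook.com", "yahoo.com", "icloud.com"}:
        for c in contacts:
            if c["email"].lower().endswith("@" + domain):
                return c, 0.8
    # Step 4 (assumption): exact display-name match
    for c in contacts:
        if c["name"].lower() == display_name.lower():
            return c, 0.7
    # Step 5: fuzzy name match above a cutoff
    best, score = None, 0.0
    for c in contacts:
        r = SequenceMatcher(None, c["name"].lower(), display_name.lower()).ratio()
        if r > score:
            best, score = c, r
    if score >= 0.85:
        return best, round(0.6 * score, 2)
    return None, 0.0
```

Returning a confidence alongside the match lets downstream sync steps decide whether to auto-link an activity or ask the user to confirm.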
Orchestrates the full sync pipeline from Gmail and Calendar data to CRM Communications records in Notion. Activates when the user wants to sync emails or meetings to CRM, log client activities, or asks 'update my CRM with recent communications.' Handles single-item and batch sync with deduplication and error recovery.
Answers questions from Google Drive documents and generates summaries with inline citations. Activates when the user wants to ask about a document, summarize a Drive file, extract information, or asks 'what does this doc say about [topic]?' Handles multi-format extraction, answer synthesis, confidence assessment, and graceful no-answer responses.
Searches and navigates Google Drive to find files, list folders, and rank results by relevance. Activates when the user wants to find a file in Drive, browse folders, search for documents, or asks 'where's that spreadsheet?' Handles search query formulation, file type detection, folder traversal, and relevance scoring.
Classifies receipts and invoice files into the standard 14-category expense taxonomy during report assembly. Activates when the user wants to categorize expenses, classify receipts, determine expense types, or asks 'what category is this?' Covers vendor-based classification, tax-deductibility flags, budget code mapping, and confidence scoring.
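Vendor-based classification with confidence scoring can be sketched as a rule cascade: a known-vendor match wins, a description keyword is a weaker fallback, and anything else lands in an uncategorized bucket for review. The vendor table, keyword list, category names, and confidence values below are all illustrative, not the skill's actual 14-category taxonomy:

```python
VENDOR_RULES = {  # illustrative vendor -> (category, tax-deductible) rules
    "aws": ("Software & SaaS", True),
    "delta": ("Travel", True),
    "wework": ("Office & Rent", True),
}
KEYWORD_RULES = {"hosting": "Software & SaaS", "flight": "Travel",
                 "lunch": "Meals & Entertainment"}

def classify(vendor, description):
    """Vendor match beats keyword match beats fallback; the confidence
    reflects which rule fired (values are assumptions)."""
    v, d = vendor.lower(), description.lower()
    for known, (category, deductible) in VENDOR_RULES.items():
        if known in v:  # naive substring check -- a real matcher would tokenize
            return {"category": category, "deductible": deductible, "confidence": 0.9}
    for keyword, category in KEYWORD_RULES.items():
        if keyword in d:
            return {"category": category, "deductible": True, "confidence": 0.6}
    return {"category": "Uncategorized", "deductible": False, "confidence": 0.2}
```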
Generates comprehensive Markdown expense reports from invoice data and local receipt files. Activates when the user wants an expense report, spending breakdown, category summary, or asks 'show me my expenses for [period].' Aggregates from Notion Finance DB and local files with 7-section structure, trend analysis, and period-over-period comparisons.
Identifies sent emails awaiting replies and detects promises in email threads. Activates when the user wants to check follow-ups, find unanswered emails, track pending responses, or asks 'who hasn't gotten back to me?' Scans sent mail, detects bidirectional promises, scores urgency by age and relationship importance, and filters noise via smart exclusion rules.
Drafts professional follow-up nudge emails calibrated by elapsed time and relationship type. Activates when the user wants to follow up on an unanswered email, write a reminder, nudge someone, or asks 'how should I follow up on this?' Handles 3 escalation levels (gentle/firm/urgent) with tone matching for clients, colleagues, and vendors — avoids passive-aggression and common anti-patterns.
Formats goal progress into visual dashboards with tables, progress bars, and Mermaid Gantt charts. Activates when the user wants a goal dashboard, progress report, timeline visualization, or asks 'show me a visual of my goals.' Covers dashboard layout, Gantt generation, and formatted report assembly.
Manages the full goal lifecycle: creation, milestone tracking, progress updates, and closure. Activates when the user wants to set, create, update, or close a goal, log a milestone, or asks 'add a new goal for [objective].' Handles CRUD operations, progress calculation, and Notion database management.
Analyzes goal health with RAG scoring, velocity tracking, and projected completion dates. Activates when the user wants to check goal progress, see a health dashboard, check velocity, or asks 'am I on track for [goal]?' Covers blocker detection, trend analysis, and milestone completion forecasting.
Computes composite health scores (0-100) for CRM clients from 5 weighted metrics. Activates when the user wants to check client health, find at-risk clients, score relationships, or asks 'which clients need attention?' Covers Last Contact, Response Time, Open Tasks, Payment Status, and Sentiment with RAG classification and risk flagging.
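A weighted composite over the five named metrics with RAG classification might look like the sketch below. The weights and RAG thresholds are assumptions; the description specifies the five inputs and the 0-100 output but not the weighting:

```python
# Illustrative weights over the five metrics named in the skill description.
WEIGHTS = {"last_contact": 0.25, "response_time": 0.20, "open_tasks": 0.15,
           "payment_status": 0.25, "sentiment": 0.15}

def health_score(metrics):
    """metrics: each sub-score already normalized to 0-100.
    Returns (composite score, RAG band). Thresholds are assumptions."""
    score = round(sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS))
    rag = "Green" if score >= 70 else "Amber" if score >= 40 else "Red"
    return score, rag
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as its inputs, which keeps the RAG thresholds easy to reason about.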
Extracts sentiment signals from Gmail and Calendar data to compute a client sentiment score (0-100). Activates when the user wants to analyze client sentiment, check communication tone, detect sentiment shifts, or asks 'how's my relationship with this client?' Classifies signals as positive/neutral/negative and feeds into the broader health scoring pipeline.
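The positive/neutral/negative signals can be collapsed into a 0-100 score with a simple averaging sketch; the neutral-at-50 baseline and linear scaling are assumptions about the skill's scale:

```python
def sentiment_score(signals):
    """signals: list of 'positive' | 'neutral' | 'negative' labels
    extracted from emails and meetings. 50 is the assumed neutral baseline."""
    if not signals:
        return 50  # no data: report neutral rather than penalize the client
    weights = {"positive": 1, "neutral": 0, "negative": -1}
    avg = sum(weights[s] for s in signals) / len(signals)  # -1.0 .. 1.0
    return round(50 + 50 * avg)
```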
Turns emails into structured action items with owners, deadlines, and priorities. Activates when the user wants to extract tasks from email, find action items, pull to-dos from their inbox, or convert email requests into trackable work — including implicit asks like 'what do I need to do from these emails?' Handles verb detection, owner inference, and deadline parsing.
Email classification and routing for inbox management. Use this whenever the user mentions email triage, sorting inbox, prioritizing messages, categorizing emails, or dealing with email overload — even if they don't say 'triage' explicitly. Also activates for questions about email categories, VIP handling, or archive rules. Powers the 5-category classification system with needs_response and archivable flags.
Assigns priority scores (1-5) to emails using an Eisenhower matrix framework. Activates when the user wants to rank, score, or prioritize emails, figure out what's urgent, or understand why an email got a certain priority — including VIP boost logic, keyword detection, and spam overrides.
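The Eisenhower mapping with VIP boost and spam override can be sketched as below. The keyword lists, quadrant-to-score mapping, and boost size are illustrative assumptions; only the 1-5 scale, VIP boost, keyword detection, and spam override are stated in the description:

```python
URGENT_KW = {"urgent", "asap", "today", "deadline"}            # illustrative lists
IMPORTANT_KW = {"contract", "invoice", "proposal", "renewal"}

def priority(subject, sender, vips, is_spam=False):
    """Returns 1 (lowest) to 5 (highest)."""
    if is_spam:
        return 1  # spam override trumps everything else
    words = set(subject.lower().split())
    urgent, important = bool(words & URGENT_KW), bool(words & IMPORTANT_KW)
    # Eisenhower quadrants: do-now=4, schedule=3, delegate=2, ignore=1 (assumed mapping)
    base = {(True, True): 4, (False, True): 3,
            (True, False): 2, (False, False): 1}[(urgent, important)]
    if sender in vips:
        base += 1  # VIP boost
    return min(base, 5)
```

Keeping the boost additive and capping at 5 means a VIP's routine email still ranks below a VIP's urgent-and-important one.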
Generates contextual email reply drafts that mirror the sender's tone and style. Activates when the user wants to draft replies, compose responses, write back to someone, or handle emails that need a response — even 'help me answer these emails' or 'what should I say back?' Includes confidence scoring and commitment safety boundaries.
Detects email formality level and mirrors the sender's communication style in drafted replies. Activates for tone analysis, formality matching, style mirroring, or when the user wants responses that 'sound right' for a given relationship — covers formal, professional, casual, and internal registers with cultural sensitivity.
Scans Founder OS plugin deployment, scores coverage by business area, and produces an actionable automation scorecard. Used by /founder-os:audit:scan and /founder-os:audit:report commands.
Auto-injects relevant memories into the plugin execution context before a plugin runs.

Loads structured business context files into plugin execution context. Activates at the start of any plugin command to provide business knowledge, current strategy, and operational data. Plugins inline the loading logic directly (same pattern as gws CLI usage).
Read Google Calendar events and check availability using gws CLI. Use this skill when any Founder OS plugin needs to list events, check schedules, or query free/busy status — replaces Google Calendar MCP server read operations.
Create, update, and delete Google Calendar events using gws CLI. Use this skill when any Founder OS plugin needs to modify calendar events — replaces Google Calendar MCP server write operations.
Core gws CLI conventions for Founder OS plugins. Use this skill whenever working with any Google Workspace data (Gmail, Calendar, Drive) in any Founder OS plugin. Covers authentication checks, output formatting, error handling, and rate limit awareness.
Search, list, and retrieve Google Drive files using gws CLI. Use this skill when any Founder OS plugin needs to find, read, or export Drive files — replaces Google Drive MCP server read operations.
Upload, create, and update Google Drive files using gws CLI. Use this skill when any Founder OS plugin needs to write files to Drive — replaces Google Drive MCP server write operations.
Read Gmail messages and threads using gws CLI. Use this skill when any Founder OS plugin needs to search, list, or retrieve email messages — replaces Gmail MCP server read operations.
Send emails, create drafts, trash messages, and modify labels using gws CLI. Use this skill when any Founder OS plugin needs to write or modify Gmail data — replaces Gmail MCP server write operations.
MCP-to-gws CLI migration reference for Founder OS plugins. Use this skill when migrating a plugin from Google MCP servers to gws CLI commands, or when debugging migration issues.
Event observation system for the Adaptive Intelligence Engine. Defines event schema, observation conventions, and annotation templates that plugins use to emit structured events during execution.
Tier 1 learning — detects user output preferences from repeated corrections and injects them as instructions into future plugin runs.
Learning cycle for the Adaptive Intelligence Engine. Implements the Observe-Retrieve-Judge-Distill-Consolidate-Apply cycle for detecting and applying user preference patterns.
Seed data of known fallback paths for all 30 Founder OS plugins. Used by the self-healing module when a data source is classified as degradable.
Error classification, retry engine, and graceful degradation for Founder OS plugins. Classifies errors into four categories and applies appropriate recovery strategies.
Master reference for the Adaptive Intelligence Engine. Describes the four modules (hooks, learning, self-healing, routing), how plugins integrate, and the relationship to the Memory Engine.
Core API for reading and writing cross-plugin shared memory in Founder OS. Use this skill whenever any plugin needs to store, retrieve, or query persistent memories. Covers initialization, store/retrieve/delete operations, confidence mechanics, and decay rules.
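The store/retrieve operations with confidence mechanics and decay can be sketched with an SQLite-backed store. The reinforcement increment, half-life decay, and pruning threshold are illustrative assumptions; the skill's actual confidence rules are not given here:

```python
import sqlite3
import time

def init_store():
    db = sqlite3.connect(":memory:")  # the real store is a persistent file
    db.execute("CREATE TABLE memories (key TEXT PRIMARY KEY, value TEXT, "
               "confidence REAL, updated_at REAL)")
    return db

def store(db, key, value, confidence=0.5):
    # Re-storing a key reinforces it: confidence climbs toward 1.0 (assumed rule)
    db.execute("INSERT INTO memories VALUES (?, ?, ?, ?) "
               "ON CONFLICT(key) DO UPDATE SET value = excluded.value, "
               "confidence = MIN(1.0, confidence + 0.1), "
               "updated_at = excluded.updated_at",
               (key, value, confidence, time.time()))

def retrieve(db, key, half_life_days=30.0):
    row = db.execute("SELECT value, confidence, updated_at FROM memories "
                     "WHERE key = ?", (key,)).fetchone()
    if row is None:
        return None
    value, conf, ts = row
    # Exponential decay: confidence halves every half_life_days (assumed rule)
    decayed = conf * 0.5 ** ((time.time() - ts) / 86400 / half_life_days)
    return (value, round(decayed, 2)) if decayed >= 0.1 else None  # prune stale memories
```

Decaying at read time (rather than via a background job) keeps writes cheap and means a memory's effective confidence always reflects its age.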
Detects usage patterns and creates or promotes memories automatically after plugin execution.
Adds --schedule flag support to any Founder OS plugin command. Generates P27 Workflow Automator YAML files behind the scenes so users can schedule recurring plugin execution without knowing P27 exists.
Bidirectional sync between the local Intelligence SQLite database and the [FOS] Intelligence Notion database. Reads from both the patterns and healing_patterns tables.
Assigns accounting categories to invoice line items using a standard 14-category taxonomy. Activates when the user wants to categorize expenses, classify line items, assign budget codes, or asks 'what type of expense is this?' Covers tax-deductibility rules, budget code mapping, and confidence scoring.
Extracts structured data from invoice files in PDF, JPG, PNG, and TIFF formats. Activates when the user has an invoice to process, wants to read an invoice PDF, parse billing documents, or asks 'what does this invoice say?' Handles OCR processing, field extraction, confidence scoring, and multi-format detection.
Synthesizes sourced answers with inline citations from retrieved knowledge base documents. Activates when the user asks a question that should be answered from docs, wants cited answers, or asks 'what does our knowledge base say about [topic]?' Handles multi-source reconciliation, confidence rating, and graceful no-answer pathways.
Searches across Notion pages/databases and Google Drive to find relevant knowledge base documents. Activates when the user wants to search the knowledge base, find a document, look something up, or asks 'what do we have on [topic]?' Scores results by relevance and extracts content previews for downstream answer synthesis.
Crawls Notion and Google Drive to build a structured catalog of knowledge sources with classification and freshness tracking. Activates when the user wants to index the knowledge base, refresh the source catalog, scan for new documents, or asks 'update the knowledge base index.' Uses a 9-type taxonomy with metadata extraction.
Captures and stores learnings, insights, and observations in the Learning Log. Activates when the user wants to log something they learned, capture an insight, record an observation, or says 'I just realized [something]' — even if they don't explicitly say 'log a learning.' Handles categorization, tagging, and source attribution.
Searches and retrieves past learnings from the Learning Log by topic, category, or date. Activates when the user wants to find past insights, search their learning history, or asks 'what did I learn about [topic]?' Supports keyword search, category filtering, date ranges, and source-based browsing.
Synthesizes weekly learning summaries with theme detection, streak tracking, and trend analysis. Activates when the user wants a weekly learning review, insight trends, streak check, or asks 'what themes emerged from my learnings this week?' Covers pattern recognition, cross-learning connections, and growth metrics.
Applies an authentic founder writing voice to LinkedIn post content. Activates when the user wants content that sounds like a real founder, needs style matching, or asks 'make this sound more like me.' Calibrates tone, sentence rhythm, and opinion strength for professional authenticity on LinkedIn.
Crafts high-impact opening lines for LinkedIn posts designed to stop the scroll. Activates when the user wants a stronger hook, better opening line, or asks 'how should I start this LinkedIn post?' Covers proven hook formulas, curiosity gaps, and pattern-interrupt techniques that drive engagement.
Generates LinkedIn posts optimized for platform engagement and formatting best practices. Activates when the user wants to write, draft, or create LinkedIn content, or asks 'help me post about [topic] on LinkedIn.' Covers post structure, formatting rules, hashtag strategy, and engagement-driven writing patterns.
Processes meeting transcripts to extract summaries, decisions, follow-up commitments, and topic tags. Activates when the user wants to analyze a meeting, summarize a transcript, find what was decided, or pull action items from meeting notes — even a casual 'what happened in that meeting?' Runs four independent extraction pipelines on any transcript format.
Gathers and normalizes meeting transcripts from Fireflies, Otter, Gemini, Notion, or local files into a unified format. Activates when the user has a transcript to import, wants to pull meeting notes from a service, or provides a file to analyze. Auto-detects the source format and normalizes speaker labels, dates, and structure for downstream analysis.
Bidirectional sync between the local memory store and the [FOS] Memory Notion database. Reference this skill whenever pushing memories to Notion or pulling user changes back to the local SQLite store.
Gathers overnight data from Gmail, Calendar, Notion, Slack, and Drive into a consolidated morning update. Activates when the user wants a morning sync, overnight catch-up, or asks 'what happened while I was away?' Handles multi-source data gathering with overnight window calculation and graceful degradation when sources are unavailable.
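The overnight window calculation can be sketched as: use the last sync timestamp when one exists, otherwise fall back to a fixed evening cutoff. The 18:00 default is an assumption, not the skill's documented behavior:

```python
from datetime import datetime, timedelta

def overnight_window(now, last_sync=None):
    """Returns the (start, end) window to query each source for.
    Falls back to yesterday at 18:00 when no sync timestamp exists
    (the cutoff hour is an assumption)."""
    if last_sync is not None:
        return last_sync, now
    start = (now - timedelta(days=1)).replace(hour=18, minute=0,
                                              second=0, microsecond=0)
    return start, now
```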
Ranks and synthesizes priorities across all morning data sources into a unified action list. Activates when the user wants to know what to do first, needs morning priorities ranked, or asks 'what's most important today?' Provides cross-source scoring, Top-N extraction, and urgency windowing.
Applies a consistent, opinionated founder writing voice to newsletter content. Activates when the user wants to write in founder voice, match their style, rewrite content to sound more authentic, or needs newsletter prose that reads like a real founder — not a marketer. Covers tone calibration, sentence rhythm, opinion injection, practical framing, and anti-patterns.
Assembles researched topics into structured, publishable newsletter drafts with Substack-compatible formatting. Activates when the user wants to write, draft, or outline a newsletter, format content for Substack, or asks 'turn this research into a newsletter.' Covers the four-part structure (Hook, Main Content, Takeaways, CTA), link attribution, and visual rhythm.
Performs multi-source web research to discover, score, and organize findings for newsletter content. Activates when the user wants to research a topic, find trending material, gather sources for a newsletter, or asks 'what's new in [topic]?' Searches across web, GitHub, Reddit, and blogs with recency scoring and deduplication.
Designs and creates Notion databases from natural language descriptions with typed properties. Activates when the user wants to build a database, design a schema, set up a tracker, deploy a template, or asks 'create a table in Notion for [thing].' Includes 5 pre-built business templates and smart property type inference.
Handles Notion workspace operations: page creation, database queries, content updates, and workspace search. Activates when the user wants to create, search, query, update, or browse anything in Notion — even casual 'find that page in Notion' or 'add this to my database.' Covers MCP tool usage, workspace discovery, and batch operations.
Deep-dives into a single meeting to build a comprehensive preparation dossier. Activates when the user wants to prep for a specific meeting, research attendees, find related documents, or asks 'what should I know before this call?' Pulls from Calendar, Gmail, Notion CRM, and Google Drive to surface attendee profiles, open items, related docs, and communication history.
Generates framework-based talking points and discussion guides tailored to meeting type. Activates for talking point requests, discussion guides, meeting agendas, or 'what should I bring up in this meeting?' Selects from SPIN, GROW, SBI, and other frameworks based on meeting classification, with 'Do NOT Mention' guardrails and context-aware customization.
Manages the team prompt library: storing, retrieving, searching, and organizing reusable prompts. Activates when the user wants to save, find, list, or share prompts, or asks 'show me my saved prompts.' Handles CRUD operations, tagging, versioning, and team sharing workflows.
Analyzes and improves prompt quality through scoring, rewriting, and best-practice enforcement. Activates when the user wants to optimize, improve, score, or fix a prompt, or asks 'make this prompt better.' Covers clarity scoring, context injection, specificity improvements, and vague-prompt detection.
Designs 3-tier pricing packages for client proposals using the good-better-best framework. Activates when the user wants to price a proposal, create pricing tiers, figure out what to charge, or asks 'how should I package this?' Covers value differentiation, comparison tables, payment terms, milestone structures, and ROI framing.
Generates professional client proposals with 7 structured sections and 3 pricing packages. Activates when the user wants to create, write, or draft a proposal for a client, or asks 'help me put together a proposal.' Produces Markdown output plus a SOW-compatible brief for handoff to the SOW Generator.
Converts analyzed data into Mermaid charts and Markdown tables for report visualizations. Activates when the user wants to create charts, visualize data, add diagrams, or asks 'show me this as a graph.' Covers bar, line, pie, Gantt, and flowchart types with automatic type selection based on data characteristics.
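Rendering a chart from analyzed data reduces to emitting Mermaid text. A minimal sketch for the pie type (the function name and input shape are illustrative, though the emitted syntax follows Mermaid's documented pie format):

```python
def mermaid_pie(title, totals):
    """Render category totals as a Mermaid pie chart definition."""
    lines = [f"pie title {title}"]
    for label, value in totals.items():
        lines.append(f'    "{label}" : {value}')
    return "\n".join(lines)
```

Dropped into a ```mermaid fence in the report, this renders as a pie chart in any Mermaid-aware Markdown viewer; the other chart types (bar, line, Gantt, flowchart) follow the same emit-text pattern with their own syntax.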
Transforms structured data into quantitative findings, trends, and insights. Activates when the user wants to analyze data, find trends, calculate metrics, compare groups, detect outliers, or asks 'what does this data tell us?' Covers descriptive statistics, time-series trends, correlation analysis, outlier detection, and data quality assessment.
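One of the techniques named above, outlier detection, can be sketched with the standard IQR rule (Tukey's fences); the revenue figures and the `k=1.5` threshold are illustrative assumptions, not the plugin's implementation.

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Illustrative monthly revenue with one anomalous month
revenue = [1200, 1350, 1280, 1400, 1320, 9800, 1380]
print(iqr_outliers(revenue))  # → [9800]
```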
Extracts and normalizes data from CSV, JSON, text files, Notion databases, and Google Drive (via gws CLI) into a unified schema. Activates when the user wants to load data, parse files, import data sources, or prepare raw data for report generation — handles delimiter detection, encoding, type inference, and cross-source merging.
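Delimiter detection of the kind mentioned above is commonly done in Python with `csv.Sniffer`; the semicolon-delimited sample below is an illustrative assumption, not the plugin's code path.

```python
import csv
import io

def load_rows(text):
    """Detect the delimiter with csv.Sniffer, then parse rows into dicts."""
    dialect = csv.Sniffer().sniff(text)
    return list(csv.DictReader(io.StringIO(text), dialect=dialect))

# Semicolon-delimited sample; the sniffer infers ';' rather than assuming ','
sample = "name;amount\nAcme;1200\nUmbro;950\n"
for row in load_rows(sample):
    print(row["name"], row["amount"])
```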
Produces concise, decision-ready executive summaries from report data. Activates when the user wants a TLDR, management summary, key takeaways, or top-level overview — including 'give me the highlights' or 'what are the main findings?' Covers metric highlighting, recommendation prioritization, and risk/opportunity framing.
Transforms analyzed data into polished, structured Markdown reports. Activates when the user wants to write a report, draft sections, create a narrative from data, or asks 'turn this analysis into a report.' Covers report structure, tone calibration, data narration, heading hierarchy, and audience-appropriate length.
Generates a structured 6-section weekly review page in Notion from tasks, calendar, and email data. Activates for weekly reviews, end-of-week summaries, 'what did I accomplish?', weekly wins/blockers, or any retrospective on the past week. Auto-discovers task databases, detects blockers via multi-signal analysis, and generates next-week priorities.
Discovers and enumerates active Founder OS plugin databases to count task completions for ROI calculations. Activates when the system needs to scan plugins, check usage across the ecosystem, or gather task counts. Iterates across all plugin Notion databases using the HQ consolidation map for accurate cross-plugin metrics.
Converts plugin task counts into time and dollar savings metrics with visual ROI reports. Activates when the user wants to calculate time savings, compute ROI, see productivity gains, or asks 'how much time am I saving with Founder OS?' Covers the calculation engine, Mermaid charts, and structured report output.
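The underlying arithmetic is straightforward; a minimal sketch, assuming per-task time savings and an hourly rate (both illustrative numbers, not the plugin's defaults):

```python
def roi_summary(task_counts, minutes_per_task, hourly_rate=75.0):
    """Convert task counts into hours and dollars saved (hypothetical inputs)."""
    total_minutes = sum(task_counts[t] * minutes_per_task[t] for t in task_counts)
    hours = total_minutes / 60
    return {"hours_saved": round(hours, 1),
            "dollars_saved": round(hours * hourly_rate, 2)}

# Illustrative: 40 triaged emails at 5 min each, 12 meeting preps at 20 min each
counts = {"email_triage": 40, "meeting_prep": 12}
minutes = {"email_triage": 5, "meeting_prep": 20}
print(roi_summary(counts, minutes))  # → {'hours_saved': 7.3, 'dollars_saved': 550.0}
```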
Domain knowledge for creating and managing the Founder OS HQ Notion workspace.
Ranks and filters Slack messages by relevance, separating signal from noise. Activates when the user wants to prioritize Slack messages, find important discussions, extract action items from Slack, or asks 'what's important in Slack?' Covers noise filtering, signal scoring, @mention detection, action item extraction, and thread deduplication.
Retrieves and classifies Slack workspace messages for digest generation. Activates when the user wants to scan Slack, catch up on channels, find decisions, or asks 'what happened in Slack while I was away?' Covers channel scanning, message extraction, thread context, message classification, and decision detection.
Evaluates project risks for each SOW scope option in the competing-hypotheses pipeline. Activates when the user wants to assess risks, identify project red flags, or asks 'what could go wrong with this scope?' Scores risks by likelihood and impact, flags scope-specific concerns, and provides mitigation recommendations.
Defines project scope and work breakdown structures for Statement of Work documents. Activates when the user wants to scope a project, define deliverables, break down requirements, or asks 'what should be in the SOW scope?' Generates conservative, balanced, and ambitious scope interpretations for the competing-hypotheses pipeline.
Synthesizes scope definitions and risk assessments into a polished Statement of Work document. Activates when the user wants to write, generate, or format a SOW, or asks 'create the final SOW document.' Produces professional client-facing output with multiple scope options, risk annotations, and clear deliverable specifications.
Produces 7-section Standard Operating Procedures with Mermaid flowcharts and decision point mapping. Activates when the user wants to write an SOP, generate a process document, create a flowchart, or asks 'create an SOP for [process].' Covers structured formatting, decision trees, handoff protocols, and visual diagrams.
Parses and structures business processes into standardized workflow documentation with actors, tools, and handoffs. Activates when the user wants to document a workflow, map a process, create a runbook, or asks 'capture how this process works.' Decomposes operational workflows into multi-step documents with complexity assessment.
Designs YAML-based automation workflows with DAG dependency resolution and validation. Activates when the user wants to create, design, or define a workflow, write YAML steps, or asks 'help me build an automation workflow.' Covers schema structure, step definition, dependency chaining, and validation rules.
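DAG dependency resolution like the above is classically done with Kahn's algorithm; the sketch below uses a hypothetical schema where each step maps to the list of steps it depends on, which is an assumption about the plugin's YAML, not its actual format.

```python
from collections import deque

def resolve_order(steps):
    """Topologically sort steps by their dependencies (Kahn's algorithm)."""
    indegree = {name: len(deps) for name, deps in steps.items()}
    dependents = {name: [] for name in steps}
    for name, deps in steps.items():
        for dep in deps:
            dependents[dep].append(name)
    ready = deque(name for name, d in indegree.items() if d == 0)
    order = []
    while ready:
        step = ready.popleft()
        order.append(step)
        for nxt in dependents[step]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(steps):  # some steps never became ready
        raise ValueError("dependency cycle detected")
    return order

# Hypothetical workflow: each step lists the steps it needs
workflow = {
    "fetch_email": [],
    "fetch_calendar": [],
    "summarize": ["fetch_email", "fetch_calendar"],
    "post_to_notion": ["summarize"],
}
print(resolve_order(workflow))
```

The cycle check is what "validation" buys you here: a workflow whose steps depend on each other can never be scheduled, and Kahn's algorithm detects that for free.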
Runs workflow steps, manages execution context, and handles errors during workflow automation. Activates when the user wants to run, execute, resume, or check status on a workflow, or asks 'start the [name] workflow.' Covers step execution protocol, context passing between steps, error recovery, and Notion execution logging.
Schedules workflows using cron expressions with natural language conversion and OS-level integration. Activates when the user wants to schedule, automate timing, set up recurring runs, or asks 'run this workflow every Monday at 9am.' Covers cron syntax validation, natural language to cron conversion, and schedule management.
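Natural language to cron conversion can be sketched for one phrase shape, "every &lt;weekday&gt; at &lt;hour&gt;am/pm"; this deliberately tiny grammar is an illustrative assumption, not the plugin's parser.

```python
import re

DAYS = {"sunday": 0, "monday": 1, "tuesday": 2, "wednesday": 3,
        "thursday": 4, "friday": 5, "saturday": 6}

def to_cron(phrase):
    """Convert 'every <weekday> at <H>(am|pm)' into a 5-field cron expression."""
    m = re.fullmatch(r"every (\w+) at (\d{1,2})(am|pm)", phrase.lower().strip())
    if not m:
        raise ValueError(f"unsupported phrase: {phrase!r}")
    day, hour, ampm = m.group(1), int(m.group(2)), m.group(3)
    if ampm == "pm" and hour != 12:
        hour += 12          # 5pm -> 17
    if ampm == "am" and hour == 12:
        hour = 0            # 12am -> midnight
    return f"0 {hour} * * {DAYS[day]}"

print(to_cron("every Monday at 9am"))  # → 0 9 * * 1
```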
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner: agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.
Tools to maintain and improve CLAUDE.md files: audit quality, capture session learnings, and keep project memory current.
Comprehensive C4 architecture documentation workflow with bottom-up code analysis, component synthesis, container mapping, and context diagram generation