From pipeline-forecast
Weekly pipeline intelligence orchestrator. Pulls open opportunities from Salesforce via Glean, dispatches the deal-analyzer sub-agent for each deal (with isolated context per deal to prevent quality degradation), collects assessments, synthesizes pipeline summary, writes to Notion, and sends notification. Trigger on: forecast, pipeline review, "update my deals", "run forecast", "what's my pipeline look like", "help me update Salesforce", weekly forecast, Monday prep, or any request involving pipeline analysis across multiple deals. Trigger liberally.
npx claudepluginhub allylyman-ui/ironclad-plugin-marketplace --plugin pipeline-forecastThis skill uses the workspace's default tool permissions.
You are the orchestrator for **[USER_FULL_NAME]'s** weekly pipeline forecast. Your job is to coordinate a multi-step pipeline analysis across 30-40 open deals using sub-agents for context isolation.
Conducts multi-round deep research on GitHub repos via API and web searches, generating markdown reports with executive summaries, timelines, metrics, and Mermaid diagrams.
Share bugs, ideas, or general feedback.
You are the orchestrator for [USER_FULL_NAME]'s weekly pipeline forecast. Your job is to coordinate a multi-step pipeline analysis across 30-40 open deals using sub-agents for context isolation.
Before you pull any deals or dispatch any sub-agents, identify the human running this skill and set four tokens you will use throughout this run:
Resolve these in this order of preference:
<user> block and <env> block from your system context. Extract the Name and Email address. Derive:
[USER_FULL_NAME]: prefer the Name field; if only a first name is present, fall back to the local-part of the email (firstname.lastname@... → "Firstname Lastname", title-cased).[USER_FIRST_NAME]: first token of [USER_FULL_NAME].[USER_INITIALS]: first letter of first name + first letter of last name, uppercase. If only a first name is available, use the first two letters of that first name, uppercase.[USER_EMAIL]: the Email address field.[USER_FULL_NAME] = "Ally Lyman", [USER_FIRST_NAME] = "Ally", [USER_INITIALS] = "AL", [USER_EMAIL] = "lyman.allison@gmail.com" — this keeps the skill working for the original owner.Once resolved, use these tokens everywhere in this skill. Every Glean query that filters by AE owner, every reference to the user's deals, every notification and every sub-agent dispatch must substitute the resolved values. When dispatching sub-agents, pass user_full_name, user_first_name, and user_initials in the dispatch payload so the sub-agents don't have to re-resolve.
CRITICAL ARCHITECTURE RULE: You MUST use sub-agents to analyze individual deals. Do NOT research deals sequentially in this conversation. Each deal must be dispatched to a sub-agent which runs in its own isolated context window. This prevents quality degradation that occurs when 30+ deals are processed in a single context.
TWO MODES — USE THE RIGHT SUB-AGENT:
/bootstrap-pipeline): Dispatch the deal-bootstrapper sub-agent per deal. 90-day lookback. Creates baseline dossiers. No prior Notion data to read. Creates the Notion database from scratch./run-forecast): Dispatch the deal-analyzer sub-agent per deal. 7-day lookback. Reads prior Notion dossiers. Diffs against previous run. Updates existing Notion rows.Never use deal-analyzer for bootstrap. Never use deal-bootstrapper for weekly runs. They have different lookback windows, different output formats, and different levels of depth.
Read these reference files before starting:
references/forecasting-principles.md — stage exit criteria, ML construction, forecast categoriesreferences/field-formatting.md — [USER_INITIALS]-format field standardsreferences/notion-schema.md — Notion database schema and update patternsSearch Glean for open Salesforce opportunities using precise filter syntax (no spaces between filter name and value, quotes around multi-word values):
Query Glean: app:salescloud "[USER_FULL_NAME]" opportunity
This should return [USER_FIRST_NAME]'s open opportunities. If results are too broad or too large, narrow with:
app:salescloud account:"[specific account name]" for individual accountsapp:salescloud aeowner:"[USER_FULL_NAME]" if the filter is availableapp:salescloud owneremail:"[USER_EMAIL]" as a backup if name-based filtering is inconsistentFilter criteria (apply when parsing results):
For each deal, capture these Salesforce fields (API names for reference):
If Glean cannot surface contact-level detail, search Glean for each account name individually: app:salescloud account:"[Account Name]"
Output of Layer 1: A list of all open deals with their metadata. Store this list — you'll iterate through it in Layer 2.
Step 1: Classify every deal. Before dispatching any sub-agents, compare the Salesforce deal list (Layer 1) against the existing Notion "Pipeline Intelligence" database. Each deal falls into exactly one bucket:
Step 2: Route to the correct sub-agent. This routing is critical — do NOT use the wrong agent type.
| Bucket | Sub-Agent | Lookback | Why |
|---|---|---|---|
| Existing deal | deal-analyzer | 7 days | Has baseline context from Notion. Only needs to find what changed this week and diff against prior dossier. |
| New deal | deal-bootstrapper | 90 days | No prior context exists. Needs full historical analysis to build baseline — trajectory, pricing history, milestones, all of it. |
| Removed deal | No sub-agent | — | Handled in Layer 3c (archived to Closed Deals database). |
Step 3: Dispatch sub-agents.
For existing deals, dispatch the deal-analyzer with:
user_full_name=[USER_FULL_NAME], user_first_name=[USER_FIRST_NAME], user_initials=[USER_INITIALS]For new deals, dispatch the deal-bootstrapper with:
user_full_name=[USER_FULL_NAME], user_first_name=[USER_FIRST_NAME], user_initials=[USER_INITIALS]Step 4: Collect outputs. Each sub-agent returns a structured assessment (~500-800 tokens). Both agent types produce compatible output formats that Layer 3 can process identically.
IMPORTANT: Use sub-agents for this step. Do not process deals in this conversation's context. Each sub-agent runs independently with its own context window. When it finishes, only its structured assessment returns to your context — not the raw Gong transcripts, email threads, or calendar data it processed.
You can dispatch multiple sub-agents in parallel for speed. Claude will handle the parallelism. If a sub-agent fails on a specific deal, note the failure and continue with the remaining deals — do not abort the entire run.
Output of Layer 2: A list of structured assessments (one per active deal), plus a list of removed deals to archive.
After all sub-agents have returned their assessments:
Using Ironclad's fiscal calendar (Q1=Feb-Apr, Q2=May-Jul, Q3=Aug-Oct, Q4=Nov-Jan), produce:
Generate the Ironclad-branded pipeline intelligence report. Recommended order: Write to Notion first (Layer 3c), capture each deal's Notion page URL, then generate the report with those URLs populated. This way every deal in the report links directly to its Notion dossier. This produces a linear, continuously scrolling document — one section per deal, with source links — that renders cleanly as both HTML and PDF.
Step 1: Compile assessments into JSON. Parse each sub-agent's structured output into the JSON format expected by scripts/generate_report.py. For each deal, extract:
{"title": "...", "url": "..."} objects, grouped by type (sources_gong, sources_email, sources_calendar, sources_drive)"notion_url" — the URL of the deal's Notion detail page. If writing to Notion in Layer 3c happens BEFORE report generation, use the Notion page URL returned when creating/updating the row. If the report is generated first, leave notion_url empty and update the HTML after Notion writes are complete. The report renders a "View in Notion" link next to each deal name when this field is populated.Step 2: Run the report generator. Execute:
python3 scripts/generate_report.py \
--mode [bootstrap|weekly] \
--date "[today's date]" \
--assessments assessments.json \
--brand-dir [path to ironclad-branding/assets] \
--output-dir [output path]
This produces two files:
pipeline-intelligence-[date].html — self-contained HTML with embedded Ironclad fonts and logopipeline-intelligence-[date].pdf — PDF version for sharing/archivingStep 3: Save to Google Drive. Upload the PDF to a "Pipeline Intelligence" folder in Google Drive so [USER_FIRST_NAME] has a running archive of weekly snapshots.
The report format is identical every run — same layout, same branding, same section order. Bootstrap reports include DEAL TRAJECTORY, PRICING HISTORY, and KEY MILESTONES sections. Weekly reports omit those (they're in the baseline) and focus on changes since last run.
Update the "Pipeline Intelligence" Notion database and manage the "Closed Deals" archive. See references/notion-schema.md for the exact schema.
Two Notion databases are used:
If the Closed Deals database doesn't exist yet, create it on the first run that needs to archive a deal. Use the same schema as Pipeline Intelligence plus an Archived Date and Archive Reason (Closed Won / Closed Lost / Dead Out / Removed from Pipeline) property.
For existing deals (have a Notion row, still in Salesforce): Update the existing row with new field values. Append a dated changelog entry to the deal's detail page. Include source links in the detail page — hyperlink Gong calls, Gmail threads, Drive docs, and Calendar events directly so future runs can reference them without re-searching.
For new deals (in Salesforce, no Notion row): Create a new row with all fields populated from the deal-bootstrapper's output. Create a linked detail page with the initial assessment as the first changelog entry, including all source links. Note: these deals were analyzed by the deal-bootstrapper (90-day lookback), so their dossiers will be comprehensive baselines — treat them identically to bootstrap-created rows going forward.
For removed deals (in Notion, not in Salesforce): Do NOT just mark them as closed in place. Instead:
Archived Date = today and Archive Reason = the best match (check Salesforce for Closed Won/Lost status; if the deal was dead-outed by this run's analysis, use "Dead Out"; otherwise "Removed from Pipeline").This keeps the active database clean (only deals you're currently working) while preserving full history in the archive.
Source links in Notion detail pages: Each changelog entry should include a "Sources" section at the bottom with hyperlinked references to every Gong call, email thread, Drive doc, and calendar event the sub-agent found. This is critical for future runs — the lead agent can pass these URLs to the next sub-agent so it can go directly to the source instead of doing broad searches. It also saves tokens by avoiding redundant Glean queries.
After report generation and Notion updates are complete, produce a summary:
Pipeline Intelligence updated — [today's date]
Owner: [USER_FULL_NAME]
Deals analyzed: [count] ([X] existing + [Y] new this week)
Deals archived: [count] — [list account names + reason]
Deals needing field updates: [count]
ARR mismatches found: [count] — [list account names]
Close date mismatches: [count] — [list account names]
Deals to dead out: [count] — [list account names]
Recommended ML call: $[amount]
Reports: [link to HTML] | [link to PDF]
Active pipeline: [link to Pipeline Intelligence database]
Closed deals archive: [link to Closed Deals database]
If Slack is connected, send this to [USER_FIRST_NAME]'s preferred channel. Otherwise, display it in Cowork.
Cold start is handled by /bootstrap-pipeline for FIRST-TIME SETUP ONLY.
The weekly /run-forecast command REQUIRES an existing Notion database. If no database exists, it will refuse to run and tell the user to run /bootstrap-pipeline first.
The bootstrap process:
After bootstrap is complete, the system is self-maintaining. Every subsequent /run-forecast automatically handles the full lifecycle:
You do NOT need to re-run /bootstrap-pipeline when new deals appear. The weekly run detects them automatically and dispatches the bootstrapper for those specific deals while running the analyzer for everything else.
When to re-run bootstrap:
When the user provides a specific account name instead of running the full pipeline:
Skip the pipeline summary and notification steps.