Generates styled Word company tear sheets from S&P Capital IQ data for equity research, M&A, corp dev, or sales audiences. Handles public/private firms.
Install: npx claudepluginhub kensho-technologies/spglobal-agent-skills --plugin sp-global. This skill uses the workspace's default tool permissions.
Bundled files: LICENSE, references/corp-dev.md, references/equity-research.md, references/ib-ma.md, references/sales-bd.md
Generate audience-specific company tear sheets by pulling live data from S&P Capital IQ via the S&P Global MCP tools and formatting the result as a professional Word document.
These are sensible defaults. To customize for your firm's brand, modify this section — common changes include swapping the color palette, changing the font (Calibri is standard at many banks), and updating the disclaimer text.
Colors:
Typography (sizes in half-points for docx-js):
Company Header Banner:
Use borders: none and shading: none on all cells. Set column widths to 50% each. Place left-column fields (ticker, HQ, founded, employees) as separate paragraphs in the left cell; place right-column fields (market cap, EV, stock price, shares outstanding) in the right cell. Each field is a single paragraph: a bold run for the label, a regular run for the value.
Section Headers:
Set border: { bottom: { style: BorderStyle.SINGLE, size: 1, color: "CCCCCC" } } on the heading paragraph (docx-js uses the singular border option on paragraphs). Do not use doc.addParagraph() with a separate horizontal rule element. Do not use thematicBreak. The border must be on the heading paragraph itself with 0pt spacing after, so the rule sits tight against the header text.
Bullet Formatting:
Tables (financial data only):
Layout:
Number formatting:
Footer (document footer, not inline): Place the source attribution and disclaimer in the actual document footer (repeated on every page), not as inline body text at the bottom. The footer is exactly two lines, centered, on every page:
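Because docx-js measures run sizes in half-points, every point size in this configuration doubles when it reaches code. A minimal sketch of that conversion (the helper name `pt` and the `SIZES` grouping are illustrative, not part of the library):

```javascript
// docx-js TextRun `size` is measured in half-points, so a 9pt run needs
// size: 18. A tiny helper avoids off-by-2x bugs in generated scripts.
function pt(points) {
  return Math.round(points * 2); // convert points -> half-points
}

// Sizes used throughout this skill's templates:
const SIZES = {
  banner: pt(18),   // 36 - company name in the header banner
  section: pt(11),  // 22 - section headers
  body: pt(9),      // 18 - bullets and key-value fields
  table: pt(8.5),   // 17 - financial table cells
  footer: pt(7),    // 14 - footer disclaimer lines
};

console.log(SIZES);
```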
You MUST use these exact functions to create document elements. Do NOT write custom docx-js styling code. Copy these functions into your generated Node script and call them. The Style Configuration prose above remains as documentation; these functions are the enforcement mechanism.
const docx = require("docx");
const {
Document, Paragraph, TextRun, Table, TableRow, TableCell,
WidthType, AlignmentType, BorderStyle, ShadingType,
Header, Footer, PageNumber, HeadingLevel, TableLayoutType,
convertInchesToTwip
} = docx;
// ── Color constants ──
const COLORS = {
PRIMARY: "1F3864",
ACCENT: "2E75B6",
TABLE_HEADER_FILL: "D6E4F0",
TABLE_ALT_ROW: "F2F2F2",
TABLE_BORDER: "CCCCCC",
HEADER_TEXT: "FFFFFF",
FOOTER_TEXT: "666666",
};
const FONT = "Arial";
// ── 1. createHeaderBanner ──
// Returns an array of docx elements: [banner paragraph, key-value table]
function createHeaderBanner(companyName, leftFields, rightFields) {
// leftFields / rightFields: arrays of { label: string, value: string }
const banner = new Paragraph({
children: [
new TextRun({
text: companyName,
bold: true,
size: 36, // 18pt
color: COLORS.HEADER_TEXT,
font: FONT,
}),
],
shading: { type: ShadingType.CLEAR, color: "auto", fill: COLORS.PRIMARY },
spacing: { after: 0 },
alignment: AlignmentType.LEFT,
});
function buildCellParagraphs(fields) {
return fields.map(
(f) =>
new Paragraph({
children: [
new TextRun({ text: f.label + " ", bold: true, size: 18, font: FONT }),
new TextRun({ text: f.value, size: 18, font: FONT }),
],
spacing: { after: 40 },
})
);
}
const noBorder = { style: BorderStyle.NONE, size: 0, color: "FFFFFF" };
const noBorders = { top: noBorder, bottom: noBorder, left: noBorder, right: noBorder };
const noShading = { type: ShadingType.CLEAR, color: "auto", fill: "FFFFFF" };
const kvTable = new Table({
rows: [
new TableRow({
children: [
new TableCell({
children: buildCellParagraphs(leftFields),
width: { size: 50, type: WidthType.PERCENTAGE },
borders: noBorders,
shading: noShading,
}),
new TableCell({
children: buildCellParagraphs(rightFields),
width: { size: 50, type: WidthType.PERCENTAGE },
borders: noBorders,
shading: noShading,
}),
],
}),
],
width: { size: 100, type: WidthType.PERCENTAGE },
});
return [banner, kvTable];
}
// ── 2. createSectionHeader ──
// Returns a single Paragraph with bottom border rule
function createSectionHeader(text) {
return new Paragraph({
children: [
new TextRun({
text: text,
bold: true,
size: 22, // 11pt
color: COLORS.PRIMARY,
font: FONT,
}),
],
spacing: { before: 240, after: 0 }, // 12pt before, 0pt after
border: {
bottom: { style: BorderStyle.SINGLE, size: 1, color: COLORS.TABLE_BORDER },
},
});
}
// ── 3. createTable ──
// headers: string[], rows: string[][], options: { accentHeader?, fontSize? }
function createTable(headers, rows, options = {}) {
const fontSize = options.fontSize || 17; // 8.5pt default
const headerFill = options.accentHeader ? COLORS.ACCENT : COLORS.TABLE_HEADER_FILL;
const headerTextColor = options.accentHeader ? COLORS.HEADER_TEXT : "000000";
const cellBorders = {
top: { style: BorderStyle.SINGLE, size: 1, color: COLORS.TABLE_BORDER },
bottom: { style: BorderStyle.SINGLE, size: 1, color: COLORS.TABLE_BORDER },
left: { style: BorderStyle.SINGLE, size: 1, color: COLORS.TABLE_BORDER },
right: { style: BorderStyle.SINGLE, size: 1, color: COLORS.TABLE_BORDER },
};
const cellMargins = { top: 40, bottom: 40, left: 80, right: 80 };
function isNumeric(val) {
if (typeof val !== "string") return false;
const cleaned = val.replace(/[,$%()]/g, "").trim();
return cleaned !== "" && !isNaN(cleaned);
}
// Header row
const headerRow = new TableRow({
children: headers.map(
(h) =>
new TableCell({
children: [
new Paragraph({
children: [
new TextRun({
text: h,
bold: true,
size: fontSize,
color: headerTextColor,
font: FONT,
}),
],
}),
],
shading: { type: ShadingType.CLEAR, color: "auto", fill: headerFill },
borders: cellBorders,
margins: cellMargins,
})
),
});
// Data rows with alternating shading
const dataRows = rows.map((row, rowIdx) => {
const fill = rowIdx % 2 === 1 ? COLORS.TABLE_ALT_ROW : "FFFFFF";
return new TableRow({
children: row.map((cell, colIdx) => {
const align = colIdx > 0 && isNumeric(cell)
? AlignmentType.RIGHT
: AlignmentType.LEFT;
return new TableCell({
children: [
new Paragraph({
children: [
new TextRun({ text: cell, size: fontSize, font: FONT }),
],
alignment: align,
}),
],
shading: { type: ShadingType.CLEAR, color: "auto", fill: fill },
borders: cellBorders,
margins: cellMargins,
});
}),
});
});
return new Table({
rows: [headerRow, ...dataRows],
width: { size: 100, type: WidthType.PERCENTAGE },
});
}
// ── 4. createBulletList ──
// items: string[], style: "synthesis" | "informational"
function createBulletList(items, style = "synthesis") {
const indent =
style === "synthesis"
? { left: 360, hanging: 180 } // 360 DXA left, hanging indent for bullet
: { left: 180 }; // 180 DXA, no hanging
return items.map(
(item) =>
new Paragraph({
children: [
new TextRun({ text: "• ", font: FONT, size: 18 }),
new TextRun({ text: item, font: FONT, size: 18 }),
],
indent: indent,
spacing: { after: 60 },
})
);
}
// ── 5. createFooter ──
// date: string (e.g., "February 23, 2026")
function createFooter(date) {
return new Footer({
children: [
new Paragraph({
children: [
new TextRun({
text: `Data: S&P Capital IQ via Kensho | Analysis: AI-generated | ${date}`,
italics: true,
size: 14, // 7pt
color: COLORS.FOOTER_TEXT,
font: FONT,
}),
],
alignment: AlignmentType.CENTER,
}),
new Paragraph({
children: [
new TextRun({
text: "For informational purposes only. Not investment advice.",
italics: true,
size: 14,
color: COLORS.FOOTER_TEXT,
font: FONT,
}),
],
alignment: AlignmentType.CENTER,
}),
],
});
}
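To show how the five helpers slot into the Document constructor (including the footers.default wiring described above), here is a minimal configuration sketch. It assumes the helper functions above are in scope and the docx package is installed; every field value shown is a placeholder, not live Capital IQ data.

```javascript
// Minimal wiring sketch: assemble a Document from the five helpers.
// Placeholder data only; real scripts read values from the intermediate files.
const fs = require("fs");
const { Document, Packer } = require("docx");

const doc = new Document({
  sections: [
    {
      footers: { default: createFooter("February 23, 2026") },
      children: [
        ...createHeaderBanner(
          "Example Corp",
          [{ label: "Ticker:", value: "EXMP" }, { label: "HQ:", value: "New York, NY" }],
          [{ label: "Market Cap:", value: "$13.2B" }, { label: "EV:", value: "$15.3B" }]
        ),
        createSectionHeader("Financial Summary"),
        createTable(["Metric", "FY2023", "FY2024"], [["Revenue ($M)", "11,716", "13,159"]]),
        createSectionHeader("Earnings Highlights"),
        ...createBulletList(["Revenue grew 12.3% year over year."], "synthesis"),
      ],
    },
  ],
});

Packer.toBuffer(doc).then((buf) => fs.writeFileSync("/tmp/tear-sheet/example.docx", buf));
```

Note that createHeaderBanner and createBulletList return arrays, so they are spread into children, while the other helpers return single elements.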
Usage in generated scripts:
- createHeaderBanner(...) instead of manually building banner paragraphs and tables
- createSectionHeader(...) for every section title — never manually set paragraph borders
- createTable(...) for all tabular data — financial summaries, trading comps, M&A activity, relationship tables, funding history, etc. Pass { accentHeader: true } for M&A activity tables (IB/M&A template). For non-numeric tables (e.g., relationships, ownership), the function still works correctly — it only right-aligns cells that contain numeric values.
- createBulletList(items, "synthesis") for earnings highlights, strategic fit, integration considerations, and conversation starters
- createBulletList(items, "informational") for relationship entries
- createFooter(date) passed to the Document constructor's footers.default property

What these functions eliminate:
- Inconsistent table shading (ShadingType.CLEAR everywhere)
- Hand-rolled section rules (border.bottom on the paragraph itself)
- Visible borders on the key-value banner table (borders: none)
- Mixed bullet glyphs (• character only)

Gather up to four things before proceeding:
If the user doesn't specify an audience, ask.
Read the corresponding reference file from this skill's directory:
- references/equity-research.md
- references/ib-ma.md
- references/corp-dev.md
- references/sales-bd.md

Each reference defines sections, a query plan, formatting guidance, and page length defaults.
First: Create the intermediate file directory:
mkdir -p /tmp/tear-sheet/
Use the S&P Global MCP tools (also known as the Kensho LLM-ready API). Claude will have access to structured tools for financial data, company information, market data, consensus estimates, earnings transcripts, M&A transactions, and business relationships. The query plans in each reference file describe what data to retrieve for each section — map these to the appropriate S&P Global tools available in the conversation.
After each query step, immediately write the retrieved data to the intermediate file(s) specified in the reference file's query plan. Do not defer writes — data written to disk is protected from context degradation in long conversations.
Query strategy: Each reference file includes a query plan with 4-6 data retrieval steps. These are starting points, not rigid constraints. Prioritize data completeness over minimizing calls:
User-specified comps: If the user provided comparable companies, query financials and multiples for each comp explicitly. If no comps were provided, use whatever peer data the tools return, or identify peers from the company's sector using the competitors tool.
Optional context from the user: Listen for additional context the user provides naturally. If they mention who the acquirer is ("we're looking at this for our platform"), what they sell ("we sell data analytics to banks"), or who the likely buyers are ("this would be interesting to Salesforce or Microsoft"), incorporate that context into the relevant synthesis sections (Strategic Fit, Conversation Starters, Deal Angle). Don't prompt for this information — just use it if offered.
Private company handling: CIQ includes private company data, so query the same way. However, expect sparser results. When generating for a private company:
After all data collection is complete and intermediate files are written, compute all derived metrics in a single dedicated pass. This is a calculation-only step — no new MCP queries.
Read all intermediate files back into context, then compute:
Validation: During this calculation pass, enforce all arithmetic checks:
If a validation fails: attempt recalculation from raw data. If still inconsistent, flag the metric as "N/A" rather than publishing incorrect numbers. Quiet math errors in a tear sheet destroy credibility.
Write results to /tmp/tear-sheet/calculations.csv with columns: metric,value,formula,components
Example rows:
metric,value,formula,components
gross_margin_fy2024,72.4%,gross_profit/revenue,"9524/13159"
revenue_growth_fy2024,12.3%,(current-prior)/prior,"13159/11716"
net_debt_fy2024,2150,total_debt-cash,"4200-2050"
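The calculation pass above can be sketched as a small script. The helper names (pct, calcRow) are illustrative, not part of any library; the numbers reproduce the example rows:

```javascript
// Sketch of the Step 3b calculation pass (helper names are illustrative).
// Computes derived metrics, guards against bad inputs, and emits
// calculations.csv rows in the metric,value,formula,components format.
function pct(numerator, denominator, decimals = 1) {
  if (!denominator) return null; // guard: divide-by-zero flags the metric as N/A
  return ((numerator / denominator) * 100).toFixed(decimals) + "%";
}

function calcRow(metric, value, formula, components) {
  // Quote the components field because it can contain commas.
  return [metric, value ?? "N/A", formula, `"${components}"`].join(",");
}

// Gross margin FY2024: gross_profit / revenue
const grossMargin = pct(9524, 13159);
// Revenue growth FY2024: (current - prior) / prior
const revGrowth = pct(13159 - 11716, 11716);
// Net debt FY2024: total_debt - cash
const netDebt = 4200 - 2050;

console.log(calcRow("gross_margin_fy2024", grossMargin, "gross_profit/revenue", "9524/13159"));
console.log(calcRow("revenue_growth_fy2024", revGrowth, "(current-prior)/prior", "13159/11716"));
console.log(calcRow("net_debt_fy2024", netDebt, "total_debt-cash", "4200-2050"));
```

Returning null from the guard (rendered as "N/A") implements the fallback rule: flag rather than publish an incorrect number.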
Before generating the document, verify that all intermediate files are present and populated.
Read each intermediate file via separate read operations and print a verification summary:
=== Tear Sheet Data Verification ===
company-profile.txt: ✓ (12 fields)
financials.csv: ✓ (36 rows)
segments.csv: ✓ (8 rows)
valuation.csv: ✓ (5 rows)
calculations.csv: ✓ (18 rows)
earnings.txt: ✓ (populated)
relationships.txt: ⚠ MISSING
peer-comps.csv: ✓ (12 rows)
================================
Soft gate: If any file expected for the current audience type is missing or empty, print a warning but continue. The tear sheet handles missing data gracefully with "N/A" and section skipping. However, the warning ensures visibility into what data was lost.
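A sketch of the verification printer and soft gate, under the assumption that row/field counts have already been gathered (the function name verifySummary is illustrative, and text files would report field counts rather than rows):

```javascript
// Sketch of the verification summary (names are illustrative). Given a
// manifest of expected files and observed counts, prints the checklist and
// returns the missing files so the soft gate can warn without aborting.
function verifySummary(manifest) {
  // manifest: { filename: count } where a null/0 count means missing or empty
  const missing = [];
  console.log("=== Tear Sheet Data Verification ===");
  for (const [file, count] of Object.entries(manifest)) {
    if (count) {
      console.log(`${file}: \u2713 (${count} rows)`);
    } else {
      console.log(`${file}: \u26A0 MISSING`);
      missing.push(file);
    }
  }
  console.log("================================");
  return missing;
}

const missing = verifySummary({
  "company-profile.txt": 12,
  "financials.csv": 36,
  "relationships.txt": null,
});
// Soft gate: warn but continue; the templates degrade gracefully to "N/A".
if (missing.length) console.warn(`Warning: missing ${missing.join(", ")}; continuing with N/A.`);
```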
Critical rule: The files — not your memory of earlier conversation — are the single source of truth for every number in the document. When generating the DOCX in Step 4, read values from the intermediate files. Do not rely on conversation context for financial data.
Read /mnt/skills/public/docx/SKILL.md for docx creation mechanics (docx-js via Node). Apply the Style Configuration above plus the section-specific formatting in the reference file.
Page length defaults (user can override):
If content exceeds the target, each reference file specifies which sections to cut first.
Output filename: [CompanyName]_TearSheet_[Audience]_[YYYYMMDD].docx
Example: Nvidia_TearSheet_CorpDev_20260220.docx
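The filename pattern can be built mechanically; a sketch (the helper name and the character-stripping rule are assumptions, chosen so names like "S&P" stay filesystem-safe):

```javascript
// Sketch: build the output filename [CompanyName]_TearSheet_[Audience]_[YYYYMMDD].docx
function tearSheetFilename(company, audience, date) {
  const yyyymmdd =
    date.getFullYear().toString() +
    String(date.getMonth() + 1).padStart(2, "0") +
    String(date.getDate()).padStart(2, "0");
  // Strip characters that are unsafe in filenames (e.g. "S&P" -> "SP").
  const clean = (s) => s.replace(/[^A-Za-z0-9]/g, "");
  return `${clean(company)}_TearSheet_${clean(audience)}_${yyyymmdd}.docx`;
}

console.log(tearSheetFilename("Nvidia", "CorpDev", new Date(2026, 1, 20)));
// -> Nvidia_TearSheet_CorpDev_20260220.docx
```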
Save to /mnt/user-data/outputs/ and present to the user.
These override everything else:
All data retrieved from MCP tools must be persisted to structured intermediate files before document generation. These files — not conversation context — are the single source of truth for every number in the document.
Setup: At the start of Step 3, create the working directory:
mkdir -p /tmp/tear-sheet/
Write-after-query mandate: After each MCP query step completes, immediately write the retrieved data to the appropriate intermediate file(s). Do not wait until all queries finish. Each reference file's query plan specifies which file(s) to write after each step.
File schemas:
| File | Format | Columns / Structure | Used By |
|---|---|---|---|
| /tmp/tear-sheet/company-profile.txt | Key-value text | name, ticker, exchange, HQ, sector, industry, founded, employees, market_cap, enterprise_value, stock_price, 52wk_high, 52wk_low, shares_outstanding, beta, ownership | All |
| /tmp/tear-sheet/financials.csv | CSV | period,line_item,value,source | All |
| /tmp/tear-sheet/segments.csv | CSV | period,segment_name,revenue,source | ER, IB, CD |
| /tmp/tear-sheet/valuation.csv | CSV | metric,trailing,forward,source | ER, IB, CD |
| /tmp/tear-sheet/consensus.csv | CSV | metric,fy_year,value,source | ER |
| /tmp/tear-sheet/earnings.txt | Structured text | Quarter, date, key quotes, guidance, key drivers | ER, IB, Sales |
| /tmp/tear-sheet/relationships.txt | Structured text | Customers, suppliers, partners, competitors — each with descriptors | IB, CD, Sales |
| /tmp/tear-sheet/peer-comps.csv | CSV | ticker,metric,value,source | ER, IB, CD |
| /tmp/tear-sheet/ma-activity.csv | CSV | date,target,deal_value,type,rationale,source | IB, CD |
| /tmp/tear-sheet/calculations.csv | CSV | metric,value,formula,components | All (written in Step 3b) |
Abbreviations: ER = Equity Research, IB = IB/M&A, CD = Corp Dev, Sales = Sales/BD.
Not every audience type uses every file — the reference files define which query steps apply. Files not relevant to the current audience type need not be created.
Raw values only. Intermediate files store raw values as returned by the tools. Do not pre-compute margins, growth rates, or other derived metrics in these files — that happens in Step 3b.
Page budget enforcement: Each reference file specifies a default page length and a numbered cut order. If the rendered document exceeds the target, apply cuts in the order specified — do not attempt to shrink font sizes or margins below the template minimums. The cut order is a strict priority stack: cut section 1 completely before touching section 2.
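The strict priority stack can be sketched as a loop (function and section names are illustrative; the page estimator is a hypothetical stand-in for whatever length check the generator uses):

```javascript
// Sketch of the page-budget cut loop: apply the reference file's numbered
// cut order in strict priority until the estimate fits the target. Each
// section is dropped entirely; fonts and margins are never shrunk.
function applyCutOrder(sections, cutOrder, estimatePages, targetPages) {
  const remaining = [...sections];
  const cut = [];
  for (const name of cutOrder) {
    if (estimatePages(remaining) <= targetPages) break; // budget met; stop cutting
    const i = remaining.findIndex((s) => s.name === name);
    if (i !== -1) cut.push(...remaining.splice(i, 1)); // drop the section entirely
  }
  return { remaining, cut };
}

// Example with a toy estimator (0.5 pages per section, hypothetical):
const sections = [{ name: "Overview" }, { name: "M&A Activity" }, { name: "Relationships" }];
const { cut } = applyCutOrder(
  sections,
  ["Relationships", "M&A Activity"], // cut order from the reference file
  (secs) => secs.length * 0.5,
  1
);
console.log(cut.map((s) => s.name)); // only the first cut was needed
```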
Arithmetic validation is enforced in Step 3b (Calculate Derived Metrics): all margin calculations, growth rates, segment totals, percentage columns, and valuation cross-checks are validated during the dedicated calculation pass, before document generation begins. See Step 3b for the full validation checklist.