By lyndonkl
Comprehensive collection of 33 production-ready skills for strategic thinking, decision-making, research methods, architecture design, and problem-solving. Includes frameworks like Bayesian reasoning, kill criteria, layered reasoning, information architecture, and more.
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
Evaluates M&A targets by computing standalone value, synergy value, and maximum acquisition price. Produces an acquisition-value framework: standalone value plus synergies minus integration costs. Use when evaluating acquisition targets, computing synergy value, determining bid price, analyzing mergers, or when user mentions M&A, acquisition valuation, synergy analysis, merger premium, or target valuation.
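The acquisition-value framework reduces to simple arithmetic; a minimal sketch with hypothetical figures (every number below is invented for illustration):

```python
# All figures hypothetical, in dollars.
standalone_value = 800_000_000    # target's intrinsic value on its own (e.g. DCF)
synergy_value = 150_000_000       # present value of combined-entity synergies
integration_costs = 50_000_000    # one-time costs of capturing those synergies

# Maximum defensible acquisition price under the framework.
max_acquisition_price = standalone_value + synergy_value - integration_costs
```

Any bid above `max_acquisition_price` transfers the full synergy value (and more) to the seller.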
Advises on capital allocation decisions including financing mix (debt vs equity), dividend policy, share buybacks, and project investment evaluation. Integrates financial analysis, cost of capital, capital structure optimization, dividend/buyback policy, and project NPV/IRR analysis into a unified recommendation. Use when user asks about capital allocation strategy, optimal debt level, dividend policy, project investment decisions, whether to raise debt or equity, or how to deploy excess cash.
An orchestrating agent that collaboratively helps designers apply cognitive science principles to create effective visual interfaces, data visualizations, educational content, and presentations. Guides users through cognitive foundations, information architecture, D3 visualization implementation, storytelling, design evaluation, and fallacy prevention. Use when user mentions cognitive design, visual hierarchy, dashboard design, data visualization, design review, cognitive load, or creating cognitively aligned interfaces.
End-to-end company analysis pipeline for standard, profitable companies. Orchestrates business narrative, financial statement cleanup, cost of capital estimation, intrinsic (DCF) and relative (multiples) valuation, capital structure optimization, dividend/buyback policy assessment, and final valuation reconciliation into an investment recommendation. Use when user asks for a complete company analysis, equity valuation, fair value estimate, or investment recommendation for a publicly traded, profitable company.
Shape-watcher for the substacker publication. Every 4-6 weeks reads the accumulating corpus, proposes emerging sections (when active-launch is not pre-planned), audits drift on existing sections, maintains section-map.md and per-section profiles. Also handles active mode — classifies drafts into sections, derives per-section voice overlays once a section has 3+ posts, hands off visual identity to cognitive-design-architect. Use every 4-6 weeks for the standard cycle, or on-demand when launching a section. Trigger keywords: curator, sections, section map, section cluster, drift audit, prune, section launch, classify post.
Turns each published substacker essay into platform-native rewrites. Primary platforms: LinkedIn post + Substack Note + cross-poster blurb (writer's preferred distribution surface). Optional: X thread (generated only if the essay translates well to X; otherwise skipped cleanly). Platform-native reshaping, not paste-same-text-everywhere. Preserves the writer's voice. User posts manually — no auto-posting. Use within 24h of any published post or on specific-platform re-translation requests. Trigger keywords: distribution, translate, LinkedIn post, Substack Note, cross-post, social distribution, amplify, X thread optional.
Two-pass review (structural + voice) on substacker drafts. Produces marked-up critique, never a replacement draft. Flags voice-don'ts (delve, unpack, paradigm shift, generic opener), catches hedges that weaken rather than specify, blocks AI-explainer slop, verifies opener/closer/analogy-weight/rhythm/citation-form/section-breaks. Loads global voice-profile + per-section voice overlay. Use before publishing any draft. Trigger keywords: edit, review, voice check, structural pass, critique, pre-publish, does this sound like me, slop check.
An orchestrating agent that collaboratively helps ML engineers apply group theory and symmetry principles to neural network design. Guides users through symmetry discovery, validation, group identification, equivariant architecture design, and model verification. Use when user mentions symmetry, invariance, equivariance, group theory in ML, or geometric deep learning.
An orchestrating agent that collaboratively helps engineers build graph-based retrieval-augmented generation systems. Guides users through knowledge graph construction, embedding strategy design, retrieval orchestration, system integration, and evaluation. Use when user mentions knowledge graph, GraphRAG, graph retrieval, entity extraction for RAG, Neo4j with LLM, or building graph-augmented AI systems.
Weekly Substack performance analyzer for substacker. Pulls stats directly from the Substack dashboard via Claude-in-Chrome browser automation (primary) or ingests a manual CSV export (fallback). Computes rolling 4-week baseline, attributes over/under-performance in plain English with calibrated confidence, tracks per-section metrics once sections exist, produces a 400-800 word weekly report. Feeds Curator (section pruning) and Growth Strategist (quarterly rollups). Use Monday mornings. Trigger keywords: growth, stats, weekly report, Substack analytics, subscribers, open rate, per-section performance, Chrome stats, auto fetch.
Quarterly strategic advisor for substacker. Synthesizes the corpus, Curator's section map, Growth Analyst's rolled-up weekly data, and writer's stated goals into a 1500-3000 word review. Surfaces three uncomfortable questions (evidence + reasoning + downside), assesses section portfolio, proposes goal diffs, names one bet for the next quarter, identifies kill list. Rolling 13-week windows, NOT calendar quarters. Use quarterly or on-demand when a specific strategic decision is pending (e.g., "should I launch paid?"). Trigger keywords: quarterly, strategic review, zoomout, uncomfortable questions, paid tier, product hiding, kill list, goals reset.
Continuous vigilance specialist for a household finance pipeline. Watches for missed bills, late recurring payments, duplicate charges within 48 hours, statistical anomalies (>3σ above merchant historical average), first-time large merchants, unexpected fees, and identity-theft signals. Produces severity-tagged alert files with transaction ids, evidence, and recommended actions (call bank, freeze card, dispute, monitor, negotiate fee). Use after every drop, when checking for anomalies post-statement, or when the user reports a suspicious charge.
Meticulous bookkeeper for a household finance pipeline. Takes a manifest produced by the intake classifier and the underlying PDFs, extracts every transaction or holding, normalizes amounts to integer cents and dates to ISO 8601, deduplicates against the existing transaction store, categorizes via taxonomy rules + LLM fallback, reconciles statement math, and writes append-only updates to transactions.json, balances.json, investments.json, retirement.json, hsa.json, and mortgage.json. Halts and flags rather than committing dirty data when reconciliation fails. Use after intake completes, when reprocessing a single statement, or when correcting a categorization batch.
Master orchestrator and synthesizer for the household finance team. Runs the per-drop pipeline (intake → bookkeeper → spending/vigilance/savings/investments/tax in parallel → CFO synthesis), the monthly briefing pipeline, the weekly dashboard generation, and the on-demand chat mode where the user asks ad-hoc finance questions. Produces the one-page monthly briefing the household actually reads — net-worth snapshot, four numbers (income / spend / savings rate / Δ net worth), three wins, three issues with owners and actions, goals dashboard, and a 30-day look-ahead. Always grounds in data, never executes financial actions. Use as the entry point for any household finance interaction — drops, monthly briefs, weekly dashboards, or ad-hoc questions.
Generates the weekly static HTML dashboard for the household finance JSON store. Reads the canonical data files, applies cognitive-design principles for visual hierarchy and visual-storytelling-design for narrative annotations, runs cognitive-fallacies-guard to prevent visual misleads, and produces a single self-contained HTML file with six panels (net worth over time, cash flow, recurring & subscriptions, goals, asset allocation, vigilance feed). Output is a static file — no server, no live data binding — that opens in any browser. Use every Sunday for the weekly dashboard, after a per-drop pipeline completes, or when the user asks for a fresh dashboard.
Document intake specialist for household finance pipelines. Classifies every PDF in a drop folder by document type (checking, savings, credit card, brokerage, 401k, HSA, mortgage, tax form, insurance), matches each against existing accounts by institution + mask, proposes new account records when no match exists, archives PDFs by statement period (not drop date), and produces a manifest.json for downstream agents. Use as the first stage of any per-drop pipeline, when adding a new batch of bank or brokerage statements, or when classifying a single PDF.
Tracks and advises on the long-horizon portfolio — taxable brokerage, 401k, and HSA invested portion — by aggregating asset allocation across all three, computing drift versus target, optimizing contributions (employer match, annual limits, HSA investment threshold), scanning for tax-loss-harvest candidates with wash-sale awareness, and producing rebalance proposals that prefer tax-advantaged accounts. Runs a quarterly retirement projection. Never executes trades. Use as the investment phase of the monthly briefing, the quarterly retirement projection, after market drawdowns, or when the user asks for a portfolio review.
Manages the household short-to-medium-horizon balance sheet — emergency fund, sinking funds, mortgage, any consumer debt, and credit-card rewards optimization. Tracks goal progress and pacing, sizes the emergency fund against essential expenses, ranks debts by APR for avalanche payoff, models mortgage prepayment vs market return, identifies bills due for renegotiation, and surfaces every recommendation with the dollar impact attached. Use as the savings-debt phase of the monthly briefing, when revisiting goals quarterly, or when the user asks whether to pay down the mortgage early.
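The avalanche ranking this skill applies is a sort by APR; a minimal sketch with hypothetical debts:

```python
# Hypothetical debts as (name, balance, APR).
debts = [
    ("mortgage", 250_000.0, 0.045),
    ("credit_card", 4_000.0, 0.229),
    ("auto_loan", 12_000.0, 0.069),
]

# Avalanche: direct extra payments at the highest APR first.
avalanche_order = [name for name, _, apr in sorted(debts, key=lambda d: d[2], reverse=True)]
```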
Household spending analyst that answers, every month, where the money went, what changed versus prior months and same-month-last-year, what is recurring, what is an outlier, and what is coming up. Produces the Spending section of the monthly briefing, runs a quarterly subscription audit, projects 60-day daily cash flow per account, and maintains a 12-month forward seasonal calendar. Use as the third phase of a per-drop or monthly pipeline, when the user asks for a spending review, or when sizing pre-funding for upcoming events.
Tracks tax-relevant events year-round so April is a non-event. Maintains the expected-document checklist (W-2, 1099-DIV/INT/B, 1098, 5498-SA, 1095), accumulates deduction candidates from transactions, manages the HSA receipt vault for deferred reimbursement, surfaces tax-aware moves throughout the year (TLH, Roth conversion windows, charitable bunching, backdoor/mega-backdoor Roth eligibility), validates estimated tax payments against safe-harbor rules, and assembles a year-end packet for the household's CPA or tax software. Does not file. Does not give legal advice. Use quarterly, before estimated-tax deadlines, in Q4 for tax-loss-harvest planning, and in December for year-end packet assembly.
Generates 5 distinct intuitive framings for a technical topic the writer wants to explain — everyday analogy, physical metaphor, contrarian, historical, counterfactual. Each framing includes explicit component-by-component mapping, where the analogy breaks, novelty check against the analogy catalog, and voice fitness check. Produces seeds the writer picks from, never finished drafts. Use when the writer asks for framings for a topic, wants candidate analogies, or mentions "intuition", "analogy", "metaphor", "framings", "explain X intuitively".
Guides private-to-public transition valuation and IPO pricing strategy. Transitions from total beta to market beta, removes illiquidity discount, uses public comparable multiples for pricing, and optimizes capital structure for public markets. Produces pre-IPO valuation, post-IPO fair value, and recommended pricing range. Use when planning an IPO, pricing a public offering, transitioning from private to public valuation, or when user mentions IPO valuation, IPO pricing, going public, or pre-IPO vs post-IPO value.
Ingests and indexes the writer's Substack corpus. Watches the substacker inbox/ for new material (markdown notes, transcripts, Claude conversation exports, Readwise highlights), normalizes format, tags by topic and intuition density, dedupes against existing corpus, maintains the topic-ledger, and sweeps stale seeds. Use at session start, when the user drops raw material into inbox/, when asking "what have I written about X", or when user mentions ingest, library, corpus, seeds, indexing, topic ledger, dedup, intuition density.
Plans a weekly H2H Categories matchup strategy — which of the 10 cats (R/HR/RBI/SB/OBP, K/ERA/WHIP/QS/SV) to push, which to maintain, which to punt. Fires advocate (Balancer) + critic (Puncher) variants per matchup. Drives waiver/streaming/lineup priorities for the week. Use on Monday mornings or when planning a weekly matchup strategy.
Orchestrates a multi-agent team for Yahoo Fantasy Baseball management. Spawns specialists (lineup, waiver, streaming, trade, category, playoff) each in advocate + critic variants, runs dialectical-mapping-steelmanning synthesis, deliberation-debate-red-teaming stress tests, and produces plain-English morning briefs for a user with zero baseball knowledge. Use when running morning brief, weekly kickoff, evaluating trades, or any Yahoo Fantasy Baseball decision for the user's league.
Picks today's Yahoo Fantasy Baseball lineup for a 26-roster H2H Categories league with OBP/QS scoring. Fires in two variants — advocate (steelmans each start) and critic (red-teams each start) — then synthesizes per slot. Emits daily_quality signals and writes a START/SIT decision per active roster position. Use for daily lineup optimization, start/sit calls, platoon decisions, or daily Yahoo Fantasy Baseball management.
Plans Yahoo Fantasy Baseball playoff pushes for weeks 21-23 (ending Sep 6). From July onward, identifies players with most games and best matchups in playoff weeks, suggests trade-deadline (Aug 6) targets, evaluates IL stashes. Fires advocate (Aggressor) + critic (Stabilizer) variants. Use after July 1 for playoff positioning, trade-deadline planning, or late-season roster moves.
Plans weekly pitching moves for a Yahoo Fantasy Baseball H2H Categories league with QS/K/ERA/WHIP/SV scoring (no wins). Identifies two-start SPs, favorable spot starts, and rostered SPs to bench on bad matchups. Fires advocate (Stream) + critic (Hold) variants per candidate. Use for weekly pitching strategy, two-start pitcher targeting, spot-start streaming, or K-chasing plans.
Evaluates Yahoo Fantasy Baseball trade offers. Runs advocate (Acceptor) + critic (Rejecter) variants, computes per-category deltas for both sides across all 10 categories (R/HR/RBI/SB/OBP, K/ERA/WHIP/QS/SV), factors in regression, positional scarcity, and playoff schedule. Verdict is ACCEPT, COUNTER (with specific counter), or REJECT. Use when a trade offer arrives, evaluating potential trade, or assessing a 2-for-1 or consolidation move.
Scans Yahoo Fantasy Baseball free agents for weekly waiver claims (FAAB league, $100 season budget). Fires in advocate (Buy) and critic (Pass) variants, synthesizes, and produces ranked ADD + BID $X recommendations with drop candidates. Use for weekly waiver priority review, FAAB bid sizing, prospect call-ups, closer-committee speculation, or injury replacement.
An orchestrating agent for scientific writing that routes requests to specialized skills for manuscripts, grants, letters, emails, career documents, and cross-cutting clarity review. Provides multi-pass editing following structured workflows with document-type-specific frameworks. Use when user needs help with scientific or academic writing.
Handles edge-case valuations for companies that break standard DCF assumptions. Covers four situation types: high-growth firms with negative earnings (revenue-based DCF with failure adjustment), distressed firms (equity-as-call-option via Black-Scholes), private companies (total beta and liquidity discount), and financial services firms (excess return model). Use when valuing unprofitable startups, distressed companies, private firms, banks, insurance companies, or companies with negative earnings.
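The equity-as-call-option approach for distressed firms can be sketched as a standard Black-Scholes call, treating firm value as the underlying and the face value of debt as the strike. All inputs below are hypothetical, and this is a textbook sketch rather than the skill's exact model:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def equity_as_call(firm_value, debt_face, r, sigma, maturity):
    """Black-Scholes value of equity as a call on firm value,
    with strike = face value of debt, asset volatility sigma."""
    d1 = (math.log(firm_value / debt_face) + (r + 0.5 * sigma ** 2) * maturity) / (
        sigma * math.sqrt(maturity)
    )
    d2 = d1 - sigma * math.sqrt(maturity)
    return firm_value * norm_cdf(d1) - debt_face * math.exp(-r * maturity) * norm_cdf(d2)

# Hypothetical distressed firm: assets worth less than the debt's face value,
# yet equity retains option value from volatility and time.
equity_value = equity_as_call(firm_value=100.0, debt_face=120.0, r=0.04, sigma=0.6, maturity=5.0)
```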
An elite forecasting agent that orchestrates reference class forecasting, Fermi decomposition, Bayesian updating, premortems, and bias checking. Adheres to the "Outside View First" principle and generates granular, calibrated probabilities with comprehensive analysis. Use when user asks for forecast, prediction, or probability estimate.
Pre-publish ML/systems claim check for substacker drafts. Extracts atomic technical claims from prose, classifies each as simplified-correct / simplified-boundary / wrong / contested / overclaim, cross-references primary sources (arXiv, RFC, textbooks — never blog posts), flags boundary-breaks as teaching opportunities to fold into the post. Never rewrites the draft. 48h research cap per draft. Use after Editor approves voice and before the writer publishes a post that makes technical claims (especially Agent Workshop posts). Trigger keywords: technical review, fact-check, claim check, ML claim, simplified vs wrong, boundary break, cross-reference paper.
Weekly ML/systems signal radar for substacker. Scans ~30-50 curated intuition-first sources (Olah, Weng, Karpathy, Jay Alammar, Raschka, Willison, Transformer Circuits, Interconnects, arXiv cs.LG, Hugging Face papers, etc.) for signal-not-noise items, cross-references against topic-ledger, produces a lean digest of ≤10 items. Weekly Friday evening. Not daily. Trigger keywords: trends, ML news, weekly digest, signal, watchlist, external signal, what's new in ML.
An orchestrating agent for writing that routes requests to specialized skills for structure planning, revision, stickiness enhancement, and pre-publishing checks. Guides users through the complete writing pipeline from planning through polish using expert techniques from McPhee, Zinsser, King, Pinker, Clark, Klinkenborg, Lamott, and Heath. Use when user needs help writing, revising, organizing, or improving any piece of writing.
Builds structured abstraction ladders that translate high-level principles into concrete, actionable examples across 3-5 levels. Bridges communication gaps, reveals hidden assumptions, and tests whether abstract ideas work in practice. Use when explaining concepts at different expertise levels, moving between abstract principles and concrete implementation, identifying edge cases by testing ideas against scenarios, designing layered documentation, decomposing complex problems into actionable steps, or bridging strategy-execution gaps.
Guides the creation of evidence-based academic recommendation letters, reference letters, and award nominations that combine concrete examples, meaningful comparisons, and genuine enthusiasm. Use when writing recommendation letters for students, postdocs, or colleagues, or when user mentions recommendation letter, reference, nomination, letter of support, endorsement, or needs help with strong advocacy and comparative statements.
Documents significant architectural and technical decisions with full context, alternatives considered, trade-offs analyzed, and consequences understood. Creates a decision trail that helps teams understand why decisions were made. Use when choosing between technology options, making infrastructure decisions, establishing standards, migrating systems, or when user mentions ADR, architecture decision, technical decision record, or decision documentation.
Produces a Bayesian prior probability that an offered transaction is +EV for the recipient, given that the counterparty chose to propose it. Applies Akerlof market-for-lemons logic: if they offered it, they believe it is +EV for them, so the prior that it is +EV for us is materially below 50%. Reusable across trade evaluation, waiver drops (another team dropping a player is also adverse selection), job-offer analysis, M&A, and any "someone offered me this" situation. Use when you receive an unsolicited trade/offer/proposal, analyzing incoming trade prior, evaluating why a counterparty proposed a deal, or when user mentions adverse selection, market for lemons, why did they offer this, incoming trade prior, they proposed it, Bayesian adjustment on received offer.
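The adverse-selection prior reduces to a one-step Bayes update; a minimal sketch, where the likelihoods are illustrative assumptions rather than calibrated values:

```python
# Illustrative likelihoods, not calibrated values: the counterparty is assumed
# far more likely to propose deals that favour them than deals that favour us.
p_good_for_us = 0.50       # naive prior before conditioning on the offer
p_offer_if_good = 0.20     # they rarely offer away deals that are +EV for us
p_offer_if_bad = 0.80      # they readily offer deals that are -EV for us

numerator = p_offer_if_good * p_good_for_us
evidence = numerator + p_offer_if_bad * (1 - p_good_for_us)
prior_after_offer = numerator / evidence   # materially below 50%
```

Under these assumptions the 50% naive prior falls to 20% once you condition on the fact that the deal was offered at all.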
Creates actionable alignment frameworks that give teams a shared North Star (direction), values (guardrails), and decision tenets (behavioral standards). Enables autonomous decision-making while maintaining organizational coherence. Use when starting new teams, scaling organizations, defining culture, establishing product vision, resolving misalignment, creating strategic clarity, or when user mentions North Star, team values, mission, principles, guardrails, decision framework, or cultural alignment.
For every analogy in a substacker draft, verifies it carries mechanical weight — the analogy does real work explaining the mechanism, not merely decorates it. Cross-references analogy-catalog.md for novelty (is this analogy reused from a prior post?) and domain fit (biology > organizational > sports preferred; physics/military disfavored). Use whenever an analogy appears in the draft. Trigger keywords: analogy weight, decorative, mechanical weight, reused analogy, catalog check, metaphor check.
Scans transactions for fraud and anomaly signals — duplicate charges within 48 hours, transactions more than 3 standard deviations above a merchant's historical average, first-ever transaction with a new merchant above a high-dollar threshold, and unusual geography or time. Produces severity-tagged alerts with the transaction id, evidence, and a recommended action (call bank, freeze card, dispute, monitor). Use for vigilance scans on every drop, after any large unexplained outflow, or when user mentions fraud check, suspicious charge, anomaly detection, or duplicate charge.
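The 3σ and 48-hour rules can be sketched directly (merchant history and timestamps are hypothetical):

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

# Hypothetical charge history for one merchant.
merchant_history = [42.0, 38.5, 41.0, 40.0, 39.5, 43.0]
new_charge = 180.0

mu, sigma = mean(merchant_history), stdev(merchant_history)
z_score = (new_charge - mu) / sigma
is_anomaly = z_score > 3.0   # more than 3 standard deviations above history

# Duplicate rule: matching charge within a 48-hour window (timestamps hypothetical).
first_seen = datetime(2024, 5, 1, 9, 30)
second_seen = datetime(2024, 5, 2, 20, 0)
is_duplicate_window = abs(second_seen - first_seen) <= timedelta(hours=48)
```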
Takes one strategic question about substacker ("should we launch paid?", "is this section dead?", "are we writing for the wrong audience?") and produces the mandatory evidence + reasoning + downside triad plus a recommendation. Run three times in each Growth Strategist review. Trigger keywords: uncomfortable question, strategic question, evidence reasoning downside, triad.
For each substacker post that materially over- or under-performs the rolling baseline (|z| ≥ 1.0), produces a plain-English attribution paragraph with calibrated confidence (high / medium / low / unexplained). Considers subject-line effect, topic zeitgeist, external share, day-of-week, length effect, and audience-notes signals. Labels unexplained outliers explicitly rather than fabricating a story. Use after compute-baseline when outlier posts exist. Trigger keywords: attribution, why did this post work, outlier explanation, performance analysis.
Computes the optimal shaded bid for a first-price sealed-bid auction given a true private value, an estimate of the number of competing bidders N, and a value-distribution assumption. Implements the `(N-1)/N` equilibrium shading rule for uniform private values, adjusts for log-normal or empirical value distributions, layers a risk-aversion adjustment, and caps output against the bidder's remaining budget. Domain-neutral auction theory reusable across fantasy sports (baseball FAAB, NBA/NHL waiver auctions), prediction-market limit sizing, sealed procurement bids, and any blind-bid context. Use when user mentions "first-price auction bid", "sealed bid shading", "(N-1)/N", "FAAB bid amount", "auction shading", "optimal bid first-price", "bid for sealed-bid", "blind bid sizing", or when downstream logic needs a principled shade factor rather than an ad-hoc heuristic.
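A minimal sketch of the `(N-1)/N` equilibrium shading rule for uniform private values, with the budget cap applied; the FAAB numbers are hypothetical:

```python
def shaded_bid(true_value, n_bidders, budget):
    """Equilibrium first-price bid for uniform private values:
    shade the true value by (N-1)/N, then cap at the remaining budget."""
    shade = (n_bidders - 1) / n_bidders
    return min(true_value * shade, budget)

# Hypothetical FAAB example: $40 valuation, 10 active bidders, ample budget.
bid = shaded_bid(true_value=40.0, n_bidders=10, budget=100.0)
```

With 10 bidders the shade factor is 0.9, so a $40 valuation becomes a $36 bid; a tighter budget simply caps the result.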
Applies a Bayesian haircut to a bid valuation for common-value auctions where winning is itself evidence the bidder over-estimated. Takes a raw valuation, a value-type classification (common_value / private_value / mixed), the number of informed bidders N, and a signal-dispersion estimate, and returns an adjusted valuation. Domain-neutral and reusable across fantasy FAAB, prediction markets, M&A bids, ad-auction budgets, and any generic bidding context. Use when user mentions "winner's curse", "common value auction", "valuation haircut", "adverse valuation", "Bayesian bid adjustment", or "over-paying in auction".
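One way to sketch the haircut: condition on winning, i.e. on having drawn roughly the highest of N noisy signals. The `sqrt(2 ln N)` term below is a common heuristic for the expected overshoot of the maximum of N signals, used here as an illustrative stand-in rather than the skill's exact adjustment:

```python
import math

def winners_curse_adjusted(raw_valuation, value_type, n_bidders, signal_sigma):
    """Haircut a valuation for the winner's curse in common-value settings.
    Heuristic: expected overestimate of the max of N signals ~ sigma * sqrt(2 ln N)."""
    if value_type == "private_value" or n_bidders <= 1:
        return raw_valuation           # no curse when values are purely private
    haircut = signal_sigma * math.sqrt(2.0 * math.log(n_bidders))
    if value_type == "mixed":
        haircut *= 0.5                 # assumed: half-weight for mixed settings
    return raw_valuation - haircut

adjusted = winners_curse_adjusted(
    raw_valuation=100.0, value_type="common_value", n_bidders=8, signal_sigma=10.0
)
```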
Checks every post currently assigned to a substacker section against that section's promise and flags posts that no longer fit. Distinguishes acceptable-stretch (minor) from borderline (surface for review) from genuine-drift (violates promise). Never reassigns automatically — only flags. Use on every Curator run where at least one section already exists. Trigger keywords: drift, drift audit, section fit, promise violation, post in wrong section.
Applies Bayesian reasoning to systematically update probability estimates with new evidence, helping make better forecasts and avoid overconfidence. Use when making predictions or judgments under uncertainty, forecasting outcomes, evaluating probabilities, testing hypotheses, calibrating confidence, assessing risks with uncertain data, or when user mentions priors, likelihoods, Bayes theorem, probability updates, forecasting, calibration, or belief revision.
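The core update is a one-line Bayes rule; a minimal sketch with illustrative numbers:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) from prior P(H) and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Illustrative numbers: a 30% prior, evidence 4x as likely if the hypothesis holds.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.20)
```

Here the 30% prior rises to roughly 63% — strong evidence moves the estimate, but nowhere near certainty.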
Applies structured divergent-convergent thinking to generate many creative options, organize them into meaningful clusters, then systematically evaluate and narrow to the strongest choices. Balances creative exploration with disciplined decision-making. Use when exploring product ideas, solving open-ended problems, generating strategic alternatives, developing research questions, designing experiments, or when user mentions brainstorming, ideation, divergent thinking, generating options, or evaluating alternatives.
Constructs a structured narrative linking a company's qualitative business story to quantitative valuation drivers (revenue growth, target margin, reinvestment efficiency, cost of capital, failure risk). Classifies the company within a 6-stage corporate life cycle and sizes the total addressable market. Use when starting a company analysis, building a valuation narrative, assessing competitive position, sizing TAM, or when user mentions business narrative, story to numbers, life cycle stage, or company analysis.
Analyzes a company's debt-equity mix and determines the optimal capital structure that minimizes WACC. Computes WACC at each debt ratio from 0% to 90%, identifies the minimum, and recommends whether to add or reduce debt with specific debt type matching (maturity, currency, fixed vs floating). Use when analyzing capital structure, optimizing debt levels, evaluating leverage, or when user mentions optimal debt ratio, capital structure, leverage optimization, debt capacity, or recapitalization.
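The WACC scan over debt ratios can be sketched as a grid search; the leverage-response curves for the costs of equity and debt below are invented for illustration, not estimated from data:

```python
def cost_equity(d):
    # Hypothetical: levered cost of equity rises with the debt ratio.
    return 0.08 + 0.10 * d ** 2

def cost_debt(d):
    # Hypothetical: pre-tax cost of debt rises as default risk grows.
    return 0.04 + 0.08 * d ** 2

def wacc(d, tax_rate=0.25):
    # After-tax weighted average cost of capital at debt ratio d.
    return (1 - d) * cost_equity(d) + d * cost_debt(d) * (1 - tax_rate)

ratios = [i / 10 for i in range(10)]   # 0% to 90% in 10-point steps
best_ratio = min(ratios, key=wacc)
```

With these assumed curves the minimum falls at a 30% debt ratio; real inputs would shift it.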
Guides the creation of career documents for academic advancement including research statements, teaching statements, diversity statements, CVs, and biosketches. Combines strategic positioning, narrative coherence, and institutional alignment with authentic representation of contributions and vision. Use when writing or reviewing faculty applications, fellowship statements, promotion packages, award applications, or when user mentions research statement, teaching philosophy, diversity statement, biosketch, academic CV, or career narrative.
Projects 30-90 days of daily cash flow per cash account by combining current balance, scheduled recurring inflows and outflows from recurring.json, and a 6-month rolling average of discretionary spend, then flags days where projected balance falls below a configurable safety floor. Use for forward planning, detecting upcoming overdrafts, sizing pre-funding for sinking funds, or when user mentions cash flow projection, runway, balance forecast, or upcoming bills coverage.
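The projection loop can be sketched as a day-by-day balance walk against a safety floor; the starting balance, scheduled items, and discretionary average below are all hypothetical (in the real skill the scheduled items come from recurring.json):

```python
balance = 2_000.0
daily_discretionary = 60.0   # 6-month rolling average of daily spend (hypothetical)
safety_floor = 500.0
# day-of-projection -> scheduled net amount (positive inflow, negative outflow).
scheduled = {3: 3_200.0, 5: -1_800.0, 20: -1_400.0}

low_days = []
for day in range(1, 31):     # 30-day horizon
    balance += scheduled.get(day, 0.0) - daily_discretionary
    if balance < safety_floor:
        low_days.append(day)
```

Under these numbers the balance first breaches the floor on day 26, flagging the final stretch of the month for pre-funding.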
Computes the best-response allocation of roster resources across categories in a Head-to-Head Categories matchup. Given our per-category capacity, the opponent's projected output, per-category win probabilities (from matchup-win-probability-sim), and a K-of-N winning threshold, classifies categories into pushed / contested / conceded buckets, emits per-category leverage weights for downstream lineup and streaming decisions, computes the resulting K-of-N win probability, and writes a plain-English rationale. Domain-neutral — portable to any fantasy sport with H2H Cats scoring (MLB 10-cat, NBA 9-cat, NHL 10-cat). Use when you need push/punt decisions, dominated-strategy elimination, leverage weights per cat, or best-response allocation; or when the user mentions "category allocation", "push or punt", "K of N cats", "dominated strategy elimination", "best response allocation", "Blotto fantasy", "leverage weights per cat", or "which cats to push".
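The K-of-N win probability over independent category outcomes can be computed with a small dynamic programme (the per-category probabilities below are hypothetical):

```python
def k_of_n_win_prob(cat_probs, k):
    """P(win at least k of the n categories), treating outcomes as independent."""
    dist = [1.0]   # dist[j] = P(exactly j categories won so far)
    for p in cat_probs:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1 - p)      # lose this category
            nxt[j + 1] += q * p        # win this category
        dist = nxt
    return sum(dist[k:])

# Hypothetical per-category win probabilities for a 10-cat matchup; need 6 to win.
probs = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.45, 0.4, 0.3, 0.2]
p_win_week = k_of_n_win_prob(probs, 6)
```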
For each spending category, computes current-period spend, 6-month rolling average, year-over-year delta, and budget variance, then flags categories that are outliers (>1.5x rolling average or >130% of budget). Produces a ranked list of categories that grew, shrank, or stayed flat, plus the top transactions driving each outlier. Use for monthly spending reviews, identifying lifestyle creep, evaluating budget adherence, or when user mentions category trends, spending changes, budget variance, or outlier spend.
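The outlier rule (>1.5x rolling average or >130% of budget) is a direct predicate; a minimal sketch with hypothetical category data:

```python
def is_outlier(current, rolling_avg, budget):
    """Flag a category exceeding 1.5x its 6-month rolling average
    or 130% of its budget."""
    return current > 1.5 * rolling_avg or current > 1.3 * budget

# Hypothetical categories: (name, current spend, 6-month rolling avg, budget).
categories = [
    ("groceries", 620.0, 580.0, 600.0),
    ("dining", 410.0, 250.0, 300.0),     # exceeds 1.5x its rolling average
    ("utilities", 200.0, 195.0, 220.0),
]
flagged = [name for name, cur, avg, bud in categories if is_outlier(cur, avg, bud)]
```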
Systematically investigates causal relationships to identify true root causes rather than correlations or symptoms. Distinguishes genuine causation from spurious associations, tests competing explanations, and designs interventions addressing underlying drivers. Use when investigating why something happened, debugging systems, analyzing failures, evaluating policy impacts, or when user mentions root cause, causal chain, confounding, spurious correlation, or asks "why did this really happen?"
Chains estimation, decision analysis, and storytelling to transform uncertain choices into clear, stakeholder-ready recommendations. Quantifies uncertain variables, applies expected value analysis to identify the best option, then packages the analysis into a persuasive narrative. Use when evaluating strategic options (build vs buy, market entry, resource allocation), quantifying tradeoffs, justifying investments, pitching to decision-makers, or when user mentions ROI analysis, expected value, business case, cost-benefit, or needs to combine estimation with persuasive communication.
Facilitates structured roleplay, debate, and synthesis to resolve decisions with multiple legitimate perspectives and inherent tensions. Surfaces assumptions that single-viewpoint analysis would miss and integrates competing priorities into coherent recommendations. Use when stakeholders have competing priorities (growth vs. sustainability, speed vs. quality), need to pressure-test ideas from different angles, explore tradeoffs between incompatible values, or synthesize conflicting expert opinions into coherent strategy.
Chains together clear specifications, proactive risk analysis (premortem/register), and measurable success metrics into a comprehensive planning artifact for high-stakes initiatives. Use when planning migrations, launches, or strategic changes that need implementation roadmaps, risk mitigation, and instrumentation. Invoke when user mentions "plan this migration", "launch strategy", "implementation roadmap", "what could go wrong", "how do we measure success", or when high-impact decisions need comprehensive planning.
Cross-references each proposed analogy in a 5-framing set against substacker shared-context/analogy-catalog.md to flag reuse. Classifies each analogy as new, reused-from-catalog (and which entry), or adjacent-to-catalog (close to an existing entry but not identical). Prevents the writer from recycling "imagine a library" for the twentieth time. Use after generate-analogy-set and before presenting framings to the writer. Trigger keywords: novelty, catalog, analogy reuse, already used, imagine a library.
Checks whether the substacker corpus has enough material to justify a Curator run. Counts published posts, time-gates against the last review, and reports go/no-go with the specific gate that failed. Use before any Curator run, and on cold start to decide whether to propose sections yet. Trigger keywords: readiness, corpus ready, gate check, cadence gate, pre-flight.
Guides cooking through culinary principles, food science, and flavor architecture rather than rote recipe steps. Covers technique teaching (knife skills, sauces, searing, braising), food science (Maillard reaction, emulsions, brining), flavor troubleshooting (salt/acid/fat/heat balance), menu planning, ingredient substitutions, plating, and cultural cuisine exploration. Use when users mention cooking, recipes, chef, cuisine, flavor, technique, plating, food science, seasoning, or culinary questions.
Verifies every paper or named research result cited in a substacker draft uses the inline "Author(s), Institution, Year" form per style-guide, not a bare hyperlink or title-alone reference. Flags bare-hyperlink citations and missing-institution attributions. Use whenever the draft references external research. Trigger keywords: citation, paper citation, bare hyperlink, authors, institution, reference format.
Extracts atomic technical claims from a substacker essay draft, converting flowing intuition-first prose into a numbered list where each item is a statement that could in principle be verified or falsified. Skips non-technical sections (personal anecdote, motivation, call-to-action). Use when the Technical Reviewer starts a per-draft review. Trigger keywords: extract claims, atomic claims, technical claim list, fact-check prep.
Assigns each extracted claim to one of five buckets — simplified-correct, simplified-boundary, wrong, contested, overclaim — with low/medium/high confidence and one-sentence rationale. Classification happens before primary-source verification (which confirms, not invents). Use for every claim from claim-extractor. Trigger keywords: classify, bucket, simplified vs wrong, claim type, technical classification.
Assigns a substacker draft or published post to the best-fitting section (or to unassigned) based on content + section promises in section-map.md. Used by the Editor on every draft review (to load the right voice overlay) and by the Curator in batch mode. Trigger keywords: classify post, section assignment, which section, route post, per-draft section.
Evaluates the final paragraph of a substacker draft for compression and closing form — bolded maxim, forward-looking question, or compressed mechanism statement. For series posts (frontmatter series: {slug}), verifies the running scoreboard (P&L, Brier, W-L) is present and updated. Use on every draft. Blocks publication of series posts missing the scoreboard. Trigger keywords: closer, closing, last paragraph, bolded maxim, scoreboard, CTA, wrap up, conclusion.
Performs axial-coding-style thematic clustering over the substacker corpus of published posts to surface candidate sections. Uses Braun & Clarke's six-phase thematic analysis — familiarization, initial coding, searching for themes, reviewing themes, defining themes, naming. Reads full bodies, not titles. Use when re-opening the section question. Trigger keywords: cluster, theme, axial coding, thematic analysis, candidate sections.
Generates structured scaffolds (frameworks, checklists, templates) for technical work — TDD test suites, exploratory data analysis plans, statistical analysis designs, causal vs predictive modeling objectives, and validation checklists. Use when starting technical work that needs systematic planning before execution. Invoke when user mentions "write tests for", "explore this dataset", "analyze", "model", "validate", "design an A/B test", or when the work needs a scaffold before diving in.
Grounds visual design decisions in cognitive psychology principles — perception, attention, memory, Gestalt grouping, and visual encoding hierarchy — explaining WHY certain designs work. Covers interfaces, data visualizations, educational content, and presentations. Invoke when user mentions cognitive load, visual hierarchy, working memory, preattentive processing, Gestalt principles, encoding hierarchy, or cognitive design pyramid. For design evaluation, use `design-evaluation-audit`. For fallacy prevention, use `cognitive-fallacies-guard`. For data storytelling, use `visual-storytelling-design`.
Detects and prevents visual misleads, cognitive biases, and data integrity violations in visualizations, dashboards, reports, and presentations. Audits charts for honesty, diagnoses misinterpretation causes, and provides specific fixes. Invoke when user mentions chartjunk, misleading chart, truncated axis, data integrity, visual deception, 3D chart problems, cherry-picking data, or needs to audit visualizations for accuracy. For general design evaluation, use `design-evaluation-audit`. For cognitive foundations, use `cognitive-design`.
Transforms analysis, data, and complex information into clear, persuasive narratives tailored to specific audiences — executives, customers, investors, or non-technical stakeholders. Provides story structures (Hero's Journey, Problem-Solution-Benefit, Situation-Complication-Resolution) and audience adaptation techniques. Use when presenting findings, explaining technical concepts to non-technical audiences, writing announcements, or when user mentions "write this for", "explain to", "present findings", "make this compelling", or "audience is".
Computes substacker's rolling 4-week baseline for open rate, click rate, views-per-send, and weekly subscriber delta using corpus/stats/ archived CSVs. Produces per-metric z-scores of the current week against the baseline and flags cold-start windows where fewer than 4 prior weeks exist. Use after ingest-substack-csv each Monday. Trigger keywords: baseline, rolling median, z-score, cold start, per-metric comparison.
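The baseline comparison described above can be sketched as follows. Function and field names are illustrative, not the skill's actual implementation, and this sketch uses mean and sample standard deviation for the z-score; the skill's trigger keywords mention a rolling median, so its center statistic may differ.

```python
from statistics import mean, stdev

def weekly_zscores(history, current, min_weeks=4):
    """Compare the current week's metrics against a rolling baseline.

    history: list of prior-week metric dicts; current: dict of the same metrics.
    Flags a cold start when fewer than min_weeks prior weeks exist.
    """
    if len(history) < min_weeks:
        return {"cold_start": True, "weeks_available": len(history)}
    window = history[-min_weeks:]  # most recent 4 weeks form the baseline
    z = {}
    for metric, value in current.items():
        baseline = [week[metric] for week in window]
        mu, sigma = mean(baseline), stdev(baseline)
        z[metric] = 0.0 if sigma == 0 else (value - mu) / sigma
    return {"cold_start": False, "z": z}
```

A current-week open rate a full standard deviation above the baseline would come back as roughly z = 1.0, while a history of only two weeks returns the cold-start flag instead of scores.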
Turns limitations into creative fuel by strategically imposing constraints that force novel thinking, break habitual patterns, and reveal unexpected solutions. Covers resource constraints (budget/time/tools), format constraints (length/medium), rule-based constraints, and perspective constraints. Use when brainstorming feels stuck or generates obvious ideas, working with limited resources, designing with specific limitations, or when user mentions "think outside the box", "we're stuck", "same old ideas", "tight constraints", or "limited budget/time".
Computes cost of equity (CAPM), cost of debt (synthetic rating), and weighted average cost of capital (WACC) for any company in any currency. Handles emerging market risk premiums, bottom-up beta estimation, and multi-country operations. Use when estimating discount rates, computing WACC, determining hurdle rates, analyzing cost of equity or debt, or when user mentions cost of capital, WACC, beta, equity risk premium, country risk premium, or discount rate.
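The three computations named above compose as follows — a minimal sketch of CAPM, the synthetic-rating cost of debt, and their value-weighted blend. Parameter names are illustrative; the skill's actual treatment of bottom-up betas and multi-country premiums is richer than this.

```python
def cost_of_equity(risk_free, beta, erp, country_risk_premium=0.0):
    # CAPM, with an additive country risk premium for emerging markets
    return risk_free + beta * erp + country_risk_premium

def after_tax_cost_of_debt(risk_free, default_spread, tax_rate):
    # Synthetic-rating approach: pre-tax cost = risk-free rate + default spread
    return (risk_free + default_spread) * (1 - tax_rate)

def wacc(ke, kd_after_tax, equity_value, debt_value):
    # Weight each cost by its share of total capital
    total = equity_value + debt_value
    return ke * (equity_value / total) + kd_after_tax * (debt_value / total)
```

For example, a 4% risk-free rate, beta of 1.2, and 5% equity risk premium give a 10% cost of equity; blended 80/20 with a 4.5% after-tax cost of debt, the WACC lands at 8.9%.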
Writes a 60-140 word third-person blurb for the Substack cross-post feature, positioned so another newsletter writer can paste it into their cross-post popup without editing. Third-person throughout ("In this piece, Kushal argues…"). No subscriber CTAs. Use as the cross-post arm of the Distribution Translator. Trigger keywords: cross-post, cross-poster, blurb, Substack cross-post, third person, positioning.
For each Trend Scout candidate item, checks substacker shared-context/topic-ledger.md and tags with NEW | OVERLAPS seed:{slug} | OVERLAPS draft:{slug} | OVERLAPS published:{slug}. Adds a reinforcement_angle note for items overlapping with published posts ("external confirmation of X"). Read-only against the ledger. Use after summarize-signal, before rank-by-user-fit. Trigger keywords: cross-ref, ledger check, overlap, reinforcement, dedup external.
Finds and cites a primary source for each substacker draft claim — original arXiv paper, official documentation, RFC, canonical textbook. Source hierarchy enforced: primary > secondary > tertiary > not-a-source. Records URL, title, passage/result that settles the claim. Use after classify-claim; runs once per claim unless classification is simplified-correct with high confidence on standard undergrad material. Trigger keywords: cross-reference, primary source, citation, arXiv, RFC, paper lookup, source hierarchy.
Guides creation of custom, interactive data visualizations with D3.js — bar/line/scatter charts, network diagrams, geographic maps, hierarchies, and real-time data updates with zoom/pan/brush interactions and animated transitions. Use when chart libraries (Highcharts, Chart.js) lack the customization needed and you require low-level control over data-driven DOM manipulation, scales, shapes, and layouts. Invoke when user mentions D3, d3.js, custom visualization, force-directed graph, or data-driven SVG.
Creates rigorous, validated models of entities, relationships, and constraints for database schemas (SQL, NoSQL, graph), knowledge graphs, ontologies, API data models, and taxonomies. Covers relational, document, graph, event/time-series, and dimensional schema patterns with lifecycle modeling, soft deletes, polymorphic associations, and hierarchies. Use when user mentions "schema", "data model", "entities", "relationships", "ontology", "knowledge graph", or when data structures need formalization.
Compares multiple named alternatives against weighted criteria to produce transparent, defensible choices with explicit trade-off analysis. Covers criterion identification, weighting approaches (direct allocation, pairwise comparison, stakeholder averaging), scoring calibration, sensitivity analysis, and group decision facilitation. Use when choosing between vendors/tools/strategies, balancing competing priorities (cost vs quality vs speed), or when user mentions "which option should we choose", "compare alternatives", "evaluate vendors", or "trade-offs".
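The core of a weighted-criteria comparison can be sketched in a few lines (the dict shapes here are an assumption, not the skill's interface): normalize the weights, take the weighted sum of each alternative's scores, and rank.

```python
def rank_alternatives(weights, scores):
    """weights: {criterion: weight}; scores: {alternative: {criterion: score}}.
    Returns alternatives sorted by weighted total, best first.
    Weights are normalized so only their ratios matter."""
    total_w = sum(weights.values())
    totals = {
        alt: sum((weights[c] / total_w) * s[c] for c in weights)
        for alt, s in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

A simple sensitivity check is to re-run this with each weight perturbed and see whether the top pick flips — if it does, the decision is weight-sensitive and worth a closer look.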
Breaks complex systems into atomic components, maps their relationships, and reconstructs them in optimized configurations to identify bottlenecks, critical failure points, and redesign opportunities. Use when dealing with complex systems that need simplification, identifying bottlenecks or critical failure points, redesigning architecture or processes for better performance, breaking down problems that feel overwhelming, analyzing dependencies to understand ripple effects, or when optimization requires understanding how parts interact.
Checks a candidate seed against the existing substacker corpus (seeds, drafts, published) for exact duplicates (sha256 fingerprint) and near-duplicates (title Jaccard, first-200-word Jaccard, shared topic cluster). Exact match exits as SKIPPED. Near-match links via related_seeds rather than creating a duplicate. Use after topic tagging and density scoring, before writing the seed. Trigger keywords: dedupe, duplicate, already thought, near-match, related seed, fingerprint.
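The exact-then-near matching logic reads roughly like this. Only the title-Jaccard check is shown; the skill also compares the first 200 words and shared topic clusters, and the threshold and record shapes here are assumptions.

```python
import hashlib

def fingerprint(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 1.0

def classify_seed(candidate, existing, near_threshold=0.6):
    """Returns (status, matched_slug): SKIPPED on an exact body match,
    RELATED on a near-match (to be linked via related_seeds), else NEW."""
    fp = fingerprint(candidate["body"])
    for seed in existing:
        if fp == fingerprint(seed["body"]):
            return ("SKIPPED", seed["slug"])
        if jaccard(candidate["title"], seed["title"]) >= near_threshold:
            return ("RELATED", seed["slug"])
    return ("NEW", None)
```

Linking near-matches instead of writing a second seed keeps the corpus dense without silently dropping a genuinely new angle on an old topic.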
Challenges plans, designs, and decisions from multiple adversarial perspectives to surface blind spots, hidden assumptions, and vulnerabilities before they cause real damage. Use when testing plans or decisions for blind spots, need adversarial review before launch, validating strategy against worst-case scenarios, building consensus through structured debate, identifying attack vectors or vulnerabilities, or when groupthink or confirmation bias may be hiding risks.
Derives a draft per-section voice overlay (deltas against substacker global voice-profile) once a section reaches ≥3 published posts with shared voice tells. Writes to shared-context/voices/{slug}.md. Writer reviews and commits. Overlay expresses only the DELTA from global voice — not a full rewrite. Use when a section crosses the 3-post threshold. Trigger keywords: voice overlay, section voice, overlay delta, per-section voice.
Systematically evaluates existing designs against cognitive science principles using repeatable checklists, scoring rubrics, and severity-classified fix recommendations. Use when conducting design reviews or critiques, evaluating designs for cognitive alignment, performing quality assurance before launch, diagnosing usability issues, or choosing between design alternatives with objective criteria.
Generates structured experimental designs (factorial, response surface, Taguchi) to systematically discover how multiple factors affect outcomes while minimizing experimental runs. Use when optimizing multi-factor systems with limited experimental budget, screening many variables to find the vital few, discovering interactions between parameters, mapping response surfaces for peak performance, validating robustness to noise factors, or when users mention factorial designs, A/B/n testing, parameter tuning, or process optimization.
Applies thesis-antithesis-synthesis reasoning to escape false binary choices by steelmanning opposing positions, mapping their underlying principles and tradeoffs, and synthesizing principled third-way resolutions. Use when debates are trapped in false dichotomies, polarized positions need charitable interpretation, tradeoffs are obscured by binary framing, synthesis beyond "pick one side" is needed, or when users mention steelman arguments, Hegelian dialectic, or resolving seemingly opposed principles.
Designs structured interview guides, survey instruments, and JTBD probes to learn from users while avoiding common research biases (leading questions, confirmation bias, selection bias). Use when validating product assumptions before building, discovering unmet user needs, understanding customer problems and workflows, testing concepts or positioning, researching target markets, identifying jobs-to-be-done and hiring triggers, or uncovering pain points and workarounds.
Determines how much cash a company can return to shareholders and whether to use dividends, buybacks, or retained earnings. Compares actual cash returns to FCFE capacity, identifies excess cash on the balance sheet, and recommends an optimal return policy. Use when analyzing dividend policy, evaluating share buybacks, assessing cash return capacity, or when user mentions dividend policy, buyback, share repurchase, FCFE, payout ratio, cash return, or excess cash.
Guides clinical and health science research through PICOT question formulation, evidence hierarchy assessment, bias evaluation (Cochrane RoB 2, ROBINS-I), outcome prioritization, and GRADE certainty rating. Use when formulating clinical research questions, evaluating health evidence quality, prioritizing patient-important outcomes, conducting systematic reviews or meta-analyses, creating evidence summaries for guidelines, or assessing regulatory evidence.
Designs embedding strategies that combine semantic (text-based) and structural (graph-based) information at node, edge, path, and subgraph levels for knowledge graphs. Use when selecting embedding granularity, choosing complementary semantic and structural approaches, designing fusion strategies (concatenation, attention, contrastive alignment), or when user mentions node embeddings, embedding fusion, vector representations for graphs, or combining text and graph signals.
Monitors external trends across PESTLE dimensions, detects weak signals of emerging change, develops scenario-based futures, and sets adaptive signposts for early warning. Use when scanning external trends for strategic planning, detecting early indicators of change, planning scenarios for multiple futures, setting signposts and indicators for early warning, or when user mentions environmental scanning, horizon scanning, trend analysis, scenario planning, strategic foresight, or futures thinking.
Designs neural network architectures that respect validated symmetry groups, recommending architecture families (G-CNN, steerable CNN, e3nn), layer patterns, and implementation libraries. Use when you have validated symmetry groups and need equivariant architecture design, or when user mentions equivariant layers, G-CNN, e3nn, steerable networks, or building symmetry into a model.
Decomposes complex unknowns into estimable components to produce rapid order-of-magnitude answers with bounded uncertainty. Use when making quick estimates (market sizing, resource planning, feasibility checks), bounding unknowns with upper/lower limits, sanity-checking strategic assumptions, or when user mentions Fermi estimation, back-of-envelope calculation, order of magnitude, ballpark estimate, or triangulation.
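The decomposition-with-bounds idea can be sketched as a product of per-component (low, best, high) estimates — a minimal illustration, not the skill's method; note how the interval widens with every uncertain factor you multiply in.

```python
import math

def fermi_estimate(components):
    """components: list of (low, best, high) factor estimates.
    Multiplies point estimates; bounds combine multiplicatively,
    so each uncertain component widens the final interval."""
    low = math.prod(l for l, _, _ in components)
    best = math.prod(b for _, b, _ in components)
    high = math.prod(h for _, _, h in components)
    return low, best, high
```

Two factors each uncertain by a factor of two in either direction already give a 16x spread between the low and high bounds — a useful reminder to estimate the few dominant components carefully and treat the rest coarsely.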
Guides structured identification of potential harms, benefits, and differential impacts across stakeholder groups for decisions affecting people. Covers stakeholder mapping, fairness evaluation, risk mitigation design, and monitoring. Use when decisions could affect groups differently, need to anticipate harms/benefits, assess fairness and safety, identify vulnerable populations, or when user mentions ethical review, impact assessment, differential harm, safety analysis, bias audit, or responsible AI/tech.
Designs structured scoring tools with explicit criteria, performance scales, and descriptors for consistent, transparent quality assessment. Use when you need quality criteria and scoring scales to evaluate work consistently, compare alternatives objectively, set acceptance thresholds, or reduce subjective bias, or when user mentions rubric, scoring criteria, quality standards, evaluation framework, inter-rater reliability, or grading/assessing work.
Calculates probability-weighted averages of all possible outcomes to enable rational decisions under uncertainty. Covers scenario identification, probability estimation, payoff quantification, and risk-adjusted interpretation. Use when comparing risky options (investments, product bets, strategic choices), prioritizing projects by expected return, assessing whether to take a gamble, or when user mentions expected value, EV calculation, risk-adjusted return, probability-weighted outcomes, or decision tree.
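The probability-weighted average itself is one line; the sketch below adds the sanity check that scenario probabilities sum to one, which is where most hand-built EV analyses go wrong. Names are illustrative.

```python
def expected_value(scenarios):
    """scenarios: list of (probability, payoff) pairs covering all outcomes.
    Raises if the probabilities do not sum to 1."""
    total_p = sum(p for p, _ in scenarios)
    if abs(total_p - 1.0) > 1e-6:
        raise ValueError(f"probabilities sum to {total_p}, not 1")
    return sum(p * payoff for p, payoff in scenarios)
```

A 60% chance of gaining 100 against a 40% chance of losing 50 nets out to an expected value of 40 — positive in expectation, which says nothing yet about whether the downside is survivable; that is the risk-adjusted interpretation step.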
Extracts the 5-7 point argument backbone of a published substacker essay into a structured _spine.json working artifact that downstream platform-rewrite skills consume. Pulls verbatim sentences where possible (not paraphrases). Tags each point with evidence anchor (paper, anecdote, formula, analogy), essay section, and translatability score. Use at the start of a Distribution Translator run. Trigger keywords: spine, backbone, extract claims, thread spine, argument skeleton.
Provides structured formats and techniques for running productive group sessions, from standups to multi-day workshops. Covers format selection, agenda design, participation management, decision methods, and handling difficult dynamics. Use when running meetings, workshops, brainstorms, design sprints, retrospectives, or team decision-making sessions, or when user mentions facilitation, workshop design, meeting patterns, session planning, or effective collaboration.
Uses WebFetch to pull publicly visible subscriber count and per-post public view count from substacker's Substack archive page and individual post URLs. Supplements the CSV when subscriber-count field is stale (>24h old) or when a post has public shares not yet reflected. Rate-limited to ≤10 fetches per invocation. Use when CSV subscribers-end field may have drifted or when external-share attribution needs a public signal. Trigger keywords: public stats, Substack public page, subscriber count check, post views supplement, WebFetch.
Pulls substacker's weekly Substack stats directly from the dashboard via Claude-in-Chrome browser automation. Navigates to substack.com/stats, parses the posts table and subscribers table, and produces the same typed WeekExport object that ingest-substack-csv produces — but without requiring a manual CSV export. The writer keeps Chrome signed in to Substack; this skill opens the dashboard in a new tab, reads the rendered stats, closes the tab. Primary data path for the Growth Analyst; ingest-substack-csv is the fallback when browser automation is unavailable. Trigger keywords: fetch stats, Substack dashboard, auto stats, Chrome stats, dashboard scrape, live stats, no CSV.
Fetches the last 7 days of updates from every entry in the substacker Trend Scout watchlist — blogs, paper aggregators (arXiv, Hugging Face papers), social feeds. Returns normalized {title, url, author, published, excerpt, source_type} tuples. Use at the start of a weekly Trend Scout run. Trigger keywords: watchlist, fetch sources, weekly fetch, last 7 days, source normalization.
Reads and normalizes a company's financial statements to extract clean valuation inputs. Performs accounting adjustments including R&D capitalization, operating lease conversion to debt, stock-based compensation treatment, and one-time item normalization. Computes FCFF, FCFE, and key financial ratios. Use when preparing financials for valuation, cleaning accounting data, computing free cash flows, analyzing financial ratios, or when user mentions financial statements, FCFF, FCFE, ROIC, R&D capitalization, or operating lease adjustment.
Analyzes profitability per customer, product, or transaction to determine business model viability and scalability. Covers CAC, LTV, contribution margin, cohort analysis, and growth-readiness assessment. Use when evaluating business model viability, validating startup metrics (CAC, LTV, payback period), making pricing decisions, comparing business models, or when user mentions unit economics, CAC/LTV ratio, contribution margin, customer profitability, or break-even analysis.
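The three headline metrics compose from a handful of inputs. This is a deliberately simplified sketch — constant churn, no discounting, contribution margin proxied by gross margin — so treat it as a back-of-envelope model, not the skill's full cohort analysis.

```python
def ltv(arpu_monthly, gross_margin, monthly_churn):
    # Contribution per customer over an expected lifetime of 1/churn months
    return arpu_monthly * gross_margin / monthly_churn

def cac(sales_marketing_spend, new_customers):
    return sales_marketing_spend / new_customers

def payback_months(cac_value, arpu_monthly, gross_margin):
    # Months of contribution needed to recover acquisition cost
    return cac_value / (arpu_monthly * gross_margin)
```

With $100 monthly ARPU, 80% margin, and 2% monthly churn, LTV is $4,000; at a $100 CAC that is a 40:1 ratio with a 1.25-month payback — numbers that good usually mean the churn assumption deserves scrutiny.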
For each simplified-boundary claim, drafts a one-paragraph suggestion for how to acknowledge the boundary inside the post — usually a single sentence or "but" clause — so the break becomes a teaching moment rather than hidden fragility. Runs only on claims classified as simplified-boundary; skipped for all other classifications. Use after cross-reference-claim. Trigger keywords: boundary break, fold break into post, feature not flaw, simplified-boundary.
Combines Pareto prioritization (80/20), timeboxing, and deep work techniques to manage attention, eliminate context-switching, and maximize high-impact output. Use when managing time and attention, combating procrastination, prioritizing high-impact work, planning daily/weekly schedules, or when user mentions timeboxing, Pomodoro, deep work, 80/20 rule, Pareto principle, focus blocks, task batching, or energy management.
Stress-tests predictions by assuming failure and working backward to identify blind spots, tail risks, and overconfidence. Applies Gary Klein's premortem technique to probabilistic forecasting. Use when confidence is high (>80% or <20%), need to identify tail risks and unknown unknowns, want to widen overconfident intervals, or when user mentions premortem, backcasting, what could go wrong, stress test, or black swans.
Generates exactly 5 distinct intuitive framings for a given technical topic — one everyday analogy, one physical metaphor, one contrarian take, one historical angle, one counterfactual. Each framing is a short scaffold (not prose), paired with its archetype and a one-line framing statement. Use when the writer invokes the Intuition Builder agent, as the core generation step before mapping, stress-testing, novelty checking, and voice fitness. Trigger keywords: generate framings, analogies for, give me 5, intuitive angles, framing set.
For each technical term in a substacker draft's claims, checks shared-context/glossary.md for a writer-specific definition and compares to field-standard. Emits a note for terms where the two diverge (aligned / diverged-safe / diverged-risky). Feeds into classify-claim (which definition governs) and write-review-artifact's Glossary Alignment section. Use on every claim. Trigger keywords: glossary alignment, term definition, writer vs field, load-bearing term.
Drafts a proposed diff to substacker shared-context/goals.md showing which lines to add, remove, or change based on the quarter's review. Never writes to goals.md directly — writer applies manually. Used once per Growth Strategist review. Trigger keywords: goal reset, goals diff, update goals, goals proposal, rework goals.
Guides creation and review of competitive grant proposals (NIH R01/R21/K, NSF, foundations) by applying reviewer-perspective thinking to ensure clear hypotheses, compelling significance, genuine innovation, and feasible approaches. Use when writing or reviewing grant proposals, crafting specific aims, drafting significance/innovation/approach sections, or when user mentions R01, R21, K-series, grant writing, proposal review, study section, or fundable hypothesis.
Evaluates GraphRAG systems across knowledge graph completeness, retrieval relevance, answer correctness, reasoning depth, and hallucination prevention. Provides structured evaluation frameworks, metric selection guidance, and testing protocols. Use when evaluating GraphRAG quality, benchmarking multi-step reasoning, measuring hallucination reduction, or when user mentions evaluate GraphRAG, quality metrics, answer correctness, test my GraphRAG, or measure RAG performance.
Designs complete GraphRAG systems integrating graph databases, vector stores, orchestration frameworks, and LLM reasoning. Guides through pattern selection, technology stack decisions, integration pipeline design, and domain-specific customizations. Use when designing GraphRAG systems, choosing technology stacks for graph-augmented retrieval, combining Neo4j with LLM, using LangChain/LlamaIndex knowledge graphs, applying community detection for RAG, building hybrid symbol-vector pipelines, or deploying production or domain-specific GraphRAG.
Classifies every hedge in a substacker draft as either a precision hedge (keep — "n=1 may not replicate", "I do not know") or an epistemic-weakness hedge (flag — "I think", "perhaps", "arguably", "it could be argued"). Only flags weakness hedges; suggests either a commit (remove hedge, take position) or a specific hedge (name the uncertainty). Use when a draft feels wishy-washy or when a cluster of modal verbs appears. Trigger keywords: hedging, I think, perhaps, arguably, uncertainty, weak claim, wishy-washy.
Provides practical frameworks for fast decision-making through mental shortcuts (heuristics) and systematic error prevention through structured checklists. Guides through designing effective heuristics, creating checklists for complex procedures, and recognizing when shortcuts lead to biases. Use when making decisions under time pressure or uncertainty, preventing errors in complex procedures, designing decision rules or checklists, simplifying complex choices, or when user mentions heuristics, rules of thumb, mental models, checklists, error prevention, cognitive biases, satisficing, or standard operating procedures.
Generates 3-5 candidate first-line hooks for a specific platform (Substack Note, X, LinkedIn, cross-post) from a given spine. Uses platform-appropriate hook patterns (confession / claim / question / reframe) and voice-profile constraints. Runs before each platform rewrite so the rewrite skill picks the strongest hook rather than reusing the essay's opener verbatim on every platform. Trigger keywords: hook, opening line, platform hook, first tweet, LinkedIn hook, Substack Note hook.
Generates a single self-contained static HTML dashboard for a household finance JSON store, with embedded D3.js, inlined data, and six standard panels (net worth over time, monthly cash flow, recurring & subscriptions, goals progress, asset allocation, vigilance feed). Applies cognitive-design principles for visual hierarchy and visual-storytelling-design for narrative annotations. Use when emitting a weekly or monthly finance dashboard, or when user mentions weekly dashboard, household dashboard, finance HTML report, or static dashboard.
Maintains a ledger of HSA-qualified medical expenses paid out of pocket (not reimbursed from the HSA), each with date, amount, provider, description, and a path to the scanned receipt. Tracks the running unreimbursed total — the amount of HSA balance the household can pull tax-free at any future date — and validates each candidate against IRS-qualified-expense categories. Use when a new medical receipt arrives, when totaling future tax-free HSA reimbursements, when planning a deferred reimbursement, or when user mentions HSA receipt vault, qualified medical expenses, or HSA shoebox strategy.
Applies "what if" thinking to explore alternative scenarios, test assumptions, understand causal relationships, and prepare for uncertainty. Guides through counterfactual reasoning, scenario exploration, pre-mortem analysis, and stress testing decisions against alternative futures. Use when exploring alternative scenarios, testing assumptions through "what if" questions, understanding causal relationships, conducting pre-mortem analysis, stress testing decisions, or when user mentions counterfactuals, hypothetical scenarios, thought experiments, alternative futures, what-if analysis, or needs to challenge assumptions.
Names what the substacker writer should stop doing — habits with no evidence of use, goals that became theatre, sections with 2 consecutive dormant quarters, and agents in the team whose output the writer ignores. Produces a bulleted list, each item with one sentence of why. Max 4 items. Ordered by ease (easiest first). Used once per Growth Strategist review. Trigger keywords: kill list, stop doing, what to cut, dead habits, dormant, ignored output.
Organizes, structures, and labels content so users can find and manage information effectively. Guides through content audits, card sorting, taxonomy design, navigation structure, and tree testing validation. Use when organizing content for digital products, designing navigation systems, restructuring information hierarchies, improving findability, creating taxonomies or metadata schemas, or when users mention information architecture, IA, sitemap, navigation design, content structure, card sorting, tree testing, taxonomy, or findability.
Ingests a single file from the substacker inbox/ into corpus/seeds/ as a normalized markdown seed with full frontmatter. Orchestrates format normalization, topic tagging, intuition-density scoring, dedupe, changelog, ledger update, and inbox-file move to .processed/. Use when the user drops raw material into inbox/ and runs /ingest, at session start, or whenever a single inbox file needs to become an indexed seed. Trigger keywords: ingest, inbox, new note, new transcript, new highlight, index this, add to corpus.
Loads and validates a weekly Substack CSV stats export for the substacker Growth Analyst. Reconciles header against expected-columns schema, parses post rows + subscriber aggregates, moves file into corpus/stats/ on success, emits schema-warning stub on header drift. Never reads subscriber emails row-by-row — aggregates only. FALLBACK path when fetch-substack-stats (Chrome automation) is unavailable. Use when a CSV appears in inbox/substack-stats/, when Chrome is not logged in, or when the writer prefers manual export. Trigger keywords: CSV, Substack export, stats export, schema validation, subscriber data, manual export, CSV fallback.
Performs discounted cash flow valuation using the appropriate model variant (DDM, FCFE, or FCFF) with configurable growth stages. Produces year-by-year cash flow projections, terminal value, equity bridge (subtract debt, add cash, subtract option value), per-share intrinsic value, and sensitivity analysis. Use when valuing a company intrinsically, building a DCF model, estimating fair value, or when user mentions DCF, discounted cash flow, intrinsic value, terminal value, or free cash flow valuation.
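The projection-plus-terminal-value-plus-equity-bridge pipeline can be sketched as below. This is a minimal FCFF variant with a single growth stage and a Gordon terminal value; the skill also covers DDM and FCFE and multi-stage growth, and all parameter names here are assumptions.

```python
def dcf_equity_value(fcff, wacc, terminal_growth, debt,
                     cash=0.0, option_value=0.0, shares=1.0):
    """fcff: projected free cash flows to the firm for years 1..n.
    Returns intrinsic value per share."""
    # Present value of the explicit forecast years
    pv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcff, start=1))
    # Gordon growth terminal value, discounted back from year n
    terminal = fcff[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv += terminal / (1 + wacc) ** len(fcff)
    # Equity bridge: subtract debt, add cash, subtract option value
    equity = pv - debt + cash - option_value
    return equity / shares
```

A one-year forecast of 100 at a 10% WACC and zero terminal growth gives a firm value of exactly 1,000; after subtracting 200 of debt across 10 shares, intrinsic value is 80 per share. Sensitivity analysis is then just re-running this over a grid of WACC and growth assumptions.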
Establishes pre-defined, objective conditions for stopping projects and specific decision points for continue/pivot/kill evaluations. Guides through defining kill criteria, setting go/no-go gates, avoiding sunk cost fallacy, and executing disciplined stopping decisions. Use when defining stopping rules for projects, avoiding sunk cost fallacy, setting objective exit criteria, deciding whether to continue/pivot/kill initiatives, or when users mention kill criteria, exit ramps, stopping rules, go/no-go decisions, project termination, or sunk costs.
Designs and builds knowledge graphs from unstructured or semi-structured data sources. Guides through data model selection (LPG, RDF, hypergraph, temporal), schema design, entity/relation extraction pipelines, and layered architecture construction. Use when designing knowledge graphs, choosing between LPG vs RDF, planning entity extraction, designing graph schemas, aligning ontologies, building a KG for RAG, or when user mentions knowledge graph construction.
Structures thinking across multiple abstraction levels (30,000 ft strategic, 3,000 ft tactical, 300 ft operational) while maintaining consistency between layers. Guides through top-down decomposition, bottom-up aggregation, cross-layer translation, and constraint propagation. Use when reasoning across multiple abstraction levels, designing systems with hierarchical layers, explaining concepts at different depths, maintaining consistency between principles and implementation, or when users mention 30,000-foot view, layered thinking, abstraction levels, top-down design, or strategy-to-execution alignment.
Rewrites a published substacker essay as a LinkedIn post with a hook fitting the 210-char fold, practitioner framing (less confessional than Substack), short 2-3 line paragraphs, and 0-2 niche hashtags. 900-2500 characters. Emits linkedin-post.md. Use as the LinkedIn-native arm of the Distribution Translator. Trigger keywords: LinkedIn post, LinkedIn rewrite, practitioner, professional network, niche hashtags.
Produces an explicit component-by-component mapping from the analogy's source domain to the target technical concept. Rejects vague analogies by forcing each source element to map to a specific target element, and flags unmapped elements as voice-breaking ("it's like a brain" is rejected because "brain" is unmapped). Use after generate-analogy-set, for each of the 5 framings. Trigger keywords: map, component mapping, source target, explicit mapping, what does the X correspond to.
Creates visual maps that make implicit relationships, dependencies, and structures explicit through diagrams, concept maps, and architectural blueprints. Guides through identifying nodes and relationships, choosing visualization approaches, and validating completeness. Use when complex systems need visual documentation, mapping component relationships and dependencies, creating hierarchies or taxonomies, documenting process flows or decision trees, understanding system architectures, visualizing data lineage or knowledge structures, or when user mentions concept maps, system diagrams, dependency mapping, relationship visualization, or architecture blueprints.
Translates beliefs (probabilities) into optimal actions (bet/pass/hedge) using quantitative frameworks including edge calculation, Kelly Criterion bet sizing, forecast extremizing, and Brier score optimization. Use when converting probabilities into decisions, calculating edge against market odds, sizing bets optimally, extremizing aggregated forecasts, improving Brier scores, or when user mentions betting strategy, Kelly Criterion, edge calculation, Brier score, extremizing, or translating belief into action.
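The Kelly sizing named here reduces to a one-liner. A sketch assuming decimal odds and a binary outcome (full Kelly, with no fractional scaling applied):

```python
def kelly_fraction(p_win, decimal_odds):
    """Full-Kelly stake as a fraction of bankroll: f* = (b*p - q) / b,
    where b is the net odds (decimal odds minus 1) and q = 1 - p."""
    b = decimal_odds - 1
    edge = b * p_win - (1 - p_win)
    return max(edge / b, 0.0)  # a negative edge means pass, not bet
```

At p = 0.6 against even money (decimal odds 2.0) this stakes 20% of bankroll; in practice a fraction of full Kelly (e.g. half-Kelly) is usually preferred to reduce variance.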
Computes P(we win at least K of N categories) for a head-to-head categorical matchup via Monte Carlo simulation or Poisson-binomial approximation. Domain-neutral — works for any fantasy sport with H2H Categories scoring (MLB, NBA, NHL) or any zero-sum per-category competition. Use when you need matchup_win_probability, per_cat_win_probability, expected_cats_won, or variance_estimate; or when user mentions "matchup win probability", "head to head simulation", "Monte Carlo matchup", "Poisson binomial matchup", "P win 6 of 10", "category matchup simulation", or "weekly win probability".
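The Poisson-binomial path can be computed exactly with a small dynamic program. A sketch assuming independent per-category win probabilities (the skill's Monte Carlo path would be the fallback when categories are correlated):

```python
def p_win_at_least(per_cat_win_prob, k):
    """Exact P(win >= k of N categories) via the Poisson-binomial recurrence."""
    dist = [1.0]  # dist[j] = P(exactly j categories won so far)
    for p in per_cat_win_prob:
        nxt = [0.0] * (len(dist) + 1)
        for j, pj in enumerate(dist):
            nxt[j] += pj * (1 - p)   # lose this category
            nxt[j + 1] += pj * p     # win this category
        dist = nxt
    return sum(dist[k:])
```

Ten categories at 0.5 each give P(win at least 6) = 386/1024, about 0.377, matching the binomial tail as a sanity check.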
Creates evidence-based learning plans that maximize long-term retention through spaced repetition, retrieval practice, interleaving, and elaboration. Guides through goal definition, material breakdown, review scheduling, and progress tracking. Use when long-term knowledge retention is needed, studying for exams or certifications, learning new job skills or technology, mastering substantial material, combating forgetting, or when user mentions studying, memorizing, learning plans, spaced repetition, flashcards, active recall, or durable learning.
Transforms vague or unreliable prompts into structured, constraint-aware prompts with explicit roles, task decomposition, output formats, and quality checks. Use when prompts produce inconsistent outputs, need explicit structure and constraints, require safety guardrails, involve multi-step reasoning that needs decomposition, need domain expertise encoding, or when user mentions improving prompts, prompt templates, structured prompts, prompt optimization, reliable AI outputs, or prompt patterns.
Decomposes high-level North Star metrics into actionable sub-metrics and leading indicators, maps causal relationships between metric levels, and identifies high-impact experiments to move key metrics. Use when setting product North Star metrics, decomposing business metrics into drivers, mapping strategy to measurable outcomes, identifying which metrics to move through experimentation, understanding leading vs lagging indicators, prioritizing metric improvement opportunities, or when user mentions metric tree, metric decomposition, North Star metric, KPI breakdown, metric drivers, or how metrics connect.
Converts baseball and fantasy-baseball jargon into plain English for a user with zero baseball knowledge. Wraps every user-facing sentence produced by the MLB agent team (morning briefs, trade recommendations, waiver calls, chat summaries). Detects jargon terms, attaches an inline parenthetical plain-English gloss on first mention in a document, enforces the action-verb ladder (START / SIT / ADD / DROP / BID $X / ACCEPT / COUNTER / REJECT), and rejects assumed-knowledge phrases like "hot streak" or "positive matchup." Use when asked to translate for beginner, explain in plain English, translate this, write without jargon, make it beginner-friendly, or produce any user-facing MLB output for K L D'Souza's Fantasy Baseball 2K25 team.
Computes the weekly category state for a Yahoo H2H Categories matchup across all 10 scoring categories (R, HR, RBI, SB, OBP, K, ERA, WHIP, QS, SV). Pulls current totals from Yahoo, builds rest-of-week per-cat projections from roster + schedule, then DELEGATES matchup/per-cat win-probability math to `matchup-win-probability-sim`. Consumes the sim's `per_cat_win_probability` and `matchup_win_probability` to derive cat_position, cat_pressure, cat_reachability, and cat_punt_score, and emits a "push 6, punt N" plan that drives waiver, streaming, and lineup decisions. Use when user asks about "category state", "where am I winning", "should I punt", "matchup score", "cat pressure", weekly category planning, or which cats to push vs. concede.
Tracks the closer role and bullpen pecking order across all 30 MLB teams — who owns the ninth-inning job today, who is next in line if the current closer falters (the handcuff), and who carries DFA or demotion risk. Emits a per-reliever `save_role_certainty` signal (0-100) and flags speculation-worthy handcuffs for waiver bids. Use when the user mentions "closer", "save role", "handcuff", "ninth inning", "bullpen depth", lost save, blown save, committee, or when the waiver analyst needs to decide whether to spend FAAB on a backup reliever. This league uses SV as one of its five pitcher categories, but SV is also the most volatile and most punt-worthy cat, so tracking should always be paired with a punt-the-cat fallback recommendation.
Appends structured decision entries to the yahoo-mlb decision log (tracker/decisions-log.md) on behalf of any agent in the MLB team. Validates entries against the authoritative schema, serializes concurrent writes from parallel agents, and runs the Monday calibration pass to fill in outcomes and update the variant scoreboard. Use when any MLB agent needs to record a decision, when the coach requests "log decision", "append to decision log", "record variant outcome", or runs the "calibration pass".
Computes FAAB (Free Agent Acquisition Budget) recommended and maximum bids for Yahoo fantasy baseball waiver targets. Implements the baseball-specific layering of the faab-bid-framework (positional_need_fit, role_certainty, urgency, season_pace, league-inflation calibration) and DELEGATES the game-theoretic primitives -- first-price shading and winner's-curse haircut -- to the sibling skills `auction-first-price-shading` and `auction-winners-curse-haircut`. Produces a recommended bid, a hard ceiling, a rationale with the full delegation chain, and guardrail flags. Use when the user asks "how much should I bid on X", mentions FAAB bid, waiver bid amount, blind bid, Yahoo waiver claim sizing, or when mlb-waiver-analyst needs a bid amount for an identified target.
Parses Yahoo Fantasy Baseball league state (roster, standings, current matchup, FAAB remaining, free agents) from authenticated Yahoo team pages via Claude-in-Chrome browser automation, then grounds it against league-config.md and team-profile.md to emit a normalized league-state bundle every other agent can consume without re-scraping. Use when the coach or any downstream agent needs to read Yahoo roster, refresh team profile, pull league state, get current matchup, check FAAB remaining, list free agents, or when user mentions "what's on my roster", "who am I playing this week", "how much FAAB do I have left", or "refresh my team".
Analyzes a single MLB game from a fantasy perspective given home team, away team, and date. Emits structured matchup signals -- opp_sp_quality, park_hitter_factor, park_pitcher_factor, weather_risk, bullpen_state -- and a short narrative of platoon implications (handedness matchup for hitters). Use when preparing daily start/sit calls, evaluating a streaming pitcher's environment, sizing weather risk, or when user mentions matchup analysis, park factor, opposing pitcher, weather risk, or platoon.
Weekly refresh of per-opponent archetype + behavioral profiles for the 11 opposing teams in the user's Yahoo Fantasy Baseball league (ID 23756). Thin baseball-specific wrapper around the domain-neutral `opponent-archetype-classifier` -- provides the 10-archetype MLB taxonomy (balanced, stars_and_scrubs, punt_sv, punt_sb, punt_wins_qs, hitter_heavy, pitcher_heavy, inactive, frustrated_active, unknown), extracts MLB features from Yahoo pages (draft distribution, FAAB spend, waiver pattern, roster composition, lineup consistency, trade activity, recent record, activity recency), invokes the classifier, and writes/updates `context/opponents/<team-slug>.md` files per `opponent-profile-schema.md`. Read-modify-write preserves manual notes. Emits a weekly summary signal at `signals/wkNN-opponent-profiles.md`. Use when user says "opponent profiling", "classify opposing manager", "update opponent profiles", "refresh opponents", "weekly opponent scout", or "MLB fantasy opponent archetype".
Deep-dive analysis of a single MLB player (hitter or pitcher) for the Yahoo Fantasy Baseball 2K25 league. Web-searches FanGraphs (ATC projections), Baseball Savant (xwOBA/xBA/xERA), MLB.com (lineups, probables), RotoWire (weather, injuries), and RotoBaller (closer depth) to produce the full set of structured player signals defined in the signal framework. Emits form_score, matchup_score, opportunity_score, daily_quality, regression_index, obp_contribution, sb_opportunity, role_certainty for hitters and qs_probability, k_ceiling, era_whip_risk, streamability_score, two_start_bonus, save_role_certainty for pitchers. Use when you need to analyze player, compute daily_quality, compute regression index, produce player signals, run a hitter analysis, run a pitcher analysis, or prep start/sit inputs for the lineup optimizer.
Counts MLB games per team during the Yahoo fantasy playoff window (weeks 21, 22, 23 -- Aug 17 through Sep 6, 2026) and grades the quality of each team's opponents. Emits three signals per rostered player -- playoff_games (int, max ~21), playoff_matchup_quality (0-100), holding_value (0-100) -- that drive trade-deadline and playoff-lineup decisions. Use when the user mentions playoff weeks, weeks 21-23, playoff schedule, game count, holding value, or asks whether to keep/trade a player for the playoff run. Pre-July 1 this skill returns "insufficient signal -- too early"; from July 1 onward it fires weekly.
Identifies fantasy baseball players whose surface stats (wOBA, ERA, batting average) are diverging from their underlying Statcast quality (xwOBA, FIP, xBA) — emits a `regression_index` from -100 (very lucky, sell high) to +100 (very unlucky, buy low). Primary signal for buy-low/sell-high decisions on trades and waivers. Use when user mentions "buy low", "sell high", "regression candidate", "lucky", "unlucky", "xwOBA gap", "ERA-FIP gap", "BABIP", "due for regression", or is deciding whether to trade for / trade away a player based on over- or under-performance.
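One plausible shape for such an index, for a hitter stat where higher is better: scale the expected-minus-actual gap and clip. The 0.040 normalizer below is an assumption for illustration, not the skill's actual calibration, and for ERA-style stats where lower is better the sign flips.

```python
def regression_index(actual, expected, scale=0.040):
    """Map the expected-minus-actual gap onto -100 (lucky, sell high)
    through +100 (unlucky, buy low), clipped at the extremes."""
    raw = 100 * (expected - actual) / scale
    return max(-100.0, min(100.0, raw))
```

A .300 wOBA against a .330 xwOBA scores +75 under this scaling, a buy-low flag.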
Validates and persists signal files to the yahoo-mlb signals directory. Every MLB skill calls this skill before writing a signal. Enforces the signal-framework.md schema -- required YAML frontmatter fields (type, date, emitted_by, confidence, source_urls), range-checks numeric signals (0-100 unipolar, -100 to +100 bipolar), verifies variant_synthesis metadata, and enforces file naming. On validation failure, does not persist and routes a failure entry to mlb-decision-logger. Use when an agent or skill needs to emit a signal, validate a signal file, write to signals/YYYY-MM-DD-<type>.md, or check signal frontmatter. Triggers: "emit signal", "validate signal", "write signal file", "signal frontmatter".
Computes the full impact of a proposed MLB fantasy trade across all 10 H2H categories (R/HR/RBI/SB/OBP, K/ERA/WHIP/QS/SV), rest-of-season dollar value, positional flexibility, slot-value optionality, adverse-selection prior, and weeks 21-23 playoff impact. Produces a signed verdict (accept / counter / reject) with rationale and a specific counter if applicable. Use when user mentions "trade evaluation", "trade value", "should I accept", "trade delta", "counter offer", or pastes in a trade proposal from Yahoo. Defaults to COUNTER in the middle band — pure REJECT is reserved for clearly predatory offers.
For a given fantasy week (Monday-Sunday), identifies every starting pitcher scheduled to start twice, validates both probable starts, grades each matchup against the league's Quality Starts (QS) scoring rules, and ranks the list by streamability_score. Flags bullpen-game and opener risks that almost never produce a QS. Use when user mentions "two-start pitchers", "weekly streaming", "Monday-Sunday pitcher plan", "double start", "2-start SP", or preparing the weekly streaming plan on Sunday nights.
Verifies that implemented neural network models correctly respect their intended symmetries through systematic equivariance testing, layer-wise isolation, and gradient analysis. Use when testing model equivariance, debugging symmetry bugs, verifying implementation correctness, checking if a model is actually equivariant, or diagnosing why an equivariant model isn't working.
Explores solution spaces systematically through morphological analysis (parameter-option matrices) and resolves technical contradictions using TRIZ inventive principles to generate novel, non-obvious solutions. Use when exploring all feasible design alternatives before prototyping, resolving technical contradictions (speed vs precision, strength vs weight, cost vs quality), generating novel product configurations, finding inventive solutions to engineering problems, identifying patent opportunities, or when user mentions morphological analysis, Zwicky box, TRIZ, inventive principles, systematic innovation, or design space exploration.
Defines concepts, quality criteria, and boundaries by showing what they are NOT -- using anti-goals, near-miss examples, and failure patterns to create crisp decision criteria where positive definitions alone are ambiguous. Use when clarifying fuzzy boundaries, defining quality criteria, teaching by counterexample, preventing common mistakes, setting design guardrails, disambiguating similar concepts, refining requirements through anti-patterns, or when user mentions near-miss examples, anti-goals, what not to do, negative examples, counterexamples, or boundary clarification.
Creates explicit stakeholder alignment through negotiated working agreements, clear decision rights (RACI/DACI/RAPID), and conflict resolution protocols. Use when stakeholders need aligned working agreements, resolving decision authority ambiguity, navigating cross-functional conflicts, establishing governance frameworks, negotiating resource allocation, defining escalation paths, creating team norms, mediating trade-off disputes, or when user mentions stakeholder alignment, decision rights, working agreements, conflict resolution, governance model, or consensus building.
Normalizes a single inbox file of any supported format (plain markdown, Claude.ai JSON export, Claude Code JSONL session, Readwise markdown/CSV highlight, transcript with timestamps or speaker labels, link capture) into a clean markdown body plus partial frontmatter (id, title, source block, word_count). Handles format-specific failure modes — JSON content-block arrays, timestamp stripping, per-highlight chunking, URL-vs-commentary separation. Use when ingesting any inbox item for the substacker Librarian. Trigger keywords: normalize, convert, parse, transcript, export, JSON, JSONL, highlight, CSV.
Creates concise, decision-ready product specifications (one-pagers and PRDs) that align stakeholders on problem, solution, users, success metrics, and constraints. Use when proposing new features/products, documenting product requirements, creating concise specs for stakeholder alignment, pitching initiatives, scoping projects before detailed design, capturing user stories and success metrics, or when user mentions one-pager, PRD, product spec, feature proposal, product requirements, or brief.
Evaluates the first 1-3 sentences of a substacker draft against the writer's signature opener patterns — confession / "I hadn't done X" / reframe / small concrete admission. Classifies opener as confession | reframe | admission | news-hook | generic-opener and flags news-hook/generic as tier-1. The opener sets the voice contract for the essay. Use on every draft. Trigger keywords: opener, hook, first sentence, opening, confession opener, news hook, generic opener.
Classifies an opposing player, manager, or agent into one of a configurable archetype set using Bayesian inference over observed behavior (roster composition, transaction pattern, lineup moves, trade activity). Domain-neutral scaffold -- callers supply the archetype taxonomy (names, priors, characteristic feature distributions) and observed features; the skill returns a normalized posterior, MAP archetype, classification confidence, feature-contribution breakdown, and best-response hints. Use when modeling opponents, classifying player types, performing Bayesian archetype inference, producing opponent posteriors, or when user mentions opponent archetype, classify opponent, Bayesian archetype inference, player type classification, opponent modeling, or archetype posterior.
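The inference step is ordinary Bayes. A sketch assuming the caller has already collapsed observed features into a single likelihood per archetype (the archetype names and numbers in the usage note are hypothetical):

```python
def archetype_posterior(priors, likelihoods):
    """priors: {archetype: P(a)}; likelihoods: {archetype: P(features | a)}.
    Returns the normalized posterior and the MAP archetype."""
    unnorm = {a: priors[a] * likelihoods[a] for a in priors}
    z = sum(unnorm.values())
    posterior = {a: v / z for a, v in unnorm.items()}
    return posterior, max(posterior, key=posterior.get)
```

Uniform priors of 0.5/0.5 with likelihoods of 0.8/0.2 yield a 0.8 posterior on the first archetype, which becomes the MAP classification.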
Evaluates whether substacker has the four preconditions for launching a paid tier — enough subs, healthy engagement, a clear candidate section, writer capacity. Produces readiness score (not-ready / close / ready) with named gaps. Used when the "should we launch paid?" question is selected or at writer's explicit request. Trigger keywords: paid tier, paid readiness, monetization, Substack paid, launch paid, 1000 subscribers.
Checks paragraph rhythm in a substacker draft — long/short mix, one-sentence paragraph at pivots, no walls, avoid monotony. Flags drafts where >3 consecutive paragraphs share the same length bucket, where the pivot lacks a one-sentence paragraph, or where any paragraph exceeds 120 words. Use in the Editor's structural pass. Trigger keywords: rhythm, paragraph length, wall of text, one-sentence paragraph, pivot, monotone.
Parses a single financial statement PDF (checking, savings, credit card, brokerage, 401k, HSA, mortgage, tax form) and emits a normalized JSON record with institution, account mask, statement period, opening/closing balances, line-item transactions or holdings, and a confidence score. Use when extracting structured data from a bank, brokerage, retirement, or HSA PDF statement, when ingesting a drop of household finance documents, or when user mentions parsing a statement, extracting transactions from a PDF, or normalizing statement data.
Breaks down weekly and trailing-4-week substacker performance per Substack section, keyed on section tags in corpus/published/ and section-map.md. Reports opens, clicks, subs-attributable-to-section per section with ≥3 posts. Skips if section-map has <2 sections. Feeds Curator with pruning candidates (sections with 4-week median z ≤ -1.0). Use when section-map has ≥2 live sections. Trigger keywords: per-section, section performance, section metrics, section pruning, differential engagement.
Runs a voice-fidelity audit on each of the four substacker platform outputs (Substack Note, X thread, LinkedIn post, cross-post blurb) before reporting Distribution Translator completion. Checks for voice-don'ts (banned vocabulary, emoji, generic openers, marketing math without source), voice-do compliance (paper attribution preserved, hedges preserved, em-dash reframes present), platform-specific tonal shifts. Emits voice-check.md with pass/fail per artifact. Trigger keywords: platform voice check, voice-check, gate, distribution voice, slop-leak check.
Aggregates investment holdings across taxable brokerage, 401k, and HSA into a single asset-allocation view, computes drift versus a target allocation, and produces a tax-efficient rebalance proposal that prefers tax-advantaged accounts for the trades and never executes. Flags drift over 5 percentage points and free-money issues like missed 401k employer match. Use when reviewing portfolio allocation, planning a rebalance, computing drift, or when user mentions asset allocation, drift, rebalancing proposal, or 401k match.
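The drift computation itself is simple arithmetic. A sketch assuming target weights expressed in percent and holdings in dollars (names and the 5-point threshold mirror the description; everything else is illustrative):

```python
def allocation_drift(holdings, target_pct, threshold_pp=5.0):
    """Percentage-point drift of each asset class vs. target; flags any
    class whose absolute drift exceeds the threshold (5 pp by default)."""
    total = sum(holdings.values())
    drift = {k: 100 * holdings[k] / total - target_pct[k] for k in target_pct}
    flagged = sorted(k for k, d in drift.items() if abs(d) > threshold_pp)
    return drift, flagged
```

A 70k/30k stock/bond portfolio against a 60/40 target drifts +10/-10 percentage points and flags both classes for the rebalance proposal.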
Creates strategic portfolio roadmaps that size and sequence initiative bets across time horizons (H1/H2/H3), balance risk profiles (core/adjacent/transformational), and set clear exit/scale criteria for disciplined resource allocation. Use when managing multiple initiatives across time horizons, balancing risk vs return across portfolio, sizing and sequencing bets with dependencies, setting exit/scale criteria for experiments, allocating resources across innovation types, or when user mentions portfolio planning, roadmap horizons, betting framework, initiative prioritization, innovation portfolio, or resource allocation across horizons.
Conducts blameless postmortems that transform failures into learning opportunities by documenting timelines, quantifying impact, performing root cause analysis (5 Whys, fishbone diagrams), and defining corrective actions with owners and deadlines. Use when analyzing failures, outages, incidents, or negative outcomes, conducting blameless postmortems, identifying corrective actions, learning from near-misses, establishing prevention strategies, or when user mentions postmortem, incident review, failure analysis, RCA, lessons learned, or after-action review.
Transforms overwhelming backlogs into clear, actionable priorities by mapping items on a 2x2 effort-vs-impact matrix, identifying quick wins (high impact, low effort), big bets, time sinks, and fill-ins. Use when ranking backlogs, deciding what to do first, prioritizing feature roadmaps, triaging bugs or technical debt, allocating resources across initiatives, identifying low-hanging fruit, evaluating strategic options, or when user mentions prioritization, quick wins, effort-impact matrix, high-impact low-effort, big bets, or "what should we do first?".
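The quadrant assignment is a two-threshold lookup. A sketch assuming 0-10 scores with a midpoint cut (the cut value is an illustrative assumption):

```python
def quadrant(impact, effort, cut=5):
    """Place a backlog item on the 2x2 effort-vs-impact matrix."""
    if impact >= cut:
        return "quick win" if effort < cut else "big bet"
    return "fill-in" if effort < cut else "time sink"
```

High impact at low effort is the quick win; high impact at high effort is the big bet; the remaining two quadrants are fill-ins and time sinks.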
Scans the substacker published corpus for clusters of posts that could become a product — a course, a book, a cohort, or a consulting offer. Produces at most 2 candidates with evidence + audience signal, or an honest "not yet" verdict if nothing qualifies. Typically fires once the writer has 30+ posts. Trigger keywords: product hiding, course from essays, book from essays, corpus to product, product scan.
Evaluates investment projects using NPV, IRR, and return on capital analysis. Determines whether a project clears its hurdle rate (ROC > WACC), computes economic value added (EVA), and adjusts discount rates for regional or project-specific risk. Use when evaluating capital investments, analyzing project returns, comparing investment alternatives, or when user mentions NPV, IRR, hurdle rate, capital budgeting, project evaluation, EVA, or return on invested capital.
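The core arithmetic the skill wraps can be sketched directly (helper names are illustrative):

```python
def npv(rate, cashflows):
    """Net present value, with cashflows[0] at t=0 (usually the outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def eva(roc, wacc, invested_capital):
    """Economic value added: the ROC-over-WACC spread times capital employed."""
    return (roc - wacc) * invested_capital
```

For example, npv(0.10, [-100, 60, 60]) is about 4.13, so the project clears a 10% hurdle; a 12% ROC against an 8% WACC on 1000 of capital adds 40 of economic value.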
Identifies, assesses, and prioritizes project risks using probability-times-impact scoring (risk matrix), then assigns owners, mitigation plans, contingencies, and triggers to track risk evolution over the project lifecycle. Use when managing project uncertainty, building a risk register, conducting risk assessments, defining risk mitigation plans, or when user mentions risk register, risk management, probability-impact matrix, or asks "what could go wrong with this project?".
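The scoring behind the register is a product and a sort. A minimal sketch assuming 1-5 ordinal scales (the field names are assumptions):

```python
def prioritize_risks(register):
    """Rank risks by probability-times-impact exposure, highest first."""
    return sorted(register,
                  key=lambda r: r["probability"] * r["impact"],
                  reverse=True)
```

A 4x3 risk (exposure 12) outranks a 2x5 risk (exposure 10); the skill then attaches owners, mitigations, and triggers to the ranked list.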
Produces the counterfactual framing in an Intuition Builder 5-set — "what if this component were not here?" Reveals the function of a technical element by subtracting it and observing what breaks. Uses Pearl's causal ladder (counterfactual = level 3) as the theoretical spine. Use as the 5th archetype slot of generate-analogy-set, or invoked standalone when the writer wants to build intuition for why a specific element exists. Trigger keywords: counterfactual, what if not, remove, subtract, reveal function, why does this exist.
Converts one candidate cluster from cluster-corpus-by-theme into a named, promised section proposal ready for writer review. Calls write-section-promise for the one-sentence promise. Rates fit confidence (high / medium / low / provisional) and flags borderline posts. Use once per cluster that passes ≥3-post threshold. Trigger keywords: propose section, section proposal, new section candidate.
Guides validation of ideas before full development using pretotyping (fake doors, concierge MVPs, Wizard of Oz) and prototyping at appropriate fidelity (paper, clickable, coded) to test assumptions about demand, pricing, and feasibility. Use when testing ideas cheaply before building, choosing prototype fidelity, running experiments to validate assumptions, or when user mentions prototype, MVP, fake door test, concierge, Wizard of Oz, landing page test, smoke test, or asks "how can we validate this idea before building?".
Synthesizes 13 weeks of substacker Growth Analyst reports + the most recent Curator review + a meta-scan of the published corpus into a 400-700 word narrative that names the quarter's shape — what happened, what changed, what held steady, what surprised. Used by the Growth Strategist at the opening of every review. Trigger keywords: quarterly, zoomout, quarter narrative, rollup, what happened this quarter.
Scores and ranks substacker Trend Scout annotated candidates against voice-profile and goals, producing a top-10 keep list and an explicit drop list with reasons. Weighted-sum scoring across intuition-density fit, goal alignment, dedup penalty, source reliability, freshness. Produces the digest's keeps and drops sections. Use after cross-ref-topic-ledger. Trigger keywords: rank, fit score, user fit, keep list, drop list, signal weight.
Recommends structural cleanups for the substacker section map — sections to retire, sections to merge, posts to reassign. Applies under-filled, stale, and overlapping heuristics. Writes proposals with reasons-to-reject (steelman counter). Does not execute. Use once per Curator run, after drift audit. Trigger keywords: prune, retire section, merge sections, reassign post, cleanup.
Identifies recurring charges (subscriptions, monthly bills, biweekly paychecks) from a transaction history by clustering same-merchant transactions of similar amount on a regular cadence, requiring at least 3 confirming occurrences before promoting a candidate to active status. Detects new recurring charges, dormant subscriptions (missed expected dates), and amount drift, and computes annualized cost. Use when auditing subscriptions, building a recurring bills calendar, computing cash-flow forecast inputs, or when user mentions subscription audit, recurring detection, dormant subscription, or annualized cost.
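The promotion rule for a single merchant's transactions can be sketched as follows; the 15% amount tolerance and 4-day cadence tolerance are assumptions, while the 3-occurrence minimum comes from the description.

```python
from datetime import date
from statistics import median

def is_recurring(txns, amount_tol=0.15, cadence_tol_days=4, min_occurrences=3):
    """txns: date-sorted (date, amount) pairs for one merchant.
    Promote to active-recurring only with enough confirming occurrences,
    similar amounts, and a regular gap between charges."""
    if len(txns) < min_occurrences:
        return False
    amounts = [a for _, a in txns]
    med_amount = median(amounts)
    if any(abs(a - med_amount) > amount_tol * med_amount for a in amounts):
        return False
    gaps = [(txns[i + 1][0] - txns[i][0]).days for i in range(len(txns) - 1)]
    med_gap = median(gaps)
    return all(abs(g - med_gap) <= cadence_tol_days for g in gaps)
```

Three $9.99 charges roughly 30 days apart pass; two charges do not (below the confirming minimum). Annualized cost then falls out as (365 / median gap) times the median amount.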
Anchors predictions in historical reality by identifying a class of similar past events and using their statistical frequency as a baseline (outside view) before analyzing case-specific details. Use when starting a forecast, establishing base rates, testing "this time is different" claims, or when user mentions reference classes, outside view, base rates, or starting a new prediction.
Values a company relative to comparable firms using price multiples (PE, PBV, EV/EBITDA, EV/Sales). Implements the four-step framework (define, describe, analyze, apply) with both simple peer comparison and sector regression approaches. Use when valuing a company relative to peers, analyzing multiples, selecting comparable companies, or when user mentions PE ratio, EV/EBITDA, relative valuation, comparable companies, trading multiples, or price-to-book.
Systematically evaluates claims by triangulating sources, rating evidence quality (primary/secondary/tertiary), assessing source credibility, and reaching confidence-rated conclusions to prevent confirmation bias and reliance on unreliable sources. Use when verifying claims before decisions, fact-checking statements, conducting due diligence, evaluating conflicting evidence, or when user mentions "fact-check", "verify this", "is this true", "evaluate sources", "conflicting evidence", or "due diligence".
Designs retrieval strategies for querying knowledge graphs in RAG systems, covering pattern selection (global-first, local-first, U-shaped hybrid), query decomposition for multi-hop reasoning, ranking and constraint configuration, and provenance tracking for citation. Use when designing retrieval pipelines, orchestrating search over knowledge graphs, or when user mentions retrieval strategy, search orchestration, query decomposition, multi-hop reasoning, provenance tracking, or citation in GraphRAG.
Facilitates structured team reflection through retrospectives, post-mortems, after-action reviews, and weekly/quarterly reviews, producing root cause analysis and SMART action items with psychological safety. Use when conducting sprint retrospectives, project post-mortems, weekly reviews, quarterly reflections, after-action reviews (AARs), or when user mentions "retro", "retrospective", "what went well", "lessons learned", "reflection", or "how can we improve".
Plans backward from a fixed goal or deadline to the present, identifying required milestones, dependencies, critical path, and feasibility constraints to transform aspirational targets into actionable sequenced plans. Use when planning with fixed deadlines, working backward from future goals, mapping critical path, or when user mentions "backcast", "work backward from", "reverse planning", "we need to launch by", "target date is", or "what needs to happen to reach".
Analyzes decisions from multiple stakeholder perspectives (engineering, product, legal, finance, users) to uncover blind spots, surface tensions, and synthesize alignment paths with explicit tradeoffs. Use when stakeholders have conflicting priorities, need to pressure-test proposals, build cross-functional empathy, or when user mentions "what would X think", "stakeholder alignment", "see from their perspective", "blind spots", or "conflicting interests".
Reviews scientific documents for logical clarity, argument soundness, and rigor by auditing hypothesis-data alignment, claim-evidence chains, quantitative precision, hedging calibration, and terminology consistency across any document type. Use when reviewing scientific argumentation, checking claims vs evidence, auditing terminology, or when user mentions check clarity, review logic, scientific soundness, hypothesis-data alignment, or claims vs evidence.
Composes and polishes professional scientific correspondence -- emails to collaborators, journal cover letters, and responses to peer reviewers -- ensuring clear communication, appropriate tone, explicit asks, and professional formatting for academic contexts. Use when writing or polishing scientific emails, cover letters to editors, reviewer responses, or when user mentions email to collaborator, cover letter to journal, reviewer response, or professional scientific correspondence.
Guides systematic multi-pass review and editing of scientific manuscripts (research articles, reviews, perspectives) to improve clarity, structure, scientific rigor, and reader comprehension. Use when reviewing or editing research manuscripts, journal articles, or perspectives, or when user mentions manuscript, paper draft, article, research writing, journal submission, reviewer feedback, or needs to improve scientific writing.
Computes a 0-10 intuition-density score for a seed body using 8 concrete measurable signals — analogy presence, concrete worked example, counterfactual offered, reframe against default, biology-to-AI transfer, question posed, calibrated hedge, math-to-metaphor handoff. Emits both the numeric score and the list of triggered signals for auditability. Use after topic tagging to enrich seed frontmatter in the substacker Librarian pipeline. Trigger keywords: intuition density, score, signals, analogy count, worked example, density proxy.
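A minimal sketch of what a signal-based density scorer could look like. The regex heuristics and the five signals shown are illustrative assumptions (the skill checks eight signals; real detection would be more sophisticated than keyword matching):

```python
import re

# Hypothetical keyword heuristics for a subset of the density signals.
SIGNALS = {
    "analogy_presence": r"\b(like a|as if|analogous to)\b",
    "worked_example": r"\b(for example|suppose|say we)\b",
    "counterfactual": r"\b(what if|had we|if instead)\b",
    "question_posed": r"\?",
    "calibrated_hedge": r"\b(probably|roughly|i suspect|my guess)\b",
}

def density_score(body: str) -> tuple[float, list[str]]:
    """Return a 0-10 score plus the list of triggered signals for auditability."""
    triggered = [name for name, pat in SIGNALS.items()
                 if re.search(pat, body, re.IGNORECASE)]
    # Scale the hit count to a 0-10 range over the signals checked.
    score = round(10 * len(triggered) / len(SIGNALS), 1)
    return score, triggered

score, hits = density_score(
    "Suppose attention works like a spotlight. What if it dims?")
# Four of five signals trigger here, giving a score of 8.0.
```

Emitting the triggered-signal list alongside the number is what makes the score auditable downstream.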
Detects and removes cognitive biases from reasoning using Julia Galef's Scout Mindset framework. Provides reversal tests, scope sensitivity checks, status quo bias tests, confidence interval audits, and full bias audits. Use when a prediction feels emotional, stuck at 50/50, or when validating forecasting process. Use when user mentions scout mindset, soldier mindset, bias check, reversal test, scope sensitivity, or cognitive distortions.
Answers "what have I already thought about X?" by searching the substacker corpus (seeds, drafts, published) for seeds matching a topic, keyword, analogy, or author. Returns a ranked list of seeds with id, title, status, density score, and a one-line excerpt. Use when another agent (Intuition Builder, Editor) needs prior thinking before generating new material, or when the writer asks "have I written about X." Trigger keywords: search, find, what have I, already thought, prior work, precedent, have I written about.
Verifies the section-break style in a substacker draft matches the post register — asterisks (* * *) for essayistic posts under 2500 words, H2 for methodology / how-to / technical posts. Flags mixed registers (H2 in a reflective essay, asterisks in a structured how-to). Per the style-guide rhythm rule. Use on every draft. Trigger keywords: section break, asterisk, H2, headers, register, essayistic vs methodology.
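The register rule above reduces to a simple check. A minimal sketch (the regexes and word-count threshold logic are assumptions about how the skill might implement it):

```python
import re

def check_section_breaks(draft: str, word_count: int,
                         is_methodology: bool) -> list[str]:
    """Flag section-break markers that clash with the post's register.

    Rule: asterisk breaks (* * *) for essayistic posts under 2500 words,
    H2 headers for methodology / how-to / technical posts.
    """
    has_asterisks = bool(re.search(r"^\*\s*\*\s*\*\s*$", draft, re.MULTILINE))
    has_h2 = bool(re.search(r"^## ", draft, re.MULTILINE))
    flags = []
    if has_asterisks and has_h2:
        flags.append("mixed registers: both asterisk breaks and H2 headers")
    if is_methodology and has_asterisks:
        flags.append("asterisk breaks in a structured how-to")
    if not is_methodology and word_count < 2500 and has_h2:
        flags.append("H2 headers in a reflective essay")
    return flags

flags = check_section_breaks("intro\n* * *\nmore\n## Steps\nbody",
                             word_count=1200, is_methodology=False)
```

A clean methodology draft using only H2 headers would come back with no flags.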
Classifies each substacker section as healthy / drifting / candidate-for-prune based on post volume, engagement trend, and niche alignment. Produces a table plus a 2-4 paragraph narrative. Used in every quarterly review. Trigger keywords: portfolio, section health, healthy drifting prune, section assessment, which section is carrying.
Systematically identifies vulnerabilities, threats, and mitigations for systems handling sensitive data using STRIDE methodology, trust boundary mapping, and defense-in-depth principles. Use when designing or reviewing systems with PII/PHI/financial/auth data, building security-sensitive features (auth, payments, file uploads, APIs), preparing for audits or compliance (PCI, HIPAA, SOC 2), investigating incidents, or integrating third-party services. Use when user mentions threat model, STRIDE, trust boundaries, attack surface, or security review.
Transforms documents containing theoretical knowledge or frameworks (PDFs, markdown, book notes, research papers, methodology guides) into actionable, reusable Claude Code skills using systematic reading methodology. Use when user mentions "create a skill from this document", "turn this into a skill", "extract a skill from this file", or when analyzing documents with methodologies, frameworks, or processes that could be made actionable.
Scans a substacker draft for 10 signatures of AI-generated explainer slop, including meta-framing openers ("In this post"), list-heavy argument, nominalization clusters, generic examples lacking first-person texture, prompt-residue phrases ("Let's break this down"), buzzword stuffing, outline-shaped paragraphs, hedge clusters, and flattened uncertainty. Use when a draft "feels generic" even after voice-check passes. Trigger keywords: slop, AI-written, generic, template, meta-framing, zombie nouns, prompt residue, outline-shaped.
Guides learners to discover knowledge through strategic Socratic questioning and progressive scaffolding removal. Combines question ladders, misconception detectors, Feynman explanations, and worked-example fading to build durable understanding. Use when teaching complex concepts, correcting misconceptions, onboarding team members, mentoring problem-solving, or designing self-paced learning. Use when user mentions "teach me", "help me understand", "explain like I'm", "learning path", "guided discovery", or "Socratic method".
Adapts the standard DCF framework for companies that break normal valuation assumptions. Handles four sub-frameworks: high-growth firms with negative earnings (revenue-based approach with failure probability), distressed firms (equity-as-call-option via Black-Scholes), private companies (total beta and liquidity discount), and financial services firms (excess return model on book equity). Use when valuing unprofitable startups, distressed companies, private firms, banks, insurance companies, or when user mentions negative earnings valuation, distress valuation, private company discount, equity as call option, total beta, liquidity discount, excess return model, or financial services valuation.
Provides frameworks for mapping stakeholder influence networks, designing team structures aligned with system architecture (Conway's Law), defining team interface contracts (APIs, SLAs, decision rights), and assessing capability maturity (DORA, CMMI, agile models). Use when designing org structure or team topologies, mapping stakeholders for change initiatives, defining team interfaces, assessing capability maturity, planning restructures, or when user mentions org design, team structure, stakeholder map, Conway's Law, or RACI.
Verifies that an extracted statement is internally consistent by checking that opening_balance + sum(transactions) = closing_balance within a small tolerance, and produces a reconciliation report flagging missing rows, double-counted rows, sign errors, and rounding diffs. Use as the gate between PDF extraction and committing transactions to the data store, when reconciling a statement that does not balance, or when user mentions reconcile, balance check, or statement does not tie out.
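The balance identity the skill checks can be sketched directly. A minimal version, assuming a default one-cent tolerance (the real skill additionally classifies the cause of any mismatch):

```python
def reconcile(opening_balance: float, transactions: list[float],
              closing_balance: float, tolerance: float = 0.01) -> dict:
    """Check opening_balance + sum(transactions) == closing_balance
    within tolerance, and report the difference if not."""
    expected = opening_balance + sum(transactions)
    diff = round(closing_balance - expected, 2)
    return {
        "balanced": abs(diff) <= tolerance,
        "expected_closing": round(expected, 2),
        # A nonzero diff suggests missing rows, double-counts, or sign errors.
        "difference": diff,
    }

report = reconcile(1000.00, [-250.00, 125.50], 875.50)
# This statement ties out: expected closing equals actual closing.
```

Gating commits on this check keeps extraction errors out of the data store; a production version would use integer cents rather than floats.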
Develops business strategies grounded in rigorous competitive and market analysis using proven frameworks (Good Strategy kernel, Porter's 5 Forces, SWOT, Blue Ocean, Playing to Win, Value Chain Analysis, BCG Matrix). Use when developing strategy (market entry, product launch, expansion, M&A, turnaround), conducting competitive analysis, making strategic decisions (build vs buy, pricing, positioning), planning strategic initiatives, or when user mentions strategy, competitive analysis, Porter's 5 Forces, SWOT, market positioning, or strategic frameworks.
Stress-tests a proposed analogy by finding the edge where the mapping breaks, then frames that break as a teaching opportunity the writer can fold into the post. Every analogy has a boundary; the writer's style treats that boundary as a feature. Use after generate-analogy-set and map-analogy-to-concept, for each framing. Trigger keywords: where does it break, stress-test, boundary, edge case, fold the break, analogy limits.
Performs pass-1 structural review of a substacker essay draft — argument flow, out-of-order moves, buried topic sentences, missing pivots, weak signposting, paragraph-logic issues. Emits the "Argument flow" and "Structural blockers" sections of the Editor artifact. Use when reviewing a draft's macro-structure before addressing voice, when a draft feels like it meanders, or when the user asks whether the argument lands. Trigger keywords: structure, argument flow, outline, signposting, meandering, pivot, macro edit, substantive edit.
Rewrites a published substacker essay as a Substack Note using the extracted spine and chosen hook. Closest voice to the essay. Bolded maxim closer. Single link line. 60-180 words. Emits substack-note.md in the post's distribution folder. Use as the Substack-native arm of the Distribution Translator. Trigger keywords: Substack Note, note rewrite, note post, tease, Notes feed.
Given a candidate item from the substacker Trend Scout fetch, WebFetches the full post or arXiv abstract and produces a one-line "teaches X" summary plus signal_type classification (mechanism / empirical / tool / opinion / announcement / benchmark). Distinguishes teaching-content from capability-announcement explicitly. Use during the weekly run, after fetching and before ranking. Trigger keywords: summarize, signal type, mechanism vs announcement, teaching content.
Identifies substacker seeds older than 30 days with status=seed and no incoming related_seeds links, flags them for writer review, and recommends keep / promote-to-draft / kill based on density score. Does NOT auto-execute any action. Emits a review list to ops/librarian/YYYY-MM-DD-stale-sweep.md. Run at session start after ingest, once per day max. Trigger keywords: stale, sweep, review, old seeds, cleanup, gardener, corpus hygiene.
Guides collaborative discovery of hidden symmetries in ML data through structured domain analysis, coordinate system examination, transformation testing, and physical constraint identification. No group theory knowledge required. Use when ML engineers need to identify symmetries in their data, when user mentions data symmetry, invariance discovery, what transformations matter, or needs help recognizing patterns their model should respect.
Maps identified symmetries to mathematical groups (cyclic, dihedral, symmetric, SO(3), SE(3), E(3)) for equivariant neural network architecture design, using taxonomy and foundations from Visual Group Theory. Use when candidate symmetries have been identified and need formalization into group theory language, or when user mentions cyclic groups, dihedral groups, Lie groups, SO(3), SE(3), or permutation groups.
Provides empirical test protocols and metrics to validate whether hypothesized symmetries actually hold in data or models before committing to equivariant architecture. Includes invariance/equivariance testing, group structure verification, and distribution analysis under transforms. Use when testing invariance, validating equivariance, checking symmetry assumptions, debugging symmetry-related model failures, or needing data-driven validation before architecture decisions.
Synthesizes information from multiple sources into coherent insights and applies analogical reasoning to transfer knowledge across domains. Use when conducting literature reviews, integrating stakeholder feedback, reconciling conflicting viewpoints, identifying cross-source patterns, creating explanatory analogies ("X is like Y"), finding creative solutions through cross-domain transfer, or testing whether analogies hold (surface vs deep). Use when user mentions "synthesize", "combine sources", "analogy", "similar to", "transfer from", "integrate findings".
Finds high-leverage intervention points in complex systems by mapping feedback loops, identifying system archetypes (fixes that fail, shifting the burden, tragedy of the commons, limits to growth), and ranking interventions by Meadows' leverage hierarchy. Use when problems involve interconnected components with feedback loops, delays, or emergent behavior; when past solutions failed or caused unintended consequences; when identifying where to push for maximum effect; or when user mentions systems thinking, leverage points, feedback loops, causal loop diagrams, stocks and flows, or complex systems.
Assigns 1-4 topic tags to a seed body from the controlled vocabulary in substacker shared-context/topic-ledger.md. Prevents tag sprawl at small-corpus scale by requiring existing-tag match or logged addition. Uses keyword + title match; logs near-miss candidates to pending-tags. Use after format normalization and before dedupe. Trigger keywords: tag, topics, categorize, classify, taxonomy, controlled vocabulary, topic ledger.
Scans a taxable brokerage account for individual lots with unrealized losses above a configurable threshold, identifies wash-sale risks by checking recent buys and forward planned buys of substantially identical securities (across all household accounts including spousal), and proposes harvest pairs (sell-for-loss + immediate buy of a similar-but-not-identical replacement). Use for year-end tax planning, monthly TLH scans, after market drawdowns, or when user mentions tax-loss harvesting, TLH, wash sale, or harvest candidates.
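The wash-sale screen described above hinges on a 61-day window: 30 days before through 30 days after the loss sale. A minimal sketch, with the caveat that deciding what counts as "substantially identical" is a judgment call this version reduces to an exact-ticker match (an assumption, not the skill's actual rule):

```python
from datetime import date, timedelta

def wash_sale_conflicts(sell_date: date, ticker: str,
                        household_trades: list[dict]) -> list[dict]:
    """Return buys of the same security, across all household accounts,
    that fall inside the 61-day wash-sale window around the loss sale."""
    window_start = sell_date - timedelta(days=30)
    window_end = sell_date + timedelta(days=30)
    return [t for t in household_trades
            if t["side"] == "buy"
            and t["ticker"] == ticker
            and window_start <= t["date"] <= window_end]

trades = [{"side": "buy", "ticker": "VTI", "date": date(2024, 11, 20)},
          {"side": "buy", "ticker": "VXUS", "date": date(2024, 11, 25)}]
conflicts = wash_sale_conflicts(date(2024, 12, 1), "VTI", trades)
# The VTI buy 11 days before the sale is a conflict; the VXUS buy is not.
```

Including planned forward buys in `household_trades` is what lets the scan catch wash sales before they happen, not just after.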
Assigns a category and subcategory to a financial transaction by matching its raw description against a configurable taxonomy and rules table, falling back to LLM inference when no rule matches. Emits a normalized merchant name, category path, recurring flag, and confidence score, and proposes new rules from confirmed classifications. Use when categorizing bank, credit-card, or brokerage transactions, building or refining a category taxonomy, or when user mentions transaction categorization, merchant normalization, expense classification, or category rules.
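The rules-first, LLM-fallback flow might look like the following sketch. The rules table, confidence values, and fallback stub are illustrative assumptions, not the skill's actual taxonomy:

```python
import re

# Illustrative rules table; a real taxonomy would be larger and configurable.
RULES = [
    (r"STARBUCKS|COFFEE", ("Food & Drink", "Coffee")),
    (r"UBER\s|LYFT", ("Transport", "Rideshare")),
    (r"NETFLIX|SPOTIFY", ("Entertainment", "Streaming")),
]

def categorize(raw_description: str) -> dict:
    """Match a raw transaction description against the rules table,
    falling back to model inference when no rule applies."""
    desc = raw_description.upper()
    for pattern, (category, subcategory) in RULES:
        if re.search(pattern, desc):
            return {"category": category, "subcategory": subcategory,
                    "confidence": 0.95, "source": "rule"}
    # No rule matched: hand off to LLM inference (stubbed here).
    return {"category": "Uncategorized", "subcategory": None,
            "confidence": 0.0, "source": "llm_fallback"}

result = categorize("STARBUCKS #1234 SEATTLE WA")
```

Confirmed fallback classifications can then be promoted into new rows of the rules table, so the cheap path handles more traffic over time.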
Detects and removes duplicate transactions across overlapping bank, credit-card, and brokerage statement imports using a stable composite key (account_id, date ±1d, amount_cents, description_normalized). Emits a list of new transactions to commit, a list of suppressed duplicates with their reasons, and a list of suspicious near-duplicates that need human review. Use when ingesting financial statements that may overlap prior drops, merging multiple export sources for the same account, or when user mentions duplicate transactions, deduping a transaction file, or reconciling overlapping statements.
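The composite key with its ±1 day tolerance can be sketched as a pairwise match (a simplification; the skill also surfaces a third bucket of near-duplicates for human review):

```python
from datetime import date

def normalize(desc: str) -> str:
    """Collapse case and whitespace so descriptions compare stably."""
    return " ".join(desc.upper().split())

def dedupe(existing: list[dict], incoming: list[dict]):
    """Split incoming transactions into new vs suppressed duplicates.
    A duplicate shares account, amount, and normalized description with
    an existing transaction, and its date is within one day of it."""
    new, suppressed = [], []
    for txn in incoming:
        is_dup = any(
            e["account_id"] == txn["account_id"]
            and e["amount_cents"] == txn["amount_cents"]
            and normalize(e["description"]) == normalize(txn["description"])
            and abs((e["date"] - txn["date"]).days) <= 1
            for e in existing
        )
        (suppressed if is_dup else new).append(txn)
    return new, suppressed

existing = [{"account_id": "A1", "amount_cents": -4250,
             "description": "Coffee Shop", "date": date(2024, 3, 1)}]
incoming = [{"account_id": "A1", "amount_cents": -4250,
             "description": "coffee  shop", "date": date(2024, 3, 2)},
            {"account_id": "A1", "amount_cents": -900,
             "description": "Bagel", "date": date(2024, 3, 2)}]
new, suppressed = dedupe(existing, incoming)
```

The date tolerance is what catches the common case where a pending transaction posts a day later in the next statement export.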
Adapts content for different audiences while preserving core accuracy, changing tone, depth, emphasis, and framing to match audience expertise and goals. Use when technical content needs business framing, strategic vision needs tactical translation, expert knowledge needs simplification, formal content needs casual tone, long-form needs summarization, internal content needs external framing, or cross-cultural adaptation is needed. Use when user mentions "explain to", "reframe for", "translate for [audience]", "adapt for [executives/engineers/customers]", or "same content, different audience".
Appends an entry to substacker shared-context/analogy-catalog.md when the writer PUBLISHES a post that uses a new analogy. Not invoked on seed or draft — only on publish. Records source, target, post, freshness, mapping, where-it-breaks, and why-it-worked. Prevents silent recycling in future Intuition Builder runs. Use at publish time for any post that contains a non-trivial analogy. Trigger keywords: catalog, analogy catalog, update catalog, publish, analogy archive.
Appends one structured YAML observation block to substacker shared-context/audience-notes.md only if the week produced at least one observation with confidence ≥ medium. Includes supporting evidence (post slugs + numbers) and reviewed_by_curator flag. Never rewrites or deletes prior entries. Append-only discipline protects downstream agents' shared context. Use at the end of each weekly pipeline after write-weekly-report. Trigger keywords: audience notes, append observation, audience insight, confidence-rated.
Writes the canonical substacker shared-context/section-map.md after writer confirmation of review-artifact proposals. Atomic write with backup snapshot. Validates schema before writing. Use as the final step of a Curator run, only after writer has accepted/modified proposals. Trigger keywords: update section map, write section map, commit sections, apply changes.
Maintains substacker shared-context/topic-ledger.md as an append-and-update index of all topics in the corpus. Each topic row tracks seed/draft/published counts, last-touched date, top-3 seed ids by density, and a hot/warm/cold temperature indicator. Use after any seed is created, promoted to draft, published, or killed. Trigger keywords: ledger, topic index, update index, topic ledger, hot/cold topics.
Proposes adds and removes to the Trend Scout watchlist based on consecutive-failure sources, repeated-reference external authors, and user-added feedback markers. Emits a proposed diff at ops/trend-scout/watchlist-proposed-diff.md for user review. On explicit approval, applies the diff. Monthly cadence. Trigger keywords: watchlist update, add source, remove source, watchlist review, monthly review, source pruning.
Synthesizes intrinsic (DCF) and relative (multiples) valuation outputs into a final value estimate and investment recommendation. Reconciles divergent valuations, reverse-engineers what the market is pricing in (implied growth, implied ROIC), computes margin of safety, and produces a buy/sell/hold recommendation with catalysts. Use when combining multiple valuation approaches, making investment recommendations, reconciling DCF with multiples, or when user mentions reconcile valuations, investment recommendation, margin of safety, implied growth rate, or what is the market pricing in.
Given a current win probability and a downside asymmetry flag, recommends a variance-seeking, neutral, or variance-minimizing posture and emits a numeric multiplier (typically 0.8-1.3) for downstream consumers to apply to boom-bust scores, position sizes, or bet sizes. Favorites minimize variance; underdogs maximize it. Reusable across fantasy sports lineup construction, portfolio allocation, poker bankroll decisions, racing strategy, and any decision where the agent controls a variance knob. Use when user mentions variance strategy, underdog variance, variance seeking, variance minimizing, risk posture, boom bust, must-win variance, favorite strategy, or when a decision module needs a single scalar to bias toward or away from high-variance options.
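A minimal sketch of the posture-to-multiplier mapping. The thresholds and exact multiplier values are illustrative assumptions within the stated 0.8-1.3 range, not the skill's calibrated numbers:

```python
def variance_posture(win_probability: float,
                     downside_is_bounded: bool) -> tuple[str, float]:
    """Map a win probability to a variance posture and a scalar multiplier
    that downstream consumers apply to boom-bust scores or bet sizes."""
    if win_probability >= 0.60:
        # Favorite: lock in the edge, avoid coin flips.
        return "variance-minimizing", 0.8
    if win_probability <= 0.40:
        # Underdog: tail outcomes are the only path to winning; seek
        # variance, more aggressively when the downside is bounded anyway.
        return "variance-seeking", 1.3 if downside_is_bounded else 1.15
    return "neutral", 1.0

posture, mult = variance_posture(0.25, downside_is_bounded=True)
# A 25% underdog with bounded downside gets the full variance-seeking bias.
```

Returning a single scalar is the point: any decision module with a variance knob, from lineup construction to bet sizing, can consume it without knowing the reasoning behind it.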
Transforms data into compelling visual narratives by applying narrative structure, annotation techniques, scrollytelling patterns, and honest framing to data journalism, presentations, and infographics. Use when creating data-driven articles or reports, designing infographics with narrative, building scrollytelling experiences, annotating charts to guide interpretation, or when user mentions data storytelling, presentation design, annotated chart, narrative visualization.
Matches visualization types to data questions and creates narrated reports that highlight insights and recommend actions. Covers chart selection (comparison, trend, distribution, relationship, composition, geographic), perceptual best practices, and narrative reporting (headline, pattern, context, meaning, action). Use when analyzing data for patterns, building dashboards, presenting metrics, monitoring KPIs, or when user mentions "visualize this", "what chart should I use", "create a dashboard", "analyze this data", "show trends", "report on".
Scans a substacker draft line-by-line against the canonical voice-profile.md don't-list and signature moves. Emits phrase-level flags with location, quoted phrase, violation type, voice-profile citation, and up-to-2 suggested rewrites per flag. Use as pass-2 skill (voice) after structural-review completes, when a draft reads competent but not in the writer's voice, or when the writer asks "does this sound like me?" Trigger keywords: voice check, delve, unpack, paradigm shift, sounds AI, does this sound like me, voice violation.
Ranks a proposed set of framings against the writer's voice profile, especially the analogy-direction priority — biology > organizational > sports, with physics/military as voice violations. Produces a tier rating per framing and flags any framing that would break voice. Use in the Intuition Builder pipeline after generating framings, to order them by fit with the writer's register. Trigger keywords: voice fit, analogy direction, biology to AI, organizational to multi-agent, sports to calibration.
Composes the final Technical Reviewer artifact at ops/technical-reviewer/YYYY-MM-DD-{slug}-review.md. Enforces frontmatter schema, section order (Summary → Blockers → Claims → Boundary-Break Suggestions → Glossary Alignment → Could-Not-Verify → Research Log), go/no-go decision rule, and never-modify-draft principle. Use exactly once per Technical Reviewer run as the last step. Trigger keywords: write review, technical review artifact, compose review, claim review output.
Crafts the one-sentence promise a substacker section makes to its reader — specific, testable, non-overlapping with other sections, written in the writer's voice (not marketing). Use when propose-section stages a new section or when an existing promise is being revised. Trigger keywords: section promise, one-sentence promise, section statement, reader promise.
Renders the Trend Scout ranked keep-and-drop lists into ops/trend-scout/YYYY-WW-digest.md using the agent voice profile and including an appendix of all sources surveyed. Use once per weekly run as the terminal skill. Trigger keywords: weekly digest, write digest, Trend Scout digest, Saturday morning digest.
Composes the final substacker report at ops/growth-analyst/YYYY-WW-report.md from ingest + baseline + attribute + per-section + public-page outputs. Enforces 400-800 word budget, YAML frontmatter schema, seven-section body structure. Truncates weakest sections first when over budget. Injects data-caveats from any degraded-mode flags upstream. Use as the final compose step of the weekly pipeline. Trigger keywords: weekly report, compose report, growth report, Monday report.
Runs a comprehensive six-section quality checklist (content, structure, clarity, style, polish, final tests) before writing is shared or published, catching issues that revision and stickiness enhancement might miss. Use when performing final quality checks before sharing, publishing, or submitting writing, or when user mentions pre-publish, final check, ready to publish, last review, quality check, or about to share.
Applies a systematic three-pass revision system (Zinsser, King, Pinker, Clark) to existing drafts — Pass 1 cuts clutter, Pass 2 reduces cognitive load, Pass 3 improves rhythm. Use when revising, editing, or polishing drafts, cutting word count, tightening prose, improving readability, or fixing flow, or when user mentions revision, editing, cut clutter, too wordy, improve readability, fix the flow, reduce word count.
Applies the Heath brothers' SUCCESs model (Simple, Unexpected, Concrete, Credible, Emotional, Stories) to make messages memorable and persuasive, with systematic analysis, targeted improvements, and scoring (0-18 stickiness scorecard). Use when making messages more memorable or compelling, preparing presentations, crafting pitches or campaigns, or when user mentions stickiness, making ideas stick, persuasion, SUCCESs framework, or Heath brothers.
Guides writing architecture planning using McPhee's structural diagramming method, helping select from 8 structure types (list, chronological, circular, dual/triple profile, pyramid, parallel, custom), create visual diagrams, and place gold-coin moments for engagement. Use when planning or organizing writing structure, outlining before drafting, restructuring disorganized drafts, or when user mentions outlining, organizing ideas, structure planning, article architecture, narrative flow.
Rewrites a published substacker essay as three X thread variants (short 3-5 tweets, medium 6-8, long 9-12). Each tweet ≤280 chars. Hook tweet works standalone. No numbering by default (2026 convention for tech-first-principles accounts). Final tweet is the link. If the essay doesn't translate to X, emits a VERDICT line and halts rather than producing weak variants. Trigger keywords: X thread, Twitter thread, thread, tweet, threaded post, thread variants.
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and hooks skeleton.
Uses power tools
Uses Bash, Write, or Edit tools
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Comprehensive .NET development skills for modern C#, ASP.NET, MAUI, Blazor, Aspire, EF Core, Native AOT, testing, security, performance optimization, CI/CD, and cloud-native applications
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.