From gtm-skills
Generates self-contained Markdown prompt templates for cold outreach email campaigns from company context files, vertical research, hypotheses, and CSV enrichment columns. Auto-activates when the user mentions cold email or outreach prompts.
Install with `npx claudepluginhub extruct-ai/gtm-skills --plugin gtm-skills`. This skill uses the workspace's default tool permissions.
Generate self-contained prompt templates for cold outreach campaigns. Each prompt encodes everything the email generator needs: voice, research data, value prop, proof points, and personalization rules. No external file references at runtime.
This skill is a generator, not a template. It reads the company context file and campaign research, reasons about what fits this specific audience, and produces a self-contained prompt. Each campaign gets its own prompt. Each company gets its own context file. Nothing is hardcoded in this skill.
BUILD TIME (this skill)
┌─────────────────────────────────────┐
context file ────────────▶│ │
research / hypothesis ───▶│ Synthesize into self-contained │──▶ prompt template (.md)
enrichment column list ──▶│ prompt with reasoning baked in │
└─────────────────────────────────────┘
RUN TIME (email-generation skill)
┌─────────────────────────────────────┐
prompt template (.md) ───▶│ │
contact CSV ─────────────▶│ Generate emails per row │──▶ emails CSV
└─────────────────────────────────────┘
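The run-time half of the diagram can be sketched as a plain per-row loop. `generate_email` here is a hypothetical stand-in for the email-generation skill, and the column names are illustrative, not part of the skill's schema:

```python
import csv

def generate_email(prompt_template: str, row: dict) -> dict:
    """Hypothetical generator call: fills the prompt with one contact
    row and returns the drafted email fields. The real skill would
    invoke an LLM here; this placeholder only shapes the output."""
    return {
        "email": row.get("email", ""),
        "subject": f"Draft subject for {row.get('company', 'unknown')}",
        "body": "...",
    }

def run_campaign(prompt_path: str, contacts_path: str, out_path: str) -> int:
    """Read the self-contained prompt once, generate one email per
    contact row, and write the drafts back out as a CSV."""
    with open(prompt_path) as f:
        prompt_template = f.read()
    with open(contacts_path, newline="") as f:
        rows = list(csv.DictReader(f))
    drafts = [generate_email(prompt_template, row) for row in rows]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["email", "subject", "body"])
        writer.writeheader()
        writer.writerows(drafts)
    return len(drafts)
```

The key property the diagram encodes survives in the sketch: the prompt file is read once and carries everything; no context or research files are touched at run time.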
| Input | Source | What to extract |
|---|---|---|
| Context file | claude-code-gtm/context/{company}_context.md | Voice, sender, value prop, proof library, key numbers, banned words |
| Research | claude-code-gtm/context/{vertical-slug}/sourcing_research.md | Verified data points, statistics, tool comparisons |
| Hypothesis set | claude-code-gtm/context/{vertical-slug}/hypothesis_set.md | Numbered hypotheses with mechanisms and evidence |
| Enrichment columns | CSV headers from list-enrichment output | Field names and what they contain |
| Campaign brief | User describes audience, roles, goals | Target vertical, role types, campaign angle |
A single .md file at claude-code-gtm/prompts/{vertical-slug}/en_first_email.md containing:
Read these files before writing anything:
claude-code-gtm/context/{company}_context.md
claude-code-gtm/context/{vertical-slug}/sourcing_research.md
claude-code-gtm/context/{vertical-slug}/hypothesis_set.md
Also read the contact CSV headers. Before writing any prompt rules, check which enrichment fields actually exist in the CSV. Only reference fields that are present. If the prompt needs a field that isn't there, either ask the user to add it via enrichment or drop that rule.
Check persona spread. If the contact list spans multiple personas (e.g., executives + ICs + ops), recommend splitting into separate prompts per role cluster. One prompt trying to handle all roles produces generic output. Flag this to the user before proceeding.
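Both preflight checks above (fields present, persona spread) can be sketched in a few lines. The `PERSONA_KEYWORDS` clusters and the `title` column are assumptions for illustration, not part of the skill's schema:

```python
import csv
from collections import Counter

# Hypothetical role -> persona clusters; real clusters come from the
# campaign brief, not from this skill.
PERSONA_KEYWORDS = {
    "exec": ("ceo", "cfo", "vp", "chief", "head"),
    "ic": ("engineer", "analyst", "developer"),
    "ops": ("operations", "ops", "enablement"),
}

def preflight(contacts_path: str, required_fields: list) -> dict:
    """Check that the CSV has every field the prompt will reference,
    and whether the list spans more than one persona cluster."""
    with open(contacts_path, newline="") as f:
        reader = csv.DictReader(f)
        headers = reader.fieldnames or []
        rows = list(reader)
    missing = [fld for fld in required_fields if fld not in headers]
    personas = Counter()
    for row in rows:
        title = (row.get("title") or "").lower()
        for persona, keywords in PERSONA_KEYWORDS.items():
            if any(k in title for k in keywords):
                personas[persona] += 1
                break
    return {
        "missing_fields": missing,         # ask for enrichment or drop the rule
        "persona_spread": dict(personas),  # counts per cluster
        "needs_split": len(personas) > 1,  # >1 cluster -> separate prompts
    }
```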
This is where the skill does real work. For each section of the prompt:
Voice → from context file:
Copy sender name, tone, constraints, and banned words from the context file's ## Voice section into the prompt.

P1 → from research + hypotheses:
Use the platform_type enrichment field, or derive the actual description from the company profile. If the enrichment data doesn't include platform_type, instruct the generator to describe what the company actually does based on its description.

Competitive awareness rules (embed in P1/P2):
P2 → from context file → What We Do:
P4 → from context file → Proof Library:
Select proof points based on THREE dimensions:
| Dimension | Logic |
|---|---|
| Peer relevance | Proof company should be same size or larger than prospect. Never cite a smaller company as proof to a bigger one. |
| Hypothesis alignment | Proof point should validate the same hypothesis used in P1. |
| Non-redundancy | If a stat appears in P2, do NOT repeat it in P4. |
If no proof point meets all three criteria, drop P4 entirely (use a shorter structural variant instead).
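The three-dimensional filter with its drop-P4 fallback can be sketched directly. The field names (`company_size`, `hypothesis`, `stat`) are illustrative assumptions, not the skill's actual schema:

```python
def select_proof(proofs, prospect_size, p1_hypothesis, p2_stats):
    """Return the first proof point passing all three dimensions,
    or None, which signals: drop P4 and use a shorter variant."""
    for p in proofs:
        peer_ok = p["company_size"] >= prospect_size  # peer relevance
        hypo_ok = p["hypothesis"] == p1_hypothesis    # hypothesis alignment
        fresh = p["stat"] not in p2_stats             # non-redundancy with P2
        if peer_ok and hypo_ok and fresh:
            return p
    return None
```

Note that the rules compose with AND, not with a score: a proof point that fails any one dimension is unusable, which is why the fallback is structural (drop the paragraph) rather than "pick the least bad proof".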
Banned phrasing → from context file + campaign-specific:
Write the .md file following this skeleton:
[Role line from context → Voice → Sender]
[Core pain — 2-3 sentences from research. Not generic.]
## Hard constraints
[From context → Voice. Copied verbatim.]
## Research context
[Verified data points from sourcing_research.md. Actual numbers, tool names,
coverage gaps. This is the foundation for P1.]
## Enrichment data fields
[Table: field name → what it tells you → how to use it in the email]
## Hypothesis-based P1
[Per hypothesis: mechanism, evidence, usage rules.
All grounded in research data.]
## Role-based emphasis
[Map role keywords → emphasis. Use specific data points.]
## Structural variants
[Select variant per recipient based on role + seniority from enrichment data.
See "Structural Variants" section below for definitions.]
## Competitive awareness
[Rules for handling prospects with overlapping capabilities.]
## Proof point selection
[Three-dimensional selection: peer relevance, hypothesis alignment, non-redundancy.]
## Example query rules
[Must reference prospect's actual vertical. Never reuse across prospects.]
P1 — [Rules referencing hypotheses and enrichment fields. Use actual platform description, not generic framing.]
P2 — [Synthesized value angles per hypothesis. Key numbers from context. Vertical-specific example queries.]
P3 — [CTA rules with campaign-specific examples]
P4 — [Proof points with conditions. Drop entirely if no proof meets all three criteria.]
## Subject line rules
[Subject references the prospect's problem, not your product. Never sound like
you're selling data or leads. No "boost your pipeline" or "better lead lists."
Frame around THEIR challenge: coverage gap, manual process, missed deals.]
## Output format
[JSON keys]
## Banned phrasing
[From context → Voice + campaign additions]
## Example emails
[Include 2-3 full example emails as demonstrations. Models follow examples
better than instructions. Each example should show a different structural
variant or hypothesis. Annotate each with which variant, hypothesis, and
enrichment fields it uses.]
Every generated prompt must include these rules verbatim. These are the most common ways cold emails fail:
Before saving, verify:
claude-code-gtm/prompts/{vertical-slug}/en_first_email.md
claude-code-gtm/prompts/{vertical-slug}/en_follow_up_email.md (if follow-up needed)
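A pre-save check could look like the sketch below: scan the generated .md for the skeleton's section headings. Which headings count as mandatory is an assumption here; adjust the list to the campaign:

```python
# Headings from the prompt skeleton; treating all of these as
# mandatory is an assumption for illustration.
REQUIRED_SECTIONS = [
    "## Hard constraints",
    "## Research context",
    "## Enrichment data fields",
    "## Hypothesis-based P1",
    "## Structural variants",
    "## Output format",
    "## Banned phrasing",
    "## Example emails",
]

def verify_prompt(path: str) -> list:
    """Return the skeleton sections missing from the generated prompt;
    an empty list means the file is safe to save."""
    with open(path) as f:
        text = f.read()
    return [s for s in REQUIRED_SECTIONS if s not in text]
```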
Select structure based on role + seniority from enrichment data. These are defaults. Override from context file or user input.
4 paragraphs, ≤120 words.
3 paragraphs, ≤90 words. No PS.
2-3 paragraphs, ≤70 words. Forwardable.
2 paragraphs, ≤60 words. Peer-to-peer tone.
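Selecting a default variant from enrichment data might look like the sketch below. The tier labels and the seniority-to-variant mapping are assumptions (shorter emails for more senior readers); only the paragraph and word caps come from the defaults above:

```python
# Hypothetical seniority tiers mapped to the four default variants.
VARIANTS = {
    "junior":    {"paragraphs": 4, "max_words": 120},
    "manager":   {"paragraphs": 3, "max_words": 90},   # no PS
    "director":  {"paragraphs": 3, "max_words": 70},   # forwardable
    "executive": {"paragraphs": 2, "max_words": 60},   # peer-to-peer tone
}

def pick_variant(seniority: str) -> dict:
    """Return the default structure for a seniority tier; fall back to
    the manager variant when the tier is missing or unrecognized."""
    return VARIANTS.get((seniority or "").lower(), VARIANTS["manager"])
```

Overrides from the context file or the user would simply replace entries in this table per campaign.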
See references/prompt-patterns.md for patterns distilled from past campaigns.