Generates cold outreach emails from a contact CSV and a self-contained prompt template. Sanitizes names via a Python script, dry-runs samples, then calls the API per row to output an emails CSV.
Install: npx claudepluginhub extruct-ai/gtm-skills --plugin gtm-skills

This skill uses the workspace's default tool permissions.
Generate cold outreach emails from a contact CSV + prompt template. The prompt template is self-contained — it has all voice, research, value prop, proof points, and personalization rules baked in. This skill just runs it per row.
This skill is a runner, not a reasoner. All strategic reasoning (voice, value angles, proof points, research data) was done by the email-prompt-building skill at prompt-build time and embedded in the prompt template. This skill reads the prompt + CSV and generates emails. It does NOT read the context file, hypothesis set, or research files.
prompt template (.md) ─┐
├──▶ generate email per row ──▶ emails CSV
contact CSV ───────────┘
| Input | Source | Required |
|---|---|---|
| Contact CSV | File with recipient data + enrichment columns | yes |
| Prompt template | .md file from email-prompt-building skill | yes |
That's it. No context file, no hypothesis set, no research files.
The prompt template specifies which columns it needs. Check the prompt's "Enrichment data fields" section for the expected column names. Common columns:
Required (always): first_name, last_name, company_name, job_title

Enrichment (campaign-specific): listed in the prompt template.

If the prompt references a field that's not in the CSV, email quality degrades. Check column alignment before running.
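A minimal pre-flight check for that column alignment might look like this (a sketch; the enrichment field names below are hypothetical examples — read the real ones from the prompt's "Enrichment data fields" section):

```python
import csv
import io

# The four always-required columns, per this skill's contract.
REQUIRED = {"first_name", "last_name", "company_name", "job_title"}

def missing_columns(csv_text, enrichment_fields):
    """Return prompt-referenced columns absent from the CSV header."""
    columns = set(csv.DictReader(io.StringIO(csv_text)).fieldnames or [])
    return sorted((REQUIRED | set(enrichment_fields)) - columns)

# Hypothetical contact CSV missing one assumed enrichment column.
sample = ("first_name,last_name,company_name,job_title,tech_stack\n"
          "Ada,Lovelace,Analytical Engines,CTO,python\n")
gaps = missing_columns(sample, {"tech_stack", "recent_funding_round"})
```

If `gaps` is non-empty, fix the CSV (or the prompt) before generating anything.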
Before generating emails, run scripts/sanitize-names.py on the contact CSV:
python3 scripts/sanitize-names.py <contact.csv> [output.csv]
The script strips titles (Dr, Prof, etc.), removes rows with single-character names, emoji, junk values (N/A, Test, -), and fixes all-caps casing. It outputs a *_sanitized.csv and prints what was cleaned/removed.
Review the removed rows before proceeding. Do not generate emails for rows with invalid names.
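The authoritative logic lives in scripts/sanitize-names.py; the sketch below only illustrates the kind of cleaning the script is described as doing (the exact title list and the coarse non-ASCII check are assumptions, not the script's real implementation):

```python
import re

# Assumed title list -- the real script may strip more honorifics.
TITLES = re.compile(r"^(dr|prof|mr|mrs|ms)\.?\s+", re.IGNORECASE)
JUNK = {"n/a", "test", "-", ""}

def clean_name(raw):
    """Return a cleaned name, or None if the row should be dropped."""
    name = TITLES.sub("", raw.strip())
    if name.lower() in JUNK or len(name) < 2:
        return None  # junk value or single-character name
    if not name.isascii() or not name.isprintable():
        return None  # emoji / non-text noise (coarse stand-in check)
    if name.isupper():
        name = name.title()  # fix ALL-CAPS casing
    return name
```

Rows where `clean_name` returns None correspond to the "removed" bucket the real script prints for review.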
Script-first, not in-context. Always generate via a script that calls the API per contact. Never generate emails inside the conversation — it's slow, expensive, and impossible to rerun after prompt edits.
Before spending API credits, show the user a dry run:
Write a generation script that reads the prompt template + contact CSV, calls the API per row, and writes output files. See references/generation-script.md for the script template and implementation details.
Adapt the script to the user's API setup (Anthropic, OpenAI, etc.) and the specific prompt format.
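One possible shape for that script is a loop with a pluggable generator, sketched below. The `{column_name}` substitution scheme and the stubbed `generate` callable are assumptions — in practice `generate` would wrap your provider's SDK (e.g. the Anthropic `messages.create` call), and references/generation-script.md is the real template to follow:

```python
import csv
import io

def run_campaign(prompt_template, csv_text, generate, limit=None):
    """Render the prompt per row and collect generated emails.

    `generate` is any callable prompt -> email body (e.g. a wrapper
    around your provider's SDK); `limit` supports a cheap dry run.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    out = []
    for row in rows[:limit]:
        # Assumed scheme: {column_name} placeholders in the template.
        prompt = prompt_template.format(**row)
        out.append({**row, "email_body": generate(prompt)})
    return out

# Stub generator for a dry run; swap in a real API call, e.g. (Anthropic SDK):
#   client.messages.create(model=..., max_tokens=1024,
#                          messages=[{"role": "user", "content": prompt}])
template = "Write a cold email to {first_name} at {company_name}."
contacts = "first_name,company_name\nAda,Analytical Engines\nGrace,Navy Labs\n"
drafts = run_campaign(template, contacts,
                      generate=lambda p: f"[draft for] {p}", limit=1)
```

The `limit` parameter is what makes the dry run cheap: sample a few rows, show the user, then rerun with `limit=None` once the prompt is approved.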
Always generate two output files:
claude-code-gtm/csv/output/{campaign-slug}/emails.csv — for upload to sequencer

claude-code-gtm/csv/output/{campaign-slug}/emails.md — for human review (one email per section, with contact name and company as headers)

After generating, verify:
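The two output files above can be written like this (a sketch; the column names such as first_name are assumptions, and the real files go under claude-code-gtm/csv/output/{campaign-slug}/):

```python
import csv

def write_outputs(emails, csv_path, md_path):
    """Write the sequencer CSV and the human-review Markdown file."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(emails[0].keys()))
        writer.writeheader()
        writer.writerows(emails)
    with open(md_path, "w") as f:
        for e in emails:
            # One email per section; contact name and company as the header.
            f.write(f"## {e['first_name']} {e['last_name']} ({e['company_name']})\n\n")
            f.write(e["email_body"].strip() + "\n\n")

# Hypothetical generated row, for illustration only.
emails = [{"first_name": "Ada", "last_name": "Lovelace",
           "company_name": "Analytical Engines",
           "email_body": "Hi Ada, quick note."}]
write_outputs(emails, "emails.csv", "emails.md")
```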
When the contact CSV includes segmentation data (from list-segmentation):

Tier 1 companies: email-response-simulation for review before sending

Tier 2 companies: hypothesis_number

Tier 3 companies: list-enrichment or list-building

When the user gives feedback on generated emails, the workflow is always:
Never hand-edit individual emails. If one email is bad, the prompt is bad — fix the source. Track changes made to the prompt so the user can see the evolution.
If no prompt template exists for this campaign, use the email-prompt-building skill to build one. That skill reads the context file and research, then synthesizes a self-contained prompt. Do not build prompts ad hoc in this skill.