From gtm-skills
Designs enrichment columns that bridge research hypotheses to list enrichment. Segmentation mode scores company fit; personalization mode captures hooks. An interactive process outputs column_configs.

```
npx claudepluginhub extruct-ai/gtm-skills --plugin gtm-skills
```

This skill uses the workspace's default tool permissions.
Bridge the gap between research hypotheses and table enrichment. Define WHAT to research about each company before running enrichment.
Related skills in this plugin:
- Adds research-powered enrichment columns (e.g., funding, verticals, tech stack) to Extruct company tables. Use to run column configs from enrichment-design, design on-the-fly, trigger runs, or monitor progress.
- Interprets firmographic enrichment data to normalize fields, score ICP fit, detect buying triggers like funding or hiring spikes, segment accounts, and recommend GTM/ABM outreach.
- Designs B2B lead enrichment waterfalls, ICP scoring frameworks with firmographic/technographic/intent signals, and contact verification pipelines using Clay, Apollo, ZoomInfo.
Mode 1: Segmentation
Use when: market-research has produced a hypothesis set. Pairs with: list-enrichment — this skill designs the columns, that skill runs them.
Goal: Design columns that score or confirm hypothesis fit per company.
Input: Hypothesis set (from market-research or context file)
Process:
Output: column_configs
Example: If the hypothesis is "Database blind spot — 80-90% of targets invisible to standard tools":
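A minimal sketch of what such a column could look like (the name, key, and prompt here are hypothetical; only the schema fields come from this skill):

```json
{
  "kind": "agent",
  "name": "Standard Tool Visibility",
  "key": "standard_tool_visibility",
  "value": {
    "agent_type": "research_pro",
    "prompt": "Research {input}. Can this company be found in mainstream B2B databases with accurate data? Return {\"match\": true/false, \"evidence\": \"...\"}.",
    "output_format": "json"
  }
}
```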
Mode 2: Personalization
Goal: Design columns that capture company-specific hooks for email personalization.
Input: Target list + what the user wants to personalize on
Process:
Output: column_configs
Example: For personalization hooks:
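One illustrative hook column (the name, key, and prompt are hypothetical, shown only to make the shape concrete):

```json
{
  "kind": "agent",
  "name": "Recent Company News",
  "key": "recent_company_news",
  "value": {
    "agent_type": "research_pro",
    "prompt": "Research {input}. Summarize one recent, specific development (funding round, product launch, notable hire) in 1-2 sentences usable as an email opener.",
    "output_format": "text"
  }
}
```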
Do NOT just generate columns silently. Walk through this with the user:
Step 1: Present the framework
Show the user the two modes and ask which applies (or both).
Step 2: Propose initial columns
Based on hypotheses or user input, propose 3-5 columns. For each, show:
Column: [name]
Type: [output_format]
Agent: [research_pro | llm]
Prompt: [the actual prompt text]
Why: [what this tells us for segmentation/personalization]
Step 3: Refine together
Ask:
Step 4: Confirm column budget
Guidance:
Step 5: Output column_configs
Generate the final column configs as a JSON array ready for list-enrichment:
```json
[
  {
    "kind": "agent",
    "name": "Column Display Name",
    "key": "column_key_snake_case",
    "value": {
      "agent_type": "research_pro",
      "prompt": "Research prompt using {input} for domain...",
      "output_format": "text"
    }
  }
]
```
Choosing agent_type:

| Data point type | Agent type | Why |
|---|---|---|
| Factual data from the web (funding, launches, news) | research_pro | Needs web research |
| Classification from company profile | llm | Profile data is enough |
| Nuanced judgment (maturity, fit score) | research_reasoning | Needs chain-of-thought |
| People/org structure | linkedin | LinkedIn-specific |
Choosing output_format:

| Data point type | Format | When |
|---|---|---|
| Free-form research | text | Open-ended questions |
| Score/rating | grade | 1-5 scale assessments |
| Category | select | Mutually exclusive buckets |
| Multiple tags | multiselect | Non-exclusive tags |
| Structured data | json | Multiple related fields |
| Yes/no with evidence | json | {"match": bool, "evidence": str} |
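Putting the two tables together, a hypothetical pair of columns (names, keys, and prompts are illustrative): a grade column using research_reasoning for a nuanced fit judgment, and a select column using llm with the labels spelled out in the prompt:

```json
[
  {
    "kind": "agent",
    "name": "ICP Fit Score",
    "key": "icp_fit_score",
    "value": {
      "agent_type": "research_reasoning",
      "prompt": "Assess how well {input} fits the target customer profile. Rate 1-5 with a brief justification.",
      "output_format": "grade"
    }
  },
  {
    "kind": "agent",
    "name": "Primary Vertical",
    "key": "primary_vertical",
    "value": {
      "agent_type": "llm",
      "prompt": "Classify {input}'s primary vertical as exactly one of: Fintech, Healthcare, E-commerce, Other.",
      "output_format": "select"
    }
  }
]
```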
Prompt tips:
- Use {input} for the company domain
- For select/multiselect, list the labels in the prompt too

See references/data-point-library.md for ~20 pre-built column configs organized by use case.
After column design is complete:
- Present the column_configs JSON to the user
- Hand off to list-enrichment: "Run that skill with your table ID and these columns."
- The list-enrichment workflow handles execution from there