From OpenFunnel

Spots companies using specific tech stacks (e.g., Snowflake, Kubernetes, Terraform) via job postings. Detects recent adoptions as leading indicators of buying behavior for sales leads.

```shell
npx claudepluginhub openfunnel/openfunnel-skills
```

This skill uses the workspace's default tool permissions.
Spot companies using a specific tool, platform, or technology — inferred from their job postings. What a company requires in job posts tells you what they're running in production.
Technographic data alone is a static trait. Combined with timing — "just adopted Snowflake" vs. "has used Snowflake for 5 years" — it becomes an inferred pain point and a leading indicator of buying behavior.
This skill bundles two scripts in the same directory as this SKILL.md file. Never read or reference API credentials directly.

- `signup.sh` — handles authentication. Writes credentials to `.env` internally. Never exposes the API key.
- `api.sh` — handles all authenticated API calls. Reads credentials from `.env` internally.

First, resolve the script paths relative to this file's location:
```shell
SKILL_DIR="$(dirname "$(find ~/.agents/skills -name SKILL.md -path "*/spot-companies-using-specific-tech-stack/*" 2>/dev/null | head -1)")"
API="$SKILL_DIR/api.sh"
SIGNUP="$SKILL_DIR/signup.sh"
```
Then use `$SIGNUP` for auth and `$API` for all other calls.
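A small guard (hypothetical, not part of the bundled scripts) can fail fast if path resolution came up empty:

```shell
# Hypothetical guard: verify a resolved script path actually exists
# before invoking it. The bundled scripts do not include this check.
require_file() {
  if [ ! -f "$1" ]; then
    echo "skill script not found: $1" >&2
    return 1
  fi
}

# Usage with the variables resolved above:
# require_file "$API" && require_file "$SIGNUP" || exit 1
```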
Input must be a specific tool name — not general descriptions.
Good inputs: Kubernetes, Snowflake, React, Terraform, dbt, Kafka, Datadog, HubSpot
Bad inputs: "cloud infrastructure" (too general), "data tools" (not specific), "modern stack" (meaningless)
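As an illustration, the vague phrasings above could be screened before any credits are spent. The deny-list here simply mirrors the bad examples and is an assumption, not an official check:

```shell
# Illustrative pre-check: reject the known-vague phrasings above.
# The case patterns are just the "bad inputs" examples, not exhaustive.
is_specific_tool() {
  lowered=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$lowered" in
    "") return 1 ;;
    *"cloud infrastructure"*|*"data tools"*|*"modern stack"*) return 1 ;;
    *) return 0 ;;
  esac
}
```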
All API calls go through `api.sh`. Before anything else, test whether credentials are working by running:

```shell
bash "$API" POST /api/v1/signal/get-signal-list '{"pagination": {"limit": 1, "offset": 0}}'
```
If the call succeeds (returns JSON with signals): skip to Step 1.
If the call fails (returns an error or missing credentials message):
### Welcome to OpenFunnel
OpenFunnel turns daily events in your market into pipeline
— using OpenFunnel's Event Intelligence engine.
To get started, I'll authenticate you via the API.
**What's your work email?**
Wait for user input. Then:
```shell
bash "$SIGNUP" start "<user_email>"
```
On success, this returns `{"status": "verification_code_sent", "email": "..."}`. Then tell the user:

I sent a 6-digit verification code to **{email}**. Reply with the code.
```shell
bash "$SIGNUP" verify "<user_email>" "<code>"
```
On success, this returns `{"status": "authenticated", "user_id": "..."}`. Credentials are written to `.env` and `.gitignore` is updated automatically. If it returns `{"status": "failed", ...}`, ask the user to re-check the code and retry.

Confirm the credentials work by re-running:

```shell
bash "$API" POST /api/v1/signal/get-signal-list '{"pagination": {"limit": 1, "offset": 0}}'
```

**Step 1:** What specific technology is the user looking for? If they give a vague description, ask for the specific tool name.
The deploy endpoint takes three fields:
- `technographic_list` — the primary tool names (e.g., `["Kubernetes", "K8s"]`)
- `technographic_variations` — common abbreviations, alternate names, related terms (e.g., `["k8s", "kubectl", "EKS", "AKS", "GKE"]`)
- `technography_context` — a short sentence explaining what you're looking for (e.g., "Companies running Kubernetes for container orchestration")

Help the user build these three fields from their input.
Timeframe: Last day to last year. Default: last 3 months. Shorter timeframes surface recent adopters. Longer timeframes surface established users.
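A helper mapping the user's phrasing to the numeric `timeframe` field might look like this. The mapping itself is an assumption; only the 90-day default comes from this document:

```shell
# Hypothetical mapping from a timeframe phrase to days.
# Only the 90-day ("last 3 months") default is stated in this skill doc.
timeframe_days() {
  case "$1" in
    "last day")      echo 1 ;;
    "last week")     echo 7 ;;
    "last month")    echo 30 ;;
    "last 3 months") echo 90 ;;
    "last year")     echo 365 ;;
    *)               echo 90 ;;  # fall back to the default
  esac
}
```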
Run `bash "$API" POST /api/v1/signal/get-signal-list '{"pagination": {"limit": 100, "offset": 0}}'` to get all currently deployed signals.
A signal is unique by its query + ICP pair. When checking for matches, compare BOTH:

- the query must match the technology the user asked for, and
- `icp.id` must match the user's intended ICP.

If a potential match is found (query + ICP both match):
I found an existing signal that covers this:
**{signal_name}** (ID: {signal_id})
**ICP:** {icp.name}
Want to use this one, or deploy a new signal?
If query matches but ICP is different:
I found a signal with a similar query but a different ICP:
**{signal_name}** (ID: {signal_id})
**ICP:** {icp.name}
This uses a different ICP than what you need. Want to:
1. Use this one anyway
2. Deploy a new signal with the right ICP
Wait for user input.
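The uniqueness rule above (query AND `icp.id` must both match) can be stated as a tiny predicate; extracting the two fields from the signal JSON is left out of this sketch:

```shell
# Sketch of the uniqueness rule: an existing signal is the same signal
# only when BOTH its query and its icp.id equal what the user wants.
same_signal() {
  existing_query=$1; existing_icp=$2; wanted_query=$3; wanted_icp=$4
  [ "$existing_query" = "$wanted_query" ] && [ "$existing_icp" = "$wanted_icp" ]
}
```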
Run `bash "$API" POST /api/v1/signal/ '{"signal_id": <id>}'` to get accounts and people matched by this signal.
### Results from: {signal_name}
**{total_accounts} accounts found | {total_people} people found**
If the user wants full details, run `bash "$API" POST /api/v2/account/batch '{"account_ids": [<ids>]}'`.
After presenting:
Would you like to:
1. See full details on specific accounts
2. Narrow results with filters (size, funding, location)
3. Deploy an additional signal for broader coverage ⚡ *uses credits*
Fetch available ICP profiles: `bash "$API" GET /api/v1/icp/list`.
If ICPs exist: present them and let the user pick one, or "none" to skip.
If the user types "none" or skips ICP selection:
Auto-create a broad fallback ICP:
```shell
bash "$API" POST /api/v1/icp/create '{"name": "Broad Default ICP", "target_roles": ["Any"], "employee_ranges": ["1-10", "11-50", "51-200", "201-500", "501-1000", "1001-5000", "5001-10000", "10001+"], "location": ["Any"]}'
```
No ICP selected, so I created a broad fallback ICP: **{name}** (ID: {id})
Using this ICP for your signal.
If no ICPs exist:
You don't have an ICP profile yet. A quick one will make results much sharper —
it filters by company size, location, and the roles you're targeting.
1. **Quick setup** (recommended) — takes 30 seconds
2. **Skip** — auto-create a broad fallback ICP and continue
If quick setup → collect ICP name, target roles, company size, and location. Create via `bash "$API" POST /api/v1/icp/create '...'`.
If skip → auto-create the broad fallback ICP as above.
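The collected answers can be assembled into the `icp/create` body with plain `printf`. A sketch — the field names come from the endpoint above, but the helper and example values are hypothetical, and the list arguments must already be comma-separated JSON strings:

```shell
# Sketch: build the /api/v1/icp/create body from collected answers.
# Field names match the endpoint above; this helper is illustrative only.
build_icp_payload() {
  printf '{"name": "%s", "target_roles": [%s], "employee_ranges": [%s], "location": [%s]}' \
    "$1" "$2" "$3" "$4"
}

# Example (hypothetical values):
# bash "$API" POST /api/v1/icp/create \
#   "$(build_icp_payload "Data Buyers" '"Head of Data"' '"51-200", "201-500"' '"US"')"
```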
I'll deploy a **technography** signal:
**Name:** {auto-generated descriptive name}
**Tech:** {technographic_list}
**Variations:** {technographic_variations}
**Context:** {technography_context}
**Timeframe:** {default — 90 days}
**ICP:** {selected or created ICP name}
⚡ *This will use credits from your plan.*
Other options:
- **Repeat daily** — re-run this signal every day for continuous monitoring
- **Audience name** — auto-add results to a named audience
- **Credit limit** — cap spending on this signal
Set any of these, or "deploy" to go with defaults.
Wait for user input. Then deploy:
```shell
bash "$API" POST /api/v1/signal/deploy/technography-search-agent '{"name": "<name>", "technographic_list": ["<tech>"], "technographic_variations": ["<variations>"], "technography_context": "<context>", "timeframe": <days>, "icp_id": <id>, "repeat": <true|false>}'
```
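The deploy body above can likewise be templated. A sketch, not part of the bundled scripts — the two list arguments must already be JSON arrays, and the example values are placeholders:

```shell
# Sketch: assemble the technography-search-agent deploy body.
# Args: name, tech list (JSON array), variations (JSON array),
#       context, timeframe in days, icp id, repeat (true|false).
build_deploy_payload() {
  printf '{"name": "%s", "technographic_list": %s, "technographic_variations": %s, "technography_context": "%s", "timeframe": %s, "icp_id": %s, "repeat": %s}' \
    "$1" "$2" "$3" "$4" "$5" "$6" "$7"
}

# Example (placeholder values):
# bash "$API" POST /api/v1/signal/deploy/technography-search-agent \
#   "$(build_deploy_payload "Snowflake adopters" '["Snowflake"]' '["Snowpipe"]' \
#      "Companies running Snowflake" 90 7 false)"
```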
Signal deployed: **{name}** (ID: {signal_id})
This is now scanning job posts for companies using this technology.
Results come in as they're found — just say "check on {signal_name}" anytime.
Run `bash "$API" POST /api/v1/signal/ '{"signal_id": <id>}'` to get results found so far.