Install: `npx claudepluginhub browserbase/skills --plugin browserbase-cli`
Discover and deeply research companies to sell to. Uses Browserbase Search API for discovery and a Plan→Research→Synthesize pattern for deep enrichment — outputting a scored research report and CSV.
Required: BROWSERBASE_API_KEY env var and bb CLI installed.
First-run setup: On the first run you'll be prompted to approve bb fetch, bb search, cat, mkdir, sed, etc. Select "Yes, and don't ask again for: bb fetch:*" (or equivalent) for each to auto-approve for the session. To permanently approve, add these to your ~/.claude/settings.json under permissions.allow:
"Bash(bb:*)", "Bash(bunx:*)", "Bash(bun:*)", "Bash(node:*)",
"Bash(cat:*)", "Bash(mkdir:*)", "Bash(sed:*)", "Bash(head:*)", "Bash(tr:*)", "Bash(rm:*)"
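A minimal sketch of the resulting settings file, assuming the stock Claude Code `permissions.allow` schema (merge into your existing `~/.claude/settings.json` rather than replacing it):

```json
{
  "permissions": {
    "allow": [
      "Bash(bb:*)", "Bash(bunx:*)", "Bash(bun:*)", "Bash(node:*)",
      "Bash(cat:*)", "Bash(mkdir:*)", "Bash(sed:*)", "Bash(head:*)",
      "Bash(tr:*)", "Bash(rm:*)"
    ]
  }
}
```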
Path rules: Always use the full literal path in all Bash commands — NOT ~ or $HOME (both trigger "shell expansion syntax" approval prompts). Resolve the home directory once and use it everywhere. When constructing subagent prompts, replace {SKILL_DIR} with the full literal path.
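One way to satisfy the path rule is to resolve the home directory a single time at the start of the session, then paste the printed literal value into every later command (a sketch; `/Users/alice` below is an illustrative result):

```shell
# Run once; prints e.g. /Users/alice. All subsequent commands then use
# that literal value, never ~ or $HOME:
#   mkdir -p /Users/alice/Desktop/acme_research_2024-06-01
echo "$HOME"
```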
Output directory: All research output goes to ~/Desktop/{company_slug}_research_{YYYY-MM-DD}/. This directory contains one .md file per researched company plus a final .csv. The user gets both the scored spreadsheet and the full research files on their Desktop.
CRITICAL — Tool restrictions (applies to main agent AND all subagents):
- Search: use `bb search`. NEVER use WebSearch.
- Page extraction: use `node {SKILL_DIR}/scripts/extract_page.mjs "<url>"`. This script fetches via `bb fetch`, parses title + meta tags + visible body text, and automatically falls back to `bb browse` when the page is JS-rendered or over 1MB. NEVER hand-roll a `bb fetch | sed` pipeline — it silently strips meta tags and doesn't handle the JSON envelope. NEVER use WebFetch.
- File writing: write `{OUTPUT_DIR}/{company-slug}.md` using a bash heredoc. NEVER use the Write tool or `python3 -c`. See references/example-research.md for the file format.
- Reporting: run `node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open` — generates the HTML report and CSV in one step and opens the overview in the browser.
- URL listing: run `node {SKILL_DIR}/scripts/list_urls.mjs /tmp` after discovery; rely on `list_urls.mjs` for dedup.

CRITICAL — Anti-hallucination rules (applies to main agent AND all subagents):

- NEVER infer `product_description`, `industry`, or `target_audience` from a site's fonts, framework (Framer/Next.js/React), design system, or typography. These are cosmetic and say nothing about what the company sells.
- Leave fields you cannot verify as `Unknown` — do not pattern-match them onto the ICP.
- `product_description` MUST quote or paraphrase a specific phrase from `extract_page.mjs` output (TITLE, META_DESCRIPTION, OG_DESCRIPTION, HEADINGS, or BODY). If none of those fields yields a recognizable product statement, write `Unknown — homepage content not accessible`.
- If `product_description` is `Unknown`, cap `icp_fit_score` at 3 and set `icp_fit_reasoning` to `Insufficient evidence — homepage returned no readable content`.

CRITICAL — Minimize permission prompts:

- Combine related commands with `&&` chaining so each batch needs only one Bash approval.

Follow these 5 steps in order. Do not skip steps or reorder.
Before starting, create the output directory on the user's Desktop:
OUTPUT_DIR=~/Desktop/{company_slug}_research_{YYYY-MM-DD}
mkdir -p "$OUTPUT_DIR"
Replace {company_slug} with the user's company name (lowercase, hyphenated) and {YYYY-MM-DD} with today's date. Pass {OUTPUT_DIR} (as a full literal path, not with ~) to all subagent prompts so they write research files there.
Also clean up discovery batch files from prior runs:
rm -f /tmp/company_discovery_batch_*.json
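A worked example with the placeholders filled in (hypothetical values: company "Acme Corp", run date 2024-06-01; `/tmp` stands in for the literal Desktop path here so the sketch is runnable anywhere):

```shell
# Real runs use the full literal path, e.g.
#   /Users/alice/Desktop/acme-corp_research_2024-06-01
OUTPUT_DIR="/tmp/acme-corp_research_2024-06-01"
mkdir -p "$OUTPUT_DIR"
rm -f /tmp/company_discovery_batch_*.json   # clear prior discovery batches
echo "$OUTPUT_DIR"
```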
This is the most important step. The quality of everything downstream depends on deeply understanding the user's company.
Ask the user for their company name or URL
Check for an existing profile in `{SKILL_DIR}/profiles/` (ignore `example.json`). If none exists, run a full deep research on the user's company using the Plan→Research→Synthesize pattern.
See references/research-patterns.md for sub-question templates and research methodology.
Key research steps:
- Search the company: `bb search "{company name}" --num-results 10`
- Extract the homepage: `node {SKILL_DIR}/scripts/extract_page.mjs "{company website}"`
- Map key pages (e.g. /about or /customers):
  - `bb fetch --allow-redirects "{company website}/sitemap.xml"` — the sitemap is small, so raw `bb fetch` is fine
  - Filter sitemap URLs for keywords: customer, case-stud, pricing, about, use-case, industry, solution
  - Check `/llms.txt` for page descriptions
  - Fetch the selected pages with `extract_page.mjs` (NOT raw `bb fetch`)
- Synthesize into a profile: Company, Product, Existing Customers, Competitors, Use Cases. Do NOT include ICP or sub-verticals — those are per-run decisions.
Present the profile to the user for confirmation. Do not proceed until confirmed.
Save the confirmed profile to {SKILL_DIR}/profiles/{company-slug}.json
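The sitemap keyword filter above can be sketched like this (a hypothetical pipeline; in a real run the XML comes from `bb fetch --allow-redirects "{company website}/sitemap.xml"`, and a canned fragment stands in here so the filter itself is visible):

```shell
# Canned sitemap fragment standing in for the bb fetch output.
sitemap='<url><loc>https://example.com/pricing</loc></url>
<url><loc>https://example.com/blog/post-1</loc></url>
<url><loc>https://example.com/customers</loc></url>'

# Split XML tags onto their own lines, keep only high-signal URLs.
filtered=$(printf '%s\n' "$sitemap" \
  | tr '<>' '\n' \
  | grep -E 'customer|case-stud|pricing|about|use-case|industry|solution')
echo "$filtered"
```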
Ask clarifying questions using AskUserQuestion with checkboxes:
| Mode | Research per company | Best for |
|---|---|---|
| `quick` | Homepage + 1-2 searches | ~100 companies, broad scan |
| `deep` | 2-3 sub-questions, 5-8 tool calls | ~50 companies, solid research |
| `deeper` | 4-5 sub-questions, 10-15 tool calls | ~25 companies, full intelligence |
Formula: ceil(requested_companies / 35) search queries needed. Over-discover by ~2-3x because filtering typically drops 50-70%.
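A worked instance of the formula, sketched with shell integer arithmetic (the request size of 100 is illustrative):

```shell
# ceil(requested / 35) without floating point: add (35 - 1) before dividing.
REQUESTED=100
QUERIES=$(( (REQUESTED + 34) / 35 ))
echo "$QUERIES"   # 100 requested companies -> 3 search queries
# With 2-3x over-discovery, expect to collect roughly 200-300 raw URLs up front.
```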
Generate search queries with these patterns:
Process:
bb search "{query}" --num-results 25 --output /tmp/company_discovery_batch_{N}.json
node {SKILL_DIR}/scripts/list_urls.mjs /tmp

See references/workflow.md for subagent prompt templates and wave management.
Launch subagents to research companies in parallel. See references/workflow.md for the enrichment subagent prompt template. See references/research-patterns.md for the full research methodology.
Process:
Split filtered URLs into groups per subagent (quick: ~10, deep: ~5, deeper: ~2-3)
Launch ALL enrichment subagents at once (up to ~6 per message)
Each subagent uses ONLY Bash — for each company:
Phase A — Plan (skip in quick mode): Decompose into 2-5 sub-questions based on ICP and enrichment fields.
Phase B — Research Loop: Search and fetch pages, extract findings. Respect step budget (quick: 2-3, deep: 5-8, deeper: 10-15).
Phase C — Synthesize: Score ICP fit 1-10 with evidence. Fill enrichment fields from findings.
Subagents write ALL markdown files to {OUTPUT_DIR}/ in a SINGLE Bash call using chained heredocs
After ALL subagents complete, proceed to Step 5
Critical: Include the confirmed ICP description verbatim in every subagent prompt. Pass the full literal {OUTPUT_DIR} path to every subagent.
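The single-Bash-call heredoc pattern from Step 4 can be sketched like this (hypothetical company slugs and fields; references/example-research.md defines the real file format):

```shell
# One invocation, multiple files via chained heredocs; no Write tool, no python3 -c.
OUTPUT_DIR="${OUTPUT_DIR:-/tmp/demo_research}"   # stand-in for the literal Desktop path
mkdir -p "$OUTPUT_DIR"
cat > "$OUTPUT_DIR/acme.md" <<'EOF'
# Acme
icp_fit_score: 8
icp_fit_reasoning: Series A e-commerce SaaS; strong ICP match.
EOF
cat > "$OUTPUT_DIR/globex.md" <<'EOF'
# Globex
icp_fit_score: 3
icp_fit_reasoning: Insufficient evidence — homepage returned no readable content.
EOF
```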
Generate HTML report + CSV (opens overview in browser automatically):
node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open
This generates:
- `{OUTPUT_DIR}/index.html` — overview page with scored table (opens in browser)
- `{OUTPUT_DIR}/companies/*.html` — individual company pages (linked from overview)
- `{OUTPUT_DIR}/results.csv` — scored spreadsheet for import into sheets/CRM

Present a summary in chat too:
## Company Research Complete
- **Total companies researched**: {count}
- **Depth mode**: {mode}
- **Score distribution**:
- Strong fit (8-10): {count}
- Partial fit (5-7): {count}
- Weak fit (1-4): {count}
- **Report opened in browser**: ~/Desktop/{company_slug}_research_{date}/index.html
| Company | Score | Product | Industry | Fit Reasoning |
|---------|-------|---------|----------|---------------|
| Acme | 9 | AI inventory management | E-commerce SaaS | Series A, uses Selenium, expanding to EU |
Offer to dig deeper into specific companies, adjust scoring criteria, or re-run discovery with different queries.