Market intelligence powered by Nimble Web Search Agents. Discovers businesses by type and geography using Nimble WSAs. Audit mode compares user lists from Google Sheets/CSV against fresh discoveries, categorizing matches and gaps. Useful for market sizing and prospect lists.

Install: npx claudepluginhub nimbleway/agent-skills --plugin nimble
User request: $ARGUMENTS
Before running any commands, read references/nimble-playbook.md for Claude Code
constraints (no shell state, no &/wait, sub-agent permissions, communication style).
Run the preflight pattern from references/nimble-playbook.md (5 simultaneous Bash
calls: date calc, today, CLI check, profile load, index.md load).
Also simultaneously:
mkdir -p ~/.nimble/memory/{reports,market-finder/checkpoints}
ls ~/.nimble/memory/market-finder/checkpoints/ 2>/dev/null

From the results:
- If no business profile exists, follow references/profile-and-onboarding.md, then stop.
- references/nimble-playbook.md covers quick refresh detection. Market-finder tweak: in quick refresh mode, skip enrichment and only discover new metros.

Parse $ARGUMENTS for business type, geography, qualifiers, and mode detection.
Check $ARGUMENTS for a reference list. Read references/audit-mode.md for the
full detection signals and parsing rules.
| Signal | Mode |
|---|---|
| Google Sheet URL, CSV path, or inline list of 3+ businesses | Audit |
| Explicit audit language ("audit my list", "compare against", "gap analysis") | Audit |
| No reference list provided | Discovery (default) |
If a reference list is present but intent is ambiguous, ask: "Want me to audit your list against fresh discovery, or use it as a starting point?"
If audit language is detected but no reference list is provided, ask: "You mentioned auditing — please provide your list (Google Sheet URL, CSV file path, or paste inline)." Do not proceed with Audit mode until a reference list is received.
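As an illustration, the signal table could be approximated with crude heuristics like these (the function name, the URL/extension checks, and the item-splitting rule are simplifications I am assuming; the authoritative detection signals and parsing rules live in references/audit-mode.md):

```python
import re

AUDIT_PHRASES = ("audit my list", "compare against", "gap analysis")

def detect_mode(args: str) -> str:
    """Return "Audit" if any reference-list signal or audit language is present."""
    text = args.lower()
    has_sheet_url = "docs.google.com/spreadsheets" in text
    has_csv_path = bool(re.search(r"\S+\.csv\b", text))
    # Treat 3+ newline/semicolon-separated items as an inline reference list.
    has_inline_list = len([p for p in re.split(r"[\n;]", args) if p.strip()]) >= 3
    if has_sheet_url or has_csv_path or has_inline_list or any(p in text for p in AUDIT_PHRASES):
        return "Audit"
    return "Discovery"
```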
| Field | Required | Source |
|---|---|---|
| Business type / vertical | Yes | User input ("dentists", "SaaS CRM tools") |
| Geography | Yes (except SaaS) | User input ("Florida", "Austin TX", "nationwide") |
| Reference list | Audit mode only | Google Sheet URL, CSV path, or inline |
| Qualification criteria | Optional | User input ("must have website", "10+ reviews") |
| Output preference | Optional | User input ("quick summary", "full dataset") |
If both type and geography are clear from $ARGUMENTS, confirm briefly and
proceed: "Finding dentists in Florida..." (or "Auditing your list against
dentists in Florida..." in Audit mode)
If partial or ambiguous, ask one combined question (counts as 1 of max 2 AskUserQuestion prompts):
Use AskUserQuestion with up to 3 questions:
Skip questions already answered by $ARGUMENTS.
Depth modes (determines how much work each step does):
| Depth | Discovery | Enrichment | Verification | Distribution |
|---|---|---|---|---|
| Quick scan | All sources, 1 pass | Skip (or top 5 only) | Top 5 entities | Offer |
| Comprehensive | All sources + fallback retries | Full | All entities | Offer |
Read references/vertical-presets.md and match the user's business type against
preset trigger keywords.
| Match | Action |
|---|---|
| Clear match | Load that preset's WSA routing and query pattern |
| Partial match | Confirm: "This looks like Healthcare. Use healthcare presets?" |
| No match | Use Custom preset with user's keywords |
| SaaS match | Switch to non-geographic pipeline (no geo-tiling) |
Note which discovery WSAs and enrichment WSAs the preset specifies.
Skip this step for SaaS vertical (no geography needed).
| Geography level | Tiling strategy |
|---|---|
| City | Single query, no tiling |
| Metro area | Single query per WSA |
| State | Tile by top 5-10 metros in the state |
| Region | Tile by states, then top metros per state |
| Nationwide | Tile by all states, then top metros per state |
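The tiling table above can be sketched as code (function and parameter names are illustrative; the metro lookup table is assumed to come from a reference file):

```python
def tiling_targets(level: str, geo: str, top_metros: dict[str, list[str]]) -> list[str]:
    """Expand a geography into per-query targets, following the tiling table."""
    if level in ("city", "metro"):
        return [geo]                                 # single query, no tiling
    if level == "state":
        return top_metros[geo][:10]                  # top 5-10 metros in the state
    # "region": top_metros holds only that region's states; "nationwide": all states
    return [metro for state in top_metros for metro in top_metros[state][:8]]
```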
Estimate API calls: metros * discovery_wsas * (1 + enrichment_ratio) where
enrichment_ratio is ~0.3. Follow the Scaled Execution pattern from
references/nimble-playbook.md to choose execution tier (individual / batch /
multi-batch / confirmation gate):
Estimated API calls: ~1,560 (50 states x 8 metros x 3 WSAs + enrichment)
This is a nationwide search. Proceed? [Y/n]
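The estimate, including the nationwide example above, can be reproduced with a small helper (the function name is illustrative):

```python
def estimate_api_calls(metros: int, discovery_wsas: int, enrichment_ratio: float = 0.3) -> int:
    """One discovery call per metro per WSA, plus ~30% enrichment follow-ups."""
    return round(metros * discovery_wsas * (1 + enrichment_ratio))

# Nationwide: 50 states x 8 metros each, 3 discovery WSAs -> ~1,560 calls
calls = estimate_api_calls(50 * 8, 3)
```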
Derive a slug for checkpointing: lowercase, hyphenated, includes vertical + geo
(e.g., dentists-florida, saas-crm-tools, hvac-nationwide).
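A minimal slug helper matching those examples (the function name is illustrative):

```python
import re

def derive_slug(vertical: str, geo: str = "") -> str:
    """Lowercase, hyphenated slug from vertical + geography (geo is empty for SaaS)."""
    return re.sub(r"[^a-z0-9]+", "-", f"{vertical} {geo}".lower()).strip("-")
```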
Follow the Checkpointing & Resume pattern from references/memory-and-distribution.md.
Check: cat ~/.nimble/memory/market-finder/checkpoints/{slug}/discovery.json 2>/dev/null
For each target domain in the selected vertical preset, discover current WSAs:
nimble agent list --search "{domain}" --limit 20
Run these searches simultaneously (one per target domain). From the results:
- Prefer WSAs with managed_by: "nimble" over managed_by: "community".
- If no WSA matches a target domain, use the nimble search fallback (see below).
- If discovery yields nothing at all, fall back to nimble search for all metros.

Then validate each discovered WSA's input params:
nimble agent get --template-name {discovered_name}
Cache the discovered WSA names + params for the rest of the run.
For each metro in the tiling plan, run the discovered WSAs simultaneously:
nimble agent run --agent {maps_wsa} --params '{...validated params...}'
nimble agent run --agent {yelp_wsa} --params '{...validated params...}'
Run tertiary domain WSAs only if the preset includes them AND primary + secondary return < 10 combined unique results for that metro.
Choose execution tier per the Scaled Execution pattern in
references/nimble-playbook.md (based on total estimated calls from Step 3).
SaaS skips WSA discovery. Run the two-pass search queries defined in the SaaS
preset from references/vertical-presets.md:
Both passes run simultaneously. Pass 2 is critical -- without it, funding and traction data will be missing or wrong.
If no WSA was found for a target domain, or if a WSA fails for any metro:
nimble search --query "[type] in [metro]" --max-results 20 --search-depth lite
After discovery:
- Deduplicate per the Entity Deduplication pattern in references/nimble-playbook.md: key on place_id -> domain -> fuzzy name + city.
- Track source_count per entity (how many WSAs/sources found it).
- Save checkpoint: ~/.nimble/memory/market-finder/checkpoints/{slug}/discovery.json

Run enrichment using the WSAs discovered in Step 5a for the preset's enrichment target domains. Prioritize entities with the highest source count first. Choose execution tier per Scaled Execution in references/nimble-playbook.md.
nimble agent run --agent {enrichment_wsa} --params '{...validated params...}'
Only run enrichment WSAs that apply to the current vertical's enrichment targets
(see references/vertical-presets.md). Skip entities without the required ID/URL
for the enrichment WSA.
Save checkpoint: ~/.nimble/memory/market-finder/checkpoints/{slug}/enrichment.json
For SaaS entities, verify funding claims before reporting. Never label a company's funding stage without a source.
For each entity in the top results (top 5 in quick scan, all in comprehensive):
nimble search --query "{company name} funding raised series" --max-results 5 --search-depth lite
This step prevents publishing unverified financial claims. It's fast (one search per entity, lite depth) and catches recent funding rounds that directory sites miss.
Final deduplication: Run a final dedup pass across all phases following the
Entity Deduplication pattern from references/nimble-playbook.md. Merge fields
from multiple sources into a single record per entity.
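A simplified sketch of that merge (it keys exactly on name + city where the playbook calls for fuzzy matching, and assumes each raw record carries a source field; both are assumptions for illustration):

```python
def entity_key(e: dict) -> tuple:
    """Dedup key cascade: place_id, then domain, then name + city (exact here)."""
    if e.get("place_id"):
        return ("place_id", e["place_id"])
    if e.get("domain"):
        return ("domain", e["domain"].lower())
    return ("name_city", e.get("name", "").lower().strip(), e.get("city", "").lower())

def merge_entities(records: list[dict]) -> list[dict]:
    """Collapse duplicates into one record each, merging fields and counting sources."""
    merged: dict[tuple, dict] = {}
    for rec in records:
        entry = merged.setdefault(entity_key(rec), {"_sources": set()})
        if rec.get("source"):
            entry["_sources"].add(rec["source"])
        for field, value in rec.items():
            if field != "source" and value and not entry.get(field):
                entry[field] = value            # first non-empty value wins
    results = []
    for entry in merged.values():
        entry["source_count"] = len(entry.pop("_sources"))
        results.append(entry)
    return results
```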
Discovery strength scoring (skill-specific, varies by vertical):
Geographic verticals (Healthcare, Restaurants, Legal, Auto/Home, Custom):
| Level | Criteria |
|---|---|
| High | 3+ sources OR 2+ sources with reviews > 50 |
| Medium | 2 sources OR 1 source with reviews > 10 |
| Low | 1 source only, few/no reviews |
SaaS vertical (funding + directory presence matter more than review count):
| Level | Criteria |
|---|---|
| High | Verified funding > $10M OR 3+ directory sources OR 1000+ G2 reviews |
| Medium | Verified funding < $10M OR 2 sources OR 100+ G2 reviews |
| Low | 1 source only, no verified funding, few reviews |
Display as: *** High, ** Medium, * Low
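The two rubrics and the star display map directly to code (function names are illustrative; SaaS funding counts only once verified in the funding-verification step):

```python
STARS = {"High": "***", "Medium": "**", "Low": "*"}

def geo_strength(sources: int, reviews: int) -> str:
    """Strength for geographic verticals, per the first table above."""
    if sources >= 3 or (sources >= 2 and reviews > 50):
        return "High"
    if sources >= 2 or reviews > 10:
        return "Medium"
    return "Low"

def saas_strength(sources: int, verified_funding_usd: int, g2_reviews: int) -> str:
    """Strength for the SaaS vertical, per the second table above."""
    if verified_funding_usd > 10_000_000 or sources >= 3 or g2_reviews >= 1000:
        return "High"
    if verified_funding_usd > 0 or sources >= 2 or g2_reviews >= 100:
        return "Medium"
    return "Low"
```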
Skip this step in Discovery mode.
Read references/audit-mode.md for the full matching algorithm, normalization rules,
and output template.
Parse the reference list into {name, domain, city, state, phone} records per the parsing rules in references/audit-mode.md, then categorize every business:
- matched — in both reference list and discovery results
- discovered_only — found by discovery, not in reference list
- reference_only — in reference list, not found by discovery

Coverage score: matched / reference_count × 100.

Proceed to Step 8 with the categorized results.
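A domain-only sketch of the categorization and coverage score (the real matcher in references/audit-mode.md also normalizes and falls back to name + city; domain matching alone is an assumption here):

```python
def categorize(reference: list[dict], discovered: list[dict]) -> dict:
    """Split businesses into matched / discovered_only / reference_only by domain."""
    ref = {r["domain"].lower().removeprefix("www.") for r in reference if r.get("domain")}
    disc = {d["domain"].lower().removeprefix("www.") for d in discovered if d.get("domain")}
    matched = ref & disc
    return {
        "matched": sorted(matched),
        "discovered_only": sorted(disc - ref),
        "reference_only": sorted(ref - matched),
        "coverage_pct": round(100 * len(matched) / len(ref), 1) if ref else 0.0,
    }
```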
# Market Finder: [Business Type] in [Geography]
*Found [N] businesses | [Date] | Strength: [H] High, [M] Medium, [L] Low*
## Summary
- **Total discovered:** [N] unique businesses across [M] metros
- **Geographic breakdown:** [top 5 metros by count]
- **Source coverage:** [list each source used with entity counts]
## Top Results (High Strength)
| # | Name | Location | Rating | Reviews | Strength | Sources |
|---|------|----------|--------|---------|----------|---------|
| 1 | Acme Dental | Miami, FL | 4.8 | 312 | *** High | Maps, Yelp, BBB |
| 2 | WidgetCo Health | Orlando, FL | 4.6 | 89 | *** High | Maps, Yelp |
...
## All Results by Geography
### Miami, FL ([n] businesses)
[Table of businesses in this metro]
### Orlando, FL ([n] businesses)
[Table of businesses in this metro]
...
## What's Missing
[Data gaps: metros with low coverage, entities without websites, etc.]
SaaS output variant (when vertical is SaaS, replace "All Results by Geography" with tier-based grouping):
## Players by Tier
### Pure-Play (dedicated to this vertical)
| # | Name | Domain | Funding | Key Metric | Strength | Sources |
...
### Adjacent (feature overlap from larger platforms)
| # | Name | Domain | Funding | Key Metric | Strength | Sources |
...
### Open Source
| # | Name | Repo | Stars | Key Metric | Strength | Sources |
...
Source links are mandatory. Every entity must have at least one clickable source URL (Google Maps link, Yelp listing, website, BBB profile, G2 page, or GitHub repo).
Audit output variant (when in Audit mode, replace the Discovery output above):
Use the audit output template from references/audit-mode.md. Key sections: Summary
with coverage score, Matched table, Discovered Only table (expansion candidates),
Reference Only table (coverage gaps), and "What This Means" interpretation.
Make all Write calls simultaneously:

Discovery mode:
- ~/.nimble/memory/reports/market-finder-{slug}-{date}.md
- ~/.nimble/memory/market-finder/{slug}/entities.json

Audit mode:
- ~/.nimble/memory/reports/market-finder-audit-{slug}-{date}.md
- ~/.nimble/memory/market-finder/{slug}/audit-{date}.json (all three categories with match metadata)

Both modes:
- Update last_runs.market-finder in ~/.nimble/business-profile.json (only if profile exists).
- Per references/memory-and-distribution.md: update index.md rows for all affected entity files, append a log.md entry for this run.

Always offer distribution -- do not skip this step. Follow references/memory-and-distribution.md for connector detection, sharing flow, and source links enforcement.
Notion: full results table as a dated subpage. Slack: TL;DR with total count + top 10 entities only.
Discovery mode follow-ups:
Audit mode follow-ups:
Sibling skill suggestions:
Next steps:
- Run company-deep-dive for a full 360 profile on any business from this list
- Run competitor-positioning to compare top players in this market
- Run local-places for neighborhood-level discovery with social enrichment and maps
For large jobs, nimble agent run-batch handles WSA parallelism server-side (see
Scaled Execution in references/nimble-playbook.md). Sub-agents are useful for
preparing batch inputs and processing results, not for running individual
WSA calls.
Use nimble-researcher agents (agents/nimble-researcher.md) when:
Follow the sub-agent spawning rules from references/nimble-playbook.md
(bypassPermissions, batch max 4, fallback on failure).
Check at startup: echo $CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS
Team mode (flag set): Spawn teammates for parallel phases:
Solo mode (flag not set): Standard sequential flow from Steps 5-8.
See references/nimble-playbook.md for the standard error table (missing API key, 429, 401, empty results, extraction garbage). Skill-specific: if a search returns poor results, retry with the --focus flag. If still failing, retry with a simplified query. Log the failure but don't skip the entire search category -- partial data is better than none.