Searches live web via Nimble CLI for competitor news, product launches, hiring signals, funding; compares prior findings to highlight updates. For competitive intelligence requests.
```
npx claudepluginhub nimbleway/agent-skills --plugin nimble
```
Real-time competitive intelligence powered by Nimble's web data APIs.
Gathers competitive intelligence from websites, social media, ads, news, reviews, job postings, and product updates; analyzes positioning, identifies patterns, and generates strategic recommendations.
Monitors competitor signals (product, pricing, hiring, partnerships, messaging) and generates structured weekly intelligence with diffs, threat levels, implications, and roadmap actions.
Generates sourced 360° reports on specific companies using real-time Nimble web data APIs, covering funding, leadership, products/tech, market position, news, and strategic outlook. Activates on company research queries.
User request: $ARGUMENTS
Before running any commands, read references/nimble-playbook.md for Claude Code
constraints (no shell state, no &/wait, sub-agent permissions, communication style).
Run the preflight pattern from references/nimble-playbook.md (5 simultaneous Bash
calls: date calc, today, CLI check, profile load, index.md load).
From the results: if no business profile exists, run onboarding per
references/profile-and-onboarding.md and stop. Read
~/.nimble/memory/competitors/index.md to identify which
competitor files exist and their last-updated dates. If the index doesn't exist
(first run or upgrade), fall back to reading all ~/.nimble/memory/competitors/*.md
directly — the index is an optimization, not a gate. Then load the relevant
competitor files for known signals
(used for dedup in Steps 3 + 5). Follow cross-references ([[path/entity]] links)
to load related context. Determine mode using smart date windowing
from references/nimble-playbook.md:
If last_runs.competitor-intel is today, check if a report
already exists at ~/.nimble/memory/reports/competitor-intel-[today].md. If so,
ask: "Already ran today. Run again for fresh data?" Don't silently re-run.

Note: Step 2 (WSA Discovery) runs after onboarding but before any research.
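The mode decision above can be sketched in plain bash. This is a sketch only: it assumes GNU date, ISO dates, and the 14-day threshold that separates full mode from quick-refresh mode later in this skill.

```shell
# Sketch: smart date windowing from a last-run date (ISO, read from
# last_runs.competitor-intel in the business profile) and today's date.
mode_for() {
  local last_run=$1 today=$2
  [ -z "$last_run" ] && { echo full; return; }   # first run -> full mode
  local days=$(( ( $(date -ud "$today" +%s) - $(date -ud "$last_run" +%s) ) / 86400 ))
  if [ "$days" -eq 0 ]; then
    echo same-day        # a report may already exist -> ask before re-running
  elif [ "$days" -gt 14 ]; then
    echo full
  else
    echo quick-refresh
  fi
}
```

For example, `mode_for 2025-01-01 2025-01-20` yields `full`, while a run nine days after the last one yields `quick-refresh`.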
Prompt 1 — ask in plain text (NOT AskUserQuestion with options):
"What's your company's website domain? (e.g., acme.com)"
Verify — make two Bash calls simultaneously:
```
nimble search --query "[domain]" --include-domain '["[domain]"]' --max-results 3 --search-depth lite
nimble search --query "[domain] company" --max-results 5 --search-depth lite
```

Prompt 2 — confirm company + choose competitor method (use AskUserQuestion):
I found that [Company] ([domain]) is [brief description]. Is this right? And how should I find your competitors?
- Yes — find competitors for me
- Yes — I'll list them myself
- Wrong company — let me clarify
If "find competitors", make three Bash calls simultaneously:
```
nimble search --query "[Company] competitors" --max-results 10 --search-depth lite
nimble search --query "[Company] vs" --max-results 10 --search-depth lite
nimble search --query "[Company] alternatives" --max-results 5 --search-depth lite
```

Propose the list. Once the user confirms, create the profile and start Steps 2+3.
When creating the profile, also ask for or infer each competitor's domain and the
user's industry keywords. See references/profile-and-onboarding.md for the full
profile schema (company, competitors with domains/categories, industry_keywords,
integrations, preferences).
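A hypothetical sketch of what ~/.nimble/business-profile.json might hold, built only from field names mentioned in this document (company, competitors with domains/categories, industry_keywords, integrations, preferences, preferences.skip_competitors, last_runs.competitor-intel). The nesting is illustrative; the authoritative schema is in references/profile-and-onboarding.md.

```json
{
  "company": { "name": "Acme", "domain": "acme.com" },
  "competitors": [
    { "name": "RivalCo", "domain": "rivalco.com", "category": "direct" }
  ],
  "industry_keywords": ["web data", "competitive intelligence"],
  "integrations": [],
  "preferences": { "skip_competitors": [] },
  "last_runs": { "competitor-intel": "2025-01-15" }
}
```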
For each competitor domain and the user's domain, discover available WSAs:
```
nimble agent list --search "{domain}" --limit 20
```
Run one search per domain simultaneously. From the results, filter for WSAs with
entity_type matching SERP or PDP, prefer managed_by: "nimble", and validate
each with nimble agent get --template-name {name}. Cache discovered WSA names +
params for the run. Use discovered WSAs alongside nimble search in Steps 3-4
for richer data. If no WSAs found, continue with nimble search alone.
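The filter step above might be sketched with jq, assuming the agent list can be rendered as JSON carrying entity_type, managed_by, and template_name fields. That output shape is an assumption; inspect what `nimble agent list` actually returns before relying on it.

```shell
# Hypothetical filter: keep SERP/PDP WSAs managed by "nimble".
# Assumes a JSON array input; the real CLI output shape may differ.
filter_wsas() {
  jq -r '.[]
    | select(.entity_type == "SERP" or .entity_type == "PDP")
    | select(.managed_by == "nimble")
    | .template_name'
}

# Sample input standing in for discovered agents:
echo '[
  {"template_name": "google_serp", "entity_type": "SERP", "managed_by": "nimble"},
  {"template_name": "shop_pdp",    "entity_type": "PDP",  "managed_by": "nimble"},
  {"template_name": "custom_blog", "entity_type": "BLOG", "managed_by": "user"}
]' | filter_wsas
```

Each surviving name would then be validated with `nimble agent get --template-name {name}` as described above.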
Use --include-domain to avoid noise from generic company names. Make two Bash calls:
```
nimble search --query "product updates OR changelog OR releases" --include-domain '["[company-domain]"]' --start-date "[start-date]" --max-results 5 --search-depth lite
nimble search --query "[UserCompany] news" --focus news --start-date "[start-date]" --max-results 5 --search-depth lite
```

Fallback if < 3 results:

```
nimble search --query "blog" --include-domain '["[company-domain]"]' --max-results 5 --search-depth lite
```
Read references/competitor-agent-prompt.md for the full agent prompt template.
Follow the sub-agent spawning rules from references/nimble-playbook.md
(bypassPermissions, batch max 4, explicit Bash instruction, fallback on failure).
Spawn nimble-researcher agents (agents/nimble-researcher.md) with
mode: "bypassPermissions". Customize the prompt template with each competitor's
name, domain, start-date, known signals from memory (loaded in Step 0), and any
discovered WSA names from Step 2 so agents can use them for enrichment.
Call estimation & Scaled Execution: Before launching agents, estimate total API
calls: ~6 searches per competitor × N competitors + ~2 industry searches + extractions.
For 2+ competitors (12+ calls), tell agents to use extract-batch for page extractions
instead of individual calls. See the Scaled Execution pattern in
references/nimble-playbook.md for tier selection.
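The estimate above reduces to simple arithmetic; a sketch:

```shell
# ~6 searches per competitor plus ~2 industry searches; page extractions
# come on top of this estimate.
estimate_calls() {
  echo $(( 6 * $1 + 2 ))
}

# 12+ estimated calls (i.e. 2+ competitors) -> tell agents to use extract-batch.
extraction_tier() {
  if [ "$(estimate_calls "$1")" -ge 12 ]; then echo extract-batch; else echo individual; fi
}
```

So `extraction_tier 2` selects extract-batch (14 estimated calls), while a single competitor stays on individual extractions.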
Also run industry searches directly (not in sub-agents), using industry_keywords
from the business profile:
```
nimble search --query "[industry_keyword] AI agents OR automation" --focus news --start-date "[start-date]" --max-results 5 --search-depth lite
nimble search --query "[industry_keyword] regulation OR compliance OR pricing" --focus news --start-date "[start-date]" --max-results 5 --search-depth lite
```

Extract signals that need date verification OR richer detail. See
references/nimble-playbook.md → "Signal Date Validation" → "Verification Budget"
for the full rules.
Must extract:
- DATE_CONFIDENCE: LOW — event date needs verification from page content
- SOURCE_TYPE: DERIVATIVE — confirm the event date from the actual page content

Extract if useful:
Skip: P3 signals with DATE_CONFIDENCE: HIGH.
Make one Bash call per URL, all simultaneously:
```
nimble extract --url "https://..." --format markdown
```
For extraction failures, follow the fallback in references/nimble-playbook.md.
When reading extracted content, determine the actual event date from the article body (not just the page header date). Look for: explicit dates tied to the event, temporal language ("last September", "in Q3"), and datelines.
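A grep-based heuristic can surface explicit date strings as candidates, but it is only a first pass: relative phrases like "last September" or "in Q3" still require reading the body.

```shell
# Heuristic sketch: pull explicit date strings out of an extracted page.
# Catches "March 3, 2025" style datelines and ISO dates, nothing relative.
candidate_dates() {
  grep -oE '(January|February|March|April|May|June|July|August|September|October|November|December) [0-9]{1,2}, [0-9]{4}|[0-9]{4}-[0-9]{2}-[0-9]{2}' "$1" \
    | sort -u
}
```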
Before building the report, validate every signal's freshness. See
references/nimble-playbook.md → "Signal Date Validation" for the full pattern.
For each signal from Step 3, classify it:
| Check | Result | Action |
|---|---|---|
| EVENT_DATE within freshness window + not in memory | NEW | Include |
| EVENT_DATE within window + updates a known signal | UPDATED | Include as update |
| EVENT_DATE outside freshness window | STALE | Drop — old event, new article |
| DATE_CONFIDENCE: LOW + couldn't verify in Step 4 | UNCERTAIN | Drop with note |
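The table above can be sketched as one decision function. ISO dates (YYYY-MM-DD) compare correctly as plain strings, so no date parsing is needed.

```shell
# Sketch of the validation decision; all dates ISO (YYYY-MM-DD).
classify_signal() {
  local event_date=$1 window_start=$2 in_memory=$3 confidence=$4 verified=$5
  if [ "$confidence" = "LOW" ] && [ "$verified" != "yes" ]; then
    echo UNCERTAIN; return    # couldn't verify the event date -> drop with note
  fi
  if [[ "$event_date" < "$window_start" ]]; then
    echo STALE; return        # old event, new article -> drop
  fi
  if [ "$in_memory" = "yes" ]; then echo UPDATED; else echo NEW; fi
}
```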
P1 corroboration (mandatory) — any P1 signal with NEEDS_CORROBORATION: true MUST
be corroborated before it can enter the report. This is a hard gate, not a suggestion.
For each flagged P1, run:
```
nimble search --query "[Company] [event summary]" --max-results 5 --search-depth lite
```
Look for the primary source (company blog, press release, official filing). If the primary source dates the event outside the freshness window, reclassify as STALE. If no primary source is found, reclassify as UNCERTAIN and drop.
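The corroboration gate described above can be sketched as:

```shell
# Hard gate for P1 signals flagged NEEDS_CORROBORATION: true.
p1_gate() {
  local needs=$1 primary_found=$2 primary_in_window=$3
  [ "$needs" != "true" ] && { echo INCLUDE; return; }
  [ "$primary_found" != "yes" ] && { echo UNCERTAIN; return; }  # no primary source -> drop
  if [ "$primary_in_window" = "yes" ]; then echo INCLUDE; else echo STALE; fi
}
```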
Drop rules:
After validation, you should have a clean list of NEW and UPDATED signals only.
Full mode (first run or > 14 days since last) — structured briefing:
Quick refresh mode (last run < 14 days) — short format:
Core rules:
~/.nimble/memory/competitors/*.md — only surface NEW findings.

Only persist signals that passed Step 5.5 validation (classified as NEW or UPDATED). Do not write STALE or UNCERTAIN signals to competitor memory files.
Make all Write calls simultaneously:
- ~/.nimble/memory/reports/competitor-intel-[date].md (save the full briefing, not a summary — this is the local source of truth)
- ~/.nimble/memory/competitors/[name].md (use the format documented in references/memory-and-distribution.md). Add [[path/entity]] cross-references for relationships discovered during research (e.g., key people → [[people/name]], related competitors → [[competitors/name]]).
- last_runs.competitor-intel in ~/.nimble/business-profile.json
- Per references/memory-and-distribution.md: update index.md rows for all affected entity files, append a log.md entry for this run.

If 3+ competitors were researched in this run, OR the existing
~/.nimble/memory/synthesis/competitive-landscape.md has stale source timestamps
(source entity files were updated since generation), generate or refresh the synthesis
page.
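The staleness trigger can be sketched with file modification times, as a stand-in for comparing the recorded source timestamps:

```shell
# Regenerate the synthesis page when it's missing or any source entity
# file has been updated since it was generated.
needs_refresh() {
  local synthesis=$1; shift
  [ -f "$synthesis" ] || { echo yes; return; }
  local f
  for f in "$@"; do
    [ "$f" -nt "$synthesis" ] && { echo yes; return; }
  done
  echo no
}
```

Usage: `needs_refresh ~/.nimble/memory/synthesis/competitive-landscape.md ~/.nimble/memory/competitors/*.md`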
Use the nimble-analyst agent (agents/nimble-analyst.md) with
mode: "bypassPermissions" to synthesize patterns across all competitor files. The
agent should read all ~/.nimble/memory/competitors/*.md files and produce a
competitive-landscape.md following the format in
references/memory-and-distribution.md — market map, feature comparison, pricing
comparison, key patterns, and strategic implications. Cite source entity files with
[[competitors/name]] links.
Also append any unanswered questions to ~/.nimble/memory/backlog.md
(e.g., competitors where key data like pricing or funding is missing).
After generating, update index.md with the synthesis page entry.
Always offer distribution — do not skip this step. Follow
references/memory-and-distribution.md for connector detection, sharing flow, and
source links enforcement.
preferences.skip_competitors
competitors, create memory stub

Sibling skill suggestions:
Next steps:
- Run competitor-positioning to analyze how competitors present themselves online
- Run company-deep-dive for a full 360 profile on any competitor from this report
- Run meeting-prep if you're meeting with someone at a competitor
Check at startup: echo $CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS
Team mode (flag set): Spawn full teammates instead of sub-agents:
use references/competitor-agent-prompt.md with discovered WSAs —
teammates can message each other when they find overlapping signals.

Solo mode (flag not set): Standard sub-agent flow from Step 3.
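The startup check reduces to one environment test; a sketch:

```shell
# Flag check: experimental agent teams on -> team mode, else solo mode.
agent_mode() {
  if [ -n "${CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS:-}" ]; then
    echo team    # spawn full teammates
  else
    echo solo    # standard sub-agent flow
  fi
}
```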
See references/nimble-playbook.md for the standard error table (missing API key, 429,
401, empty results, extraction garbage). Skill-specific errors:
If a news-focused search returns nothing, retry without the --focus flag. If still failing, retry with a
simplified query (shorter terms, no date filter). Log the failure but don't skip
the competitor.