Researches individual competitors via web search to gather pricing, features, positioning, and reviews. Activates when the user wants to research a competitor, gather intel on a company, or asks 'what does [company] offer?' Covers multi-source intelligence gathering across official sites, review platforms, and community discussions.
From founder-os. Install: `npx claudepluginhub thecloudtips/founder-os --plugin founder-os`. This skill uses the workspace's default tool permissions.
references/query-patterns.md
Gather competitive intelligence on companies via systematic web search. This skill provides a surface-scan research strategy that retrieves current pricing, features, positioning, customer reviews, and recent news for any competitor in 4-6 targeted searches.
Execute 4-6 targeted searches per competitor to gather intelligence across five dimensions. This surface-scan approach balances comprehensiveness with speed: enough queries to capture the key facts without exhaustive crawling.
Research dimensions (in order):

1. Pricing
2. Features
3. Reviews
4. Positioning
5. Recent news
For company name input (e.g., "Notion"), construct the domain from the company name or find it in results. For URL input (e.g., "https://notion.so"), extract the company name and domain from the URL.
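The input-handling rule above can be sketched as a small helper. This is an illustrative sketch, not part of the skill; `resolve_company` is a hypothetical name, and the `.com` guess for name-only input is a heuristic that should be confirmed against search results before use in `site:` queries.

```python
from urllib.parse import urlparse

def resolve_company(identifier: str) -> tuple[str, str]:
    """Return (company_name, domain) from either a company name or a URL.

    Heuristic sketch: the guessed domain must be verified in search
    results before being trusted.
    """
    if identifier.startswith(("http://", "https://")):
        # URL input: extract domain, derive a display name from it.
        host = urlparse(identifier).netloc.removeprefix("www.")
        name = host.split(".")[0].capitalize()  # "notion.so" -> "Notion"
        return name, host
    # Name input: guess "<name>.com" as a starting point only.
    return identifier, f"{identifier.lower().replace(' ', '')}.com"
```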
When the user provides multiple competitors (e.g., /founder-os:compete:matrix Notion Linear Asana), apply this skill independently to each company and merge outputs into a comparison structure. Process companies sequentially — complete all 5 dimensions for one company before starting the next to avoid cross-contamination of findings.
Use site: operators to target high-quality sources directly rather than relying on general search to surface them:
- Reviews: `site:g2.com {{company_name}}`, `site:capterra.com {{company_name}}`
- Pricing: `{{company_name}} pricing 2026`, `site:{{domain}} pricing`
- Positioning: `site:{{domain}}` (homepage), `site:{{domain}} about`
- News: `{{company_name}} funding OR launch OR "product update" 2026`

Drop `site:` when casting a wider net (e.g., features are often spread across documentation, blog posts, and landing pages). Quote exact phrases to match product language (e.g., "product update", "contact sales"). Disambiguate common-word company names with qualifiers (e.g., "Mercury" bank startup, not just Mercury).

Consult ${CLAUDE_PLUGIN_ROOT}/skills/compete/competitive-research/references/query-patterns.md for full templated queries for each dimension, including site-specific variants and matrix query patterns for multi-competitor research.
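The query templates above can be assembled programmatically. A minimal sketch, assuming `{{company_name}}` and `{{domain}}` have already been resolved; `build_queries` is a hypothetical helper, not part of the skill:

```python
def build_queries(company: str, domain: str, year: int = 2026) -> list[str]:
    """Assemble the surface-scan queries covering the five dimensions."""
    return [
        f"site:g2.com {company}",        # reviews
        f"site:capterra.com {company}",  # reviews, secondary platform
        f"{company} pricing {year}",     # pricing, general web
        f"site:{domain} pricing",        # pricing, official ground truth
        f"site:{domain} about",          # positioning
        f'{company} funding OR launch OR "product update" {year}',  # news
    ]
```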
Extract pricing data and normalize to a comparable format:
Normalized format: $X/user/month (billed annually) or $X/month flat or Contact Sales.
When pricing is ambiguous or contradictory across sources, use the official product website as ground truth and flag discrepancies.
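The normalization rule above can be sketched as a string mapper. This is a simplified illustration that assumes USD and common phrasings; `normalize_price` is a hypothetical helper, and real scraped text will need more cases:

```python
import re

def normalize_price(raw: str) -> str:
    """Map scraped pricing text onto the normalized format:
    "$X/user/month (billed annually)", "$X/month flat", or "Contact Sales".
    """
    text = raw.lower()
    if "contact" in text and "sales" in text:
        return "Contact Sales"
    match = re.search(r"\$(\d+(?:\.\d+)?)", raw)
    if not match:
        return "Not publicly listed"
    price = match.group(1)
    per_user = "per user" in text or "/user" in text or "per seat" in text
    annual = "annual" in text  # matches "annual" and "annually"
    unit = "/user/month" if per_user else "/month flat"
    suffix = " (billed annually)" if annual else ""
    return f"${price}{unit}{suffix}"
```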
Categorize extracted features into three buckets:
Source features from: product pages, feature comparison pages, G2 review "pros" sections, ProductHunt listings, and changelog or blog posts. Do not infer features that are not explicitly documented — mark uncertain claims with "(unverified)".
Extract and record for each platform found:
Normalize all scores to a /5.0 scale for cross-competitor comparison. G2 and Capterra are natively /5.0. ProductHunt uses upvotes — record upvote count separately and do not convert to a score. Compute a composite score by averaging all available /5.0 platform scores, weighted equally.
When review counts are low (<50 reviews on a platform), flag the score as low-confidence.
From the competitor's homepage and about page, extract:
Note the tone of the messaging (technical, aspirational, pragmatic, community-driven) — this signals where they position on the market sophistication spectrum.
Surface and record:
Limit to the last 90 days. Record each item with: date, headline (paraphrased if necessary), source, and a one-line significance note (e.g., "Signals move into enterprise segment" or "Reduces pricing moat vs. lower-tier competitors").
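The 90-day window above reduces to a simple date comparison. A minimal sketch, assuming news items carry ISO `YYYY-MM-DD` dates as specified in the output schema:

```python
from datetime import date, timedelta

def within_window(item_date: str, today: date, days: int = 90) -> bool:
    """True if the news item falls inside the recency window."""
    return date.fromisoformat(item_date) >= today - timedelta(days=days)
```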
Evaluate sources by reliability:
Prefer Tier 1-2 sources for pricing and feature claims. Accept Tier 3 for sentiment and complaint pattern analysis — acknowledge the source tier when citing. Discard Tier 4 sources unless no other source is available, in which case flag explicitly.
When the same fact appears across multiple sources:
Apply recency filters strictly:
After completing research for one competitor, structure findings as a data object. This object is consumed by the market-analysis skill or report generation commands:
```yaml
competitor_data:
  company: [name]
  domain: [url]
  research_date: [YYYY-MM-DD]
  pricing:
    model: [per_seat | flat | usage | freemium | enterprise_only | hybrid]
    plans:
      - name: [plan name]
        price_monthly: [number or "Contact Sales"]
        billing: [monthly | annual | annual_only]
        notes: [key limitations or inclusions]
    free_tier: [description or null]
    enterprise_pricing: [true | false]
    annual_discount_pct: [number or null]
  features:
    core:
      - [feature name]
    differentiating:
      - feature: [name]
        why_it_matters: [one-line explanation]
    gaps:
      - [missing or weak area]
  reviews:
    composite_score: [X.X / 5.0]
    platforms:
      - name: [G2 | Capterra | ProductHunt | Trustpilot]
        score: [X.X / 5.0 or upvote count]
        review_count: [number]
        low_confidence: [true | false]
    praise_themes:
      - [theme]
    complaint_themes:
      - [theme]
  positioning:
    tagline: [verbatim H1 text]
    value_prop: [subheadline text]
    target_audience: [description]
    messaging_angle: [core problem or transformation]
    proof_elements: [logos, user counts, awards]
    tone: [technical | aspirational | pragmatic | community-driven]
  news:
    recent_items:
      - date: [YYYY-MM-DD]
        headline: [text]
        source: [publication or URL]
        significance: [one-line note]
```
When running /founder-os:compete:matrix, produce one competitor_data object per company and pass the full array to the matrix formatter.
Apply these fallbacks when data is unavailable:
- Pricing not found: set `price_monthly: "Not publicly listed"` and note the last search attempted.
- No review platform data: set `composite_score: null` and note "no review platform data found".
- Sparse data overall: set `status: "limited_data"` in the output object.

Never fabricate pricing, feature claims, or review scores. When uncertain, use the phrase "unverified" and recommend the user check the source directly.
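The fallback rules above can be sketched as a post-processing pass over a `competitor_data` dict. Illustrative only; `apply_fallbacks` is a hypothetical helper, and the "sparse data" condition shown (no pricing and no reviews) is an assumption about when `limited_data` applies:

```python
def apply_fallbacks(data: dict) -> dict:
    """Fill unavailable fields with explicit fallback values
    instead of fabricated ones."""
    pricing = data.setdefault("pricing", {})
    for plan in pricing.get("plans", []):
        if plan.get("price_monthly") is None:
            plan["price_monthly"] = "Not publicly listed"
    reviews = data.setdefault("reviews", {})
    if not reviews.get("platforms"):
        reviews["composite_score"] = None  # "no review platform data found"
    # Assumed sparsity condition: neither pricing plans nor review data.
    if not pricing.get("plans") and not reviews.get("platforms"):
        data["status"] = "limited_data"
    return data
```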