Analyzes competitors' products, pricing, customer sentiment, GTM strategy, and growth signals using web data. Produces battle cards, pricing landscapes, and feature matrices for competitive intelligence.
`npx claudepluginhub ferdinandobons/startup-skill --plugin startup`

This skill uses the workspace's default tool permissions.
Deep competitive intelligence that goes beyond surface-level profiles. Produces actionable battle cards, pricing landscape analysis, and strategic vulnerability mapping using real web data.
INTAKE → RESEARCH (3 parallel waves) → SYNTHESIS → BATTLE CARDS
The process is focused: understand the product, research competitors deeply across 3 dimensions, synthesize findings, and produce actionable output. Typical runtime: 15-25 minutes in Claude Code (parallel agents), 30-45 minutes in Claude.ai (sequential).
Default output language is English. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
Short and focused — 1-2 rounds of questions, not an extended interview. The goal is just enough context to run targeted research.
Before asking questions, check if a startup-design session has already been completed for this project. Look for these files in the working directory or subdirectories:
- `01-discovery/competitor-landscape.md` — competitor profiles and analysis
- `01-discovery/market-analysis.md` — market size, trends, regulatory
- `01-discovery/target-audience.md` — customer personas, pain points
- `00-intake/brief.md` — product description and context

If these files exist, read them and use the data as a head start:
- `competitor-landscape.md` as the starting point for deeper analysis (startup-design profiles 5-8 competitors at surface level — this skill goes much deeper on each)
- `market-analysis.md` to contextualize the competitive landscape
- `target-audience.md` to focus the sentiment mining on what matters most

Tell the user: "I found data from a previous startup-design session. I'll use it as a starting point and go deeper on the competitive analysis."
Skip the intake interview entirely if the startup-design files provide enough context. Go straight to research.
Round 1 — The basics:
Round 2 — Sharpening (only if needed):
Don't over-interview. If the user gives a clear description upfront, skip straight to research. The competitive analysis itself will surface what matters.
Save to {project-name}/intake.md — a brief summary of the product, market, and known competitors. If built on startup-design data, note the source files used. The project name should be derived from the product/market (kebab-case, e.g., ai-email-assistant).
Create {project-name}/PROGRESS.md with: project name, skill name (startup-competitors), start date, language, research mode (Live / Knowledge-Based), and a phase checklist. Update it after each phase completes. If PROGRESS.md already exists from a previous session, resume from the last incomplete phase.
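A minimal sketch of what that checklist file could look like, assuming this skill's four phases (the field layout and phase names are illustrative, not prescribed):

```markdown
# Progress: ai-email-assistant

Skill: startup-competitors | Started: 2025-06-01 | Language: English | Research mode: Live

## Phases
- [x] Intake
- [ ] Research (Wave 1 / Wave 2 / Wave 3)
- [ ] Synthesis & battle cards
- [ ] Verification
```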
After intake, assess market complexity and present the Research Depth recommendation to the user.
Reference: Read references/research-scaling.md for the complexity scoring matrix, tier definitions, wave configurations, and the user communication template.
Present the recommendation using the user communication template (see research-scaling.md for the exact template). The selected tier determines the number of agents per wave and search rounds per agent in Phase 2; see research-scaling.md for the exact wave configurations per tier.
Three parallel research waves, each attacking the competitive landscape from a different angle. Together they produce a 360-degree view.
Check if the Agent tool is available: if it is, launch each wave's agents in parallel; if not, run the same agent templates as sequential blocks in the main conversation.
This skill requires WebSearch for real data. If WebSearch is unavailable or denied, fall back to Knowledge-Based Mode: use training data, mark all findings with [Knowledge-Based — verify independently], and reduce confidence ratings by one level.
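For illustration, a Knowledge-Based Mode finding might be recorded like this (the finding itself is invented; only the tag wording comes from the rule above):

```markdown
- Acme Mail moved its entry tier to $29/seat/mo in 2023
  [Knowledge-Based — verify independently] (confidence: Medium, reduced from High)
```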
Reference: Read references/research-principles.md before starting any wave. It defines source quality tiers, cross-referencing rules, and how to handle data gaps.
Reference: Read references/research-wave-1-profiles-pricing.md for agent templates.
Two agents (or two sequential blocks):
A1: Competitor Deep-Dives — Identify and profile 5-8 direct competitors plus 2-3 adjacent solutions (broader platforms, manual alternatives, tools from neighboring categories that compete for the same budget). For each: product, features, team size, funding, traction signals, strengths, weaknesses. Go beyond their marketing page — check reviews, job postings, and funding data.
A2: Pricing Intelligence — For each competitor: reverse-engineer the pricing model. Not just "it costs $49/mo" but: what's the value metric (per seat? per usage? flat?), how do tiers differentiate, what pricing psychology do they use (anchoring, decoy, charm pricing), what's the switching cost (technical, contractual, emotional). Build a tier-by-tier comparison.
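As a sketch of what A2's tier-by-tier comparison could look like (competitors, prices, and labels are placeholders):

```markdown
| Tier       | Acme Mail   | Beta Inbox | Value metric    | Psychology          |
|------------|-------------|------------|-----------------|---------------------|
| Entry      | $19/seat/mo | $29 flat   | seats vs. flat  | charm pricing       |
| Growth     | $49/seat/mo | $99 flat   | seats vs. flat  | anchor/decoy tier   |
| Enterprise | custom      | custom     | annual contract | sales-led anchoring |
```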
Reference: Read references/research-wave-2-sentiment-mining.md for agent templates.
Two agents (or two sequential blocks):
B1: Review Mining — Mine G2, Capterra, TrustRadius, Product Hunt, and App Store reviews for each competitor. Extract patterns: what do people praise? What do they complain about? What features do they request? Organize by competitor and by pain theme. Include verbatim quotes.
B2: Forum & Community Mining — Mine Reddit, Indie Hackers, Hacker News, Quora, and niche communities. Find: complaints about existing tools, "what do you use for X?" threads, migration stories, workaround discussions. Build a language map — the exact words customers use to describe their problems and desires. Identify churn signals — why people leave each competitor.
Reference: Read references/research-wave-3-gtm-signals.md for agent templates.
Two agents (or two sequential blocks):
C1: Go-to-Market Analysis — For each competitor: primary acquisition channel, sales motion (self-serve vs. sales-led), content strategy (blog frequency, topics, quality), social presence, paid advertising signals, partnership plays. Build a channel opportunity map showing competitor saturation vs. opportunity per channel.
C2: Strategic & Growth Signals — Funding trajectory (rounds, investors, timing), hiring patterns (engineering-heavy = building, sales-heavy = scaling, support-heavy = struggling), content/SEO footprint (what keywords they rank for, where the gaps are), product roadmap signals from changelogs and public statements. Identify content pillars each competitor owns and which topics nobody covers well.
After all three waves complete, before synthesis, briefly present what the research found to the user: how many competitors were profiled, the top customer pain themes, the most notable strategic signals (funding, hiring, GTM patterns). Ask: "Does this align with your expectations? Any competitors to add or remove before I synthesize?"
Keep it to one message — this is a quick alignment check, not a full report.
Reference: Read references/research-synthesis.md for the synthesis protocol and battle card template.
After the checkpoint, synthesize raw findings into strategic deliverables. This step creates the real value — it's not reporting, it's pattern-matching across data sources.
Every deliverable file must start with a standardized header: `# {Title}: {product}` followed by `*Skill: startup-competitors | Generated: {date}*`. Every deliverable must end with Red Flags, Yellow Flags, and Sources sections.
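Concretely, a deliverable for a hypothetical product would open and close like this (body sections elided):

```markdown
# Pricing Landscape: ai-email-assistant
*Skill: startup-competitors | Generated: 2025-06-01*

...

## Red Flags
## Yellow Flags
## Sources
```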
{project-name}/competitors-report.md — The main deliverable:
{project-name}/competitive-matrix.md — Feature comparison table:
{project-name}/pricing-landscape.md — Dedicated pricing analysis:
{project-name}/battle-cards/{competitor-name}.md — One per competitor:
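The authoritative battle card template is in research-synthesis.md; as a rough sketch, a card might be structured along these lines (section names other than the required header and closing sections are assumptions):

```markdown
# Battle Card: Acme Mail
*Skill: startup-competitors | Generated: 2025-06-01*

## Snapshot: product, pricing, funding, team
## Where they win / where they lose
## How we win: talk tracks and landmines

## Red Flags
## Yellow Flags
## Sources
```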
Keep raw research files in {project-name}/raw/ for reference:
- competitor-profiles.md
- pricing-intelligence.md
- review-mining.md
- forum-mining.md
- gtm-analysis.md
- strategic-signals.md
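Putting it together, a finished project folder would look roughly like this (competitor file names are placeholders):

```
ai-email-assistant/
├── intake.md
├── PROGRESS.md
├── competitors-report.md
├── competitive-matrix.md
├── pricing-landscape.md
├── verification-report.md
├── battle-cards/
│   ├── acme-mail.md
│   └── beta-inbox.md
└── raw/
    ├── competitor-profiles.md
    ├── pricing-intelligence.md
    ├── review-mining.md
    ├── forum-mining.md
    ├── gtm-analysis.md
    └── strategic-signals.md
```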
After synthesis completes and all deliverable files are written, run a verification pass. Reference: Read references/verification-agent.md for the full verification protocol, universal checks, and skill-specific checks.
Write the results to {project-name}/verification-report.md. In Claude.ai, or when the Agent tool is unavailable, run the verification checks yourself in the main conversation, following the same protocol.
Reference: Read references/honesty-protocol.md for the full protocol and anti-pattern details.
Competitive intelligence is only useful if it's honest. Core rules apply (label claims, quantify, declare gaps), plus competitive-intelligence-specific additions:
See references/honesty-protocol.md for the full anti-pattern table (6 entries) and detailed protocol.
Read only what you need for the current phase.
| File | When to Read | ~Lines | Purpose |
|---|---|---|---|
| honesty-protocol.md | Start of session | ~72 | Full honesty protocol with anti-patterns |
| research-principles.md | Before starting Phase 2 | ~54 | Source quality, cross-referencing, data gaps |
| research-wave-1-profiles-pricing.md | When running Wave 1 | ~186 | Agent templates for profiles + pricing |
| research-wave-2-sentiment-mining.md | When running Wave 2 | ~189 | Agent templates for review + forum mining |
| research-wave-3-gtm-signals.md | When running Wave 3 | ~192 | Agent templates for GTM + strategic signals |
| research-synthesis.md | After all waves complete | ~231 | How to synthesize + battle card template |
| research-scaling.md | After intake, before Phase 2 | ~80 | Complexity scoring, tier definitions, wave configurations |
| verification-agent.md | After synthesis | ~85 | Verification protocol, universal + skill-specific checks |