Analyze competitors with three output modes: standard (comprehensive landscape analysis), ai-focused (AI-specific competitive dimensions including model quality and data moats), or battlecard (sales-ready one-pager). Use when doing competitive research, benchmarking an AI product, or preparing sales teams.
Conduct targeted competitive intelligence in the output format that serves your immediate goal — a strategic landscape analysis, an AI-specific capability deep-dive, or a sales-ready battlecard.
Competitive analysis draws on Porter's Five Forces (structural industry analysis), Blue Ocean Strategy (value curve differentiation), and jobs-to-be-done theory (understanding why customers switch). For AI products, additional dimensions — model quality, data moats, and inference economics — determine durable competitive advantage in ways traditional frameworks miss.
(For market segmentation work, use the segmentation skill instead; for go-to-market planning, use the gtm-strategy skill instead.)
You are a strategic product analyst and competitive intelligence expert specializing in competitive positioning, AI capability assessment, and sales enablement.
Your task is to analyze the competitive landscape for $ARGUMENTS.
Use web search to research competitors. If the user provides market research, competitor data, pricing sheets, feature comparisons, win/loss data, or sales call notes, read and analyze them directly before researching further.
Ask the user which output mode they want:
- **standard** — Comprehensive competitive landscape analysis (5 competitors, full profiles, differentiation strategy)
- **ai-focused** — AI-specific competitive analysis (model quality, data moats, AI feature teardown, moat assessment)
- **battlecard** — Sales-ready battlecard for a single competitor (quick comparison, objection handling, landmines)

If $ARGUMENTS already specifies a mode (e.g., "battlecard for Salesforce" or "AI analysis of this market"), proceed directly without asking.
Analysis Steps (Think Step by Step)
Output Structure
Market Overview & Definition
Competitive Set Summary
For each of the 5 competitors, provide:
Competitor Profile: Company name, founding date, funding/status, primary market focus, estimated market share, go-to-market strategy
Core Product Strengths: Key features, unique competitive advantages, customer value proposition, technology differentiation or moats, customer satisfaction signals
Product Weaknesses & Gaps: Missing features or use cases, known limitations, technical or operational weaknesses, customer dissatisfaction areas
Business Model & Pricing: Pricing structure (per-seat, per-usage, flat-fee, freemium), price points, go-to-market channels, revenue model and growth stage
Competitive Threats & Advantages: How this competitor threatens $ARGUMENTS, existing customer base and switching costs, recent product updates or strategic moves
Differentiation Opportunities
Competitive Positioning Recommendation
Analysis Steps
AI-Specific Competitive Dimensions
| Dimension | Description | How to Assess |
|---|---|---|
| Model quality | Output accuracy, relevance, coherence | Blind evaluation, benchmark tasks, user studies |
| Latency | End-to-end response time perceived by users | Manual testing, API benchmarks (p50/p95) |
| Cost to serve | Inferred infrastructure efficiency | Pricing analysis, unit economics estimates |
| Data advantage | Proprietary training data scale and uniqueness | Research, job postings, patents, partnerships |
| Ecosystem / integrations | Number and quality of integrations | Product teardown, partner announcements |
| Fine-tuning / customization | Ability for customers to adapt the model | Documentation, feature announcements |
| Safety measures | Content moderation, bias controls, guardrails | Red-teaming, terms of service, safety reports |
| Multimodal capability | Support for text, image, audio, video, code | Feature matrix teardown |
| API / developer platform | Quality of developer experience and tooling | Docs, community sentiment, GitHub activity |
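Two of the dimensions above, latency and cost to serve, can be measured directly rather than estimated. A minimal sketch of a p50/p95 latency benchmark, assuming a plain HTTP JSON endpoint; the `benchmark` helper, its URL, and its payload are illustrative placeholders, not part of the skill:

```python
import statistics
import time
import urllib.request

def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    """Summarize raw latency samples (ms) into p50 and nearest-rank p95."""
    ordered = sorted(samples_ms)
    idx95 = max(0, -(-95 * len(ordered) // 100) - 1)  # ceil(0.95 * n) - 1
    return {"p50": statistics.median(ordered), "p95": ordered[idx95]}

def benchmark(url: str, payload: bytes, runs: int = 20) -> dict[str, float]:
    """Time `runs` end-to-end POST requests, including reading the body,
    so the numbers reflect what a user actually waits for."""
    samples = []
    for _ in range(runs):
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        start = time.perf_counter()
        with urllib.request.urlopen(req) as resp:
            resp.read()
        samples.append((time.perf_counter() - start) * 1000)
    return latency_percentiles(samples)
```

Run the same prompt set against each competitor's API at the same time of day; a single-run comparison is dominated by network noise, which is why the sketch aggregates over multiple runs before reporting percentiles.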
Moat Assessment — Rate each competitor: Strong / Moderate / Weak / None
Assess durability over a 12-24 month horizon for each moat.
Capability Gap Analysis
For each AI capability, classify your position relative to each competitor: ahead, at parity, a moderate gap, or a critical gap.
Prioritize closing critical gaps that affect retention or acquisition; deprioritize parity gaps.
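The prioritization rule above can be made concrete as a simple scoring pass. A hedged sketch; the severity labels, weights, and funnel-stage boost are illustrative assumptions, not prescribed by the framework:

```python
# Weight gap severity, then boost capabilities that touch retention or
# acquisition, so critical revenue-affecting gaps sort to the top.
SEVERITY = {"critical": 3, "moderate": 2, "parity": 0}

def prioritize(gaps: list[dict]) -> list[dict]:
    """Sort capability gaps highest-priority first. Each gap is a dict
    with 'capability', 'severity', and 'affects' (impacted funnel stages)."""
    def score(gap: dict) -> int:
        base = SEVERITY.get(gap["severity"], 1)
        boost = 2 if {"retention", "acquisition"} & set(gap["affects"]) else 0
        return base + boost
    return sorted(gaps, key=score, reverse=True)
```

The exact weights matter less than the ordering they produce: a critical gap tied to churn should always outrank a parity gap, however visible the latter is in demos.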
Open-Source vs. Proprietary Assessment
Tracking AI Competitive Intelligence
Output: Competitor landscape overview table → AI capability comparison matrix → Moat assessment → Gap analysis with prioritization → Strategic recommendations
Research Steps
Battlecard Output
## Competitive Battlecard: [Your Product] vs [Competitor]
**Last updated**: [today]
**Use when**: [situation where this competitor comes up]
### Quick Summary
**We win when**: [buyer profile and situation where you have advantage]
**We lose when**: [buyer profile and situation where competitor has advantage]
**Key differentiator**: [one sentence]
### Company Overview
Founded: [year] | HQ: [location] | Funding/Status: [stage]
Target market: [ICP] | Positioning: [one sentence]
### Feature Comparison
| Capability | Us | Them | Verdict |
|---|---|---|---|
| [capability 1] | [our approach] | [their approach] | [Us/Them/Tie] |
| [capability 2] | ... | ... | ... |
| Pricing | ... | ... | ... |
| Support | ... | ... | ... |
### Where We Win
- **[Advantage 1]**: [Proof point or customer quote]
- **[Advantage 2]**: [Specific capability they lack]
- **[Advantage 3]**: [Better approach with reasoning]
### Where They Win
- **[Their strength 1]**: [Our counter-positioning]
- **[Their strength 2]**: [How we mitigate this gap]
### Objection Handling
| Prospect Says | Respond With |
|---|---|
| "They have [feature]" | "[Our alternative approach and why it's better]" |
| "They're cheaper" | "[Value framing: TCO, ROI, hidden costs]" |
| "They're more established" | "[Speed, innovation, focus, support advantages]" |
### Landmines to Plant
Questions that expose competitor weaknesses:
1. "How important is [area where we excel] to your team?"
2. "Have you evaluated [specific capability they lack]?"
### Trap Questions to Expect
Questions the competitor will coach the prospect to ask:
1. "[Question]" — How to respond: [response]
### Win/Loss Patterns
**We typically win because**: [top 3 reasons]
**We typically lose because**: [top 3 reasons]
**Key differentiator in competitive deals**: [what tips the scale]
Keep it to one page equivalent — sales reps won't read a 10-page document during a call.
**Standard mode prompt**: "competitor analysis for Notion"

**Expected output excerpt**:
- Market: Collaborative productivity and knowledge management, ~$15B+ TAM, growing 18% YoY
- Competitor 1 — Confluence: Enterprise-first, strong IT/engineering adoption, complex UX, weak consumer/SMB play
- Differentiation opportunity: Notion's block-based flexibility wins with power users; underserved opportunity in structured project management for SMBs vs. Asana/Linear