Analyze competitors for portfolio propositions — competitive landscape, battle cards, positioning, differentiation. Use whenever the user mentions competitors, competitive analysis, "who else does this", SWOT, win/loss, how a proposition stacks up, or wants to understand competitive positioning in a market — even if they don't say "compete" explicitly.
From cogni-portfolio

npx claudepluginhub cogni-work/insight-wave --plugin cogni-portfolio

This skill is limited to using the following tools:
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, Flutter. Aids planning, building, reviewing interfaces.
Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, Prisma. Use for setup questions, API references, and code examples.
Analyzes competition with Porter's Five Forces, Blue Ocean Strategy, and positioning maps to identify differentiation opportunities and market positioning for startups and pitches.
Analyze the competitive landscape for each proposition (Feature x Market combination). Competitors are proposition-specific because the same feature competes against different players in different markets.
Competitive analysis is scoped to propositions, not features or markets alone. A "cloud monitoring" feature may compete against Datadog in mid-market SaaS but against Splunk in enterprise fintech. The competitive positioning and differentiation are always market-dependent.
List existing propositions (read the propositions/ directory in the project root) and identify those without corresponding competitor files in competitors/. If no propositions exist yet, tell the user they need to create propositions first (via the propositions skill) before competitive analysis can begin.
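The gap check above can be sketched as a small helper — a sketch assuming one JSON file per proposition and the {feature-slug}--{market-slug} naming that competitor files share with their propositions:

```python
from pathlib import Path

def propositions_missing_competitors(project_root: str) -> list[str]:
    """Return proposition slugs that have no matching file in competitors/."""
    root = Path(project_root)
    prop_dir = root / "propositions"
    comp_dir = root / "competitors"
    # Guard against missing directories (e.g. before any competitor file exists)
    prop_slugs = {p.stem for p in prop_dir.glob("*.json")} if prop_dir.is_dir() else set()
    comp_slugs = {c.stem for c in comp_dir.glob("*.json")} if comp_dir.is_dir() else set()
    return sorted(prop_slugs - comp_slugs)
```

An empty propositions/ directory means the user must run the propositions skill first; a non-empty result is the candidate list to present.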
Present options to the user:
For each selected proposition, identify 3-5 relevant competitors. Three sources:
Internal context (pre-research): Before web research, check for context/context-index.json. Read entries in by_relevance["compete"] or by_category["competitive"]. Internal battlecards, win/loss reports, and RFP analyses provide ground truth that web research cannot find. Pass any matching context to the competitor-researcher agent as additional input alongside the proposition.
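A minimal sketch of the context lookup described above. The shape of individual index entries is an assumption — this document does not specify it, only the by_relevance and by_category buckets:

```python
import json
from pathlib import Path

def competitive_context_entries(project_root: str) -> list[dict]:
    """Collect internal context entries tagged as competitive, if an index exists."""
    index_path = Path(project_root) / "context" / "context-index.json"
    if not index_path.exists():
        return []  # no internal context; proceed with web research only
    index = json.loads(index_path.read_text())
    entries = index.get("by_relevance", {}).get("compete", [])
    entries += index.get("by_category", {}).get("competitive", [])
    # De-duplicate while preserving order (an entry may appear in both buckets)
    seen, unique = set(), []
    for entry in entries:
        key = json.dumps(entry, sort_keys=True)
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique
```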
Web research (default): Use the Agent tool to delegate to the competitor-researcher agent, which searches for:
Always include plugin_root: $CLAUDE_PLUGIN_ROOT in the agent task prompt. Also pass the customer profile path (customers/{market-slug}.json) if it exists — the researcher uses the buyer's buying_criteria and pain_points to ground differentiation statements and trap questions in how this market's buyer actually evaluates vendors. Multiple agents can be launched in parallel for different propositions.
LLM knowledge (fallback): When web search is unavailable, identify known competitors based on the feature category and market segment. Clearly note that competitor data is based on training knowledge and may not reflect latest positioning.
For each competitor, capture:
Write to competitors/{feature-slug}--{market-slug}.json (same slug as the proposition):
{
  "slug": "cloud-monitoring--mid-market-saas",
  "proposition_slug": "cloud-monitoring--mid-market-saas",
  "competitors": [
    {
      "name": "Datadog",
      "source_url": "https://example.com/datadog-review",
      "positioning": "Full-stack observability for cloud-scale companies",
      "strengths": ["Brand recognition", "Broad integrations"],
      "weaknesses": ["Expensive at scale", "Overkill for mid-market"],
      "differentiation": "40% lower cost, deploys in hours vs. weeks, purpose-built for mid-market."
    }
  ],
  "trap_questions": [
    "Question targeting a verifiable competitor gap — max 3-4 questions total"
  ],
  "created": "2026-01-25"
}
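A hedged validation sketch for this file shape. Field requirements are inferred from the example above plus the 3-5 competitor and 3-4 trap question guidance; source_url is treated as optional here, which is an assumption:

```python
def validate_competitor_file(data: dict) -> list[str]:
    """Return a list of problems; an empty list means the file matches the expected shape."""
    problems = []
    for field in ("slug", "proposition_slug", "competitors", "trap_questions", "created"):
        if field not in data:
            problems.append(f"missing field: {field}")
    for i, comp in enumerate(data.get("competitors", [])):
        for field in ("name", "positioning", "strengths", "weaknesses", "differentiation"):
            if field not in comp:
                problems.append(f"competitors[{i}] missing: {field}")
    n_traps = len(data.get("trap_questions", []))
    if not 3 <= n_traps <= 4:
        problems.append(f"expected 3-4 trap questions, got {n_traps}")
    n_comps = len(data.get("competitors", []))
    if not 3 <= n_comps <= 5:
        problems.append(f"expected 3-5 competitors, got {n_comps}")
    return problems
```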
After generating competitor files, run an automated review loop with two stakeholder perspectives before presenting to the user. This catches accuracy gaps, biased positioning, and weak trap questions before they reach the buyer conversation.
Launch two reviewer subagents in parallel for each competitor file:
CSO Reviewer (tsystems-cso-reviewer): Evaluates competitive intel from a sales effectiveness perspective — can an AE use this differentiation to win deals? Are trap questions usable in real evaluations? Does the competitive positioning give the account team ammunition?
Market Industry Analyst (market-industry-analyst-reviewer): Evaluates from an advisory accuracy perspective — are the right competitors identified? Is positioning accurate and current? Are strength/weakness claims balanced and evidence-based? Would the analysis survive peer review?
Pass the competitor file(s) and their parent proposition file(s) as context — reviewers need to understand what capability and market the competitive analysis targets.
Both reviewers return scored assessments. The competitor file passes when:
- would_use_in_pitch_deck is true
- would_use_in_advisory_report is true

If thresholds are met: proceed to Step 6 (present to user).
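The pass condition might be sketched as follows. The numeric PASS_AVG threshold is an assumption extrapolated from the convergence log example in this document — adjust it to the reviewers' actual scoring rubric:

```python
PASS_AVG = 4.0  # assumed score bar; the doc only specifies the two would_use flags

def review_passes(cso: dict, analyst: dict) -> bool:
    """Both reviewers must endorse use, and (assumed) average scores must clear a bar."""
    return (
        cso.get("would_use_in_pitch_deck") is True
        and analyst.get("would_use_in_advisory_report") is True
        and cso.get("avg_score", 0) >= PASS_AVG
        and analyst.get("avg_score", 0) >= PASS_AVG
    )
```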
When the review loop detects failures:
Synthesize feedback from both reviewers into targeted rewrite instructions. Map failing dimensions to specific competitor file fields:
- competitive_win_ability or differentiation_defensibility → rewrite competitors[].differentiation
- market_landscape_accuracy or segment_relevance → re-research competitor selection, add missing players
- strength_weakness_balance or positioning_validity → rewrite competitors[].positioning, .strengths, .weaknesses
- trap_question_sophistication or objection_handling → rewrite trap_questions

Re-invoke competitor-researcher in revision mode with the synthesized feedback. The researcher targets specific entries for improvement rather than regenerating from scratch.
Re-review the updated competitor file with both reviewers.
Max 3 iterations. If convergence is not reached after 3 rounds, present the best-scoring version to the user with a summary of unresolved issues and the reviewer scores.
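The review/revise cycle can be sketched as below. The run_reviewers and revise callbacks are hypothetical stand-ins for the two reviewer subagents and the competitor-researcher revision call:

```python
MAX_ITERATIONS = 3

def review_loop(competitor_file, run_reviewers, revise):
    """Run review/revise cycles; return the best-scoring version plus a convergence record.

    run_reviewers(file) -> (passes: bool, combined_avg: float, feedback: str)
    revise(file, feedback) -> updated file  (delegates to competitor-researcher)
    """
    best, best_score = competitor_file, float("-inf")
    iterations = []
    for i in range(MAX_ITERATIONS):
        passes, avg, feedback = run_reviewers(competitor_file)
        iterations.append({"iteration": i, "combined_avg": avg, "passes": passes})
        if avg > best_score:
            best, best_score = competitor_file, avg
        if passes:
            return best, {"converged": True, "reason": "passed", "iterations": iterations}
        competitor_file = revise(competitor_file, feedback)
    # No convergence: surface the best version with the unresolved scores
    return best, {"converged": False, "reason": "max_iterations", "iterations": iterations}
```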
Write convergence log to convergence.json alongside the competitor file:
{
  "converged": true,
  "reason": "passed",
  "iterations": [
    {"iteration": 0, "cso_avg": 3.67, "analyst_avg": 3.5, "combined_avg": 3.58, "passes": false},
    {"iteration": 1, "cso_avg": 4.17, "analyst_avg": 4.0, "combined_avg": 4.08, "passes": true}
  ],
  "rewrite_actions": ["what was fixed between each iteration"]
}
After the review loop converges (or reaches max iterations):
Present competitor analysis per proposition, then offer:
Wait for the user's explicit response. If they choose (a), delegate to the dashboard-refresher agent with project_dir and plugin_root: $CLAUDE_PLUGIN_ROOT to generate a dashboard snapshot, then ask again if they're ready to proceed.
The user may know competitors the research missed, or may disagree with positioning claims. Iterate until accurate.
For each competitor file, include a trap_questions array with 3-4 questions designed to expose competitor weaknesses during an RFP evaluation or vendor comparison. Good trap questions:
- Ground in the buyer's buying_criteria — these are the evaluation dimensions the buyer already uses, making trap questions feel like natural procurement due diligence rather than vendor-planted gotchas.

Do not generate more than 4 trap questions — focus beats volume. Each question should be a single sentence.
Strong differentiation statements:
Read portfolio.json in the project root. If a language field is present, generate all user-facing text content (positioning, strengths/weaknesses, differentiation statements) in that language. JSON field names and slugs remain in English. If no language field is present, default to English.

If portfolio.json has a language field, communicate with the user in that language (status messages, instructions, recommendations, questions). Technical terms, skill names, and CLI commands remain in English. Default to English if no language field is present.

See $CLAUDE_PLUGIN_ROOT/skills/portfolio-setup/references/data-model.md for complete entity schemas.
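A sketch of the language resolution, assuming the language field holds a plain language name or code; the "en" default is a placeholder for the English fallback:

```python
import json
from pathlib import Path

def output_language(project_root: str) -> str:
    """Resolve the language for user-facing text; slugs and JSON field names stay English."""
    portfolio = Path(project_root) / "portfolio.json"
    if portfolio.exists():
        lang = json.loads(portfolio.read_text()).get("language")
        if lang:
            return lang
    return "en"  # default when no language field is present
```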