Generates market positioning documents, positioning statements, competitive alternatives maps, and category analyses using April Dunford's framework, enriched with JTBD discovery, a Moore positioning statement, and Neumeier's Onliness Test. For defining or refining product positioning and differentiation.
npx claudepluginhub ferdinandobons/startup-skill --plugin startup

This skill uses the workspace's default tool permissions.
Market positioning strategy that produces a complete positioning document, Moore + Neumeier positioning statements, competitive alternatives map, and market category analysis. Built on April Dunford's framework, enriched with JTBD discovery and stress-tested with Neumeier's Onliness Test.
INTAKE → RESEARCH (2 parallel waves) → POSITIONING SYNTHESIS
The process: understand the product and its customers, research competitive alternatives and market context, then build positioning through Dunford's 5+1 components. Typical runtime: 10-15 minutes in Claude Code (parallel agents), 20-30 minutes in Claude.ai (sequential).
Default output language is English. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
Short and focused — 1-2 rounds of questions. The goal is enough context to research alternatives and build positioning.
Before asking questions, check if prior sessions have been completed. Look for these files in the working directory or subdirectories:
From startup-design:
- 00-intake/brief.md — product description and context
- 01-discovery/competitor-landscape.md — competitor profiles
- 01-discovery/target-audience.md — customer personas, pain points
- 02-strategy/positioning.md — initial positioning work

From startup-competitors:
- intake.md — product and market context
- competitors-report.md — strategic competitive analysis
- battle-cards/ — per-competitor profiles
- pricing-landscape.md — pricing analysis

If these files exist, read them and use the data as a head start:
Tell the user: "I found data from a previous session. I'll use it as a starting point for positioning analysis."
Skip redundant intake questions. Go straight to research if prior data is sufficient.
Round 1 — Core context:
Round 2 — Sharpening (only if needed):
Don't over-interview. If the user gives a clear description upfront, move to research. The positioning process itself will surface what matters.
Save to {project-name}/intake.md — a brief summary of the product, problem, alternatives, and customers. If built on prior session data, note the source files used. Project name: kebab-case (e.g., ai-email-assistant).
Create {project-name}/PROGRESS.md with: project name, skill name (startup-positioning), start date, language, research mode (Live / Knowledge-Based), and a phase checklist. Update it after each phase completes. If PROGRESS.md already exists from a previous session, resume from the last incomplete phase.
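A minimal sketch of what PROGRESS.md could look like (the project name, date, and phase labels are illustrative):

```
# PROGRESS: ai-email-assistant
Skill: startup-positioning | Started: 2025-06-01 | Language: English | Research mode: Live

- [x] Phase 1: Intake
- [ ] Phase 2: Research (Wave 1: alternatives; Wave 2: market frame)
- [ ] Phase 3: Positioning synthesis
- [ ] Verification pass
```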
After intake, assess market complexity and present the Research Depth recommendation to the user.
Reference: Read references/research-scaling.md for the complexity scoring matrix, tier definitions, wave configurations, and the user communication template.
Present the recommendation using the communication template (see research-scaling.md for the exact template). The selected tier determines the number of agents per wave and the search rounds per agent in Phase 2; research-scaling.md defines the exact wave configuration for each tier.
Two parallel research waves exploring competitive alternatives and market context. Together they provide the raw material for Dunford's 5+1 positioning components.
Check if the Agent tool is available:
This skill requires WebSearch for real data. If WebSearch is unavailable or denied, fall back to Knowledge-Based Mode: use training data, mark all findings with [Knowledge-Based — verify independently], and reduce confidence ratings by one level. Note the mode in PROGRESS.md.
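For instance, a finding recorded in Knowledge-Based Mode might look like this (the claim itself is a hypothetical placeholder):

```
[Knowledge-Based — verify independently] AcmeCRM appears to target SMB sales teams; current pricing not confirmed. Confidence: Low (reduced from Medium).
```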
Reference: Read references/research-principles.md before starting any wave. It defines source quality tiers, cross-referencing rules, and how to handle data gaps.
Reference: Read references/research-wave-1-alternatives.md for agent templates.
Two agents (or two sequential blocks):
A1: Alternative Mapping (JTBD Lens) — Map ALL competitive alternatives, not just direct competitors. Include: direct competitors, adjacent tools competing for the same budget, manual processes, spreadsheets, hiring someone, doing nothing / status quo. For each: what job does the customer hire it for, where does it fall short, what triggers switching? The goal is the full set of things your product replaces.
A2: Customer Intelligence — Mine voice-of-customer data: reviews, forums, communities. Extract: pain points with current alternatives, exact language customers use, what "better" means to them, best-fit customer profile (who gets the most value fastest), switching triggers (what makes someone finally change). Build a language map — the words customers use to describe their problem and desired outcome.
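A language map can be as simple as a two-column table; the entries below are hypothetical placeholders:

| Customer phrase | What it signals |
|---|---|
| "I drown in email every Monday" | Volume spikes; triage is the core pain |
| "I just want replies that sound like me" | Voice and tone are buying criteria |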
Reference: Read references/research-wave-2-market-frame.md for agent templates.
Two agents (or two sequential blocks):
B1: Market Category Analysis — Identify 3-5 candidate market categories. For each: what do buyers expect from this category, who are the leaders, what's the competitive dynamic, how mature is it? Apply Dunford's category types: head-to-head (existing category), big fish/small pond (subcategory), or category creation. Assess which frame makes your unique strengths matter most.
B2: Trend & Timing Analysis — Identify relevant trends: technology shifts, behavioral changes, regulatory moves. For each: is it real or hype, how does it affect buyer expectations, does it make your positioning stronger or weaker? Assess timing — are you early, on-time, or late to the trend? Only include trends that genuinely change how buyers evaluate solutions.
After both waves complete, before synthesis, briefly present what the research found to the user: the competitive alternative landscape (how many direct, adjacent, status quo), the strongest customer pains, and the most promising category candidates. Ask: "Does this align with your expectations? Anything to adjust before I synthesize the positioning?"
Keep it to one message — this is a quick alignment check, not a full report.
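A sketch of what that one message might look like (all counts, pains, and category names are hypothetical):

```
Research complete. I mapped 12 competitive alternatives: 5 direct, 4 adjacent, 3 status-quo (spreadsheets, doing nothing). Strongest pains: hours lost to manual triage, inconsistent tone in replies. Top category candidates: "email assistant" (head-to-head) and "founder communication tool" (big fish/small pond). Does this align with your expectations? Anything to adjust before I synthesize the positioning?
```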
Reference: Read references/research-synthesis.md for the synthesis protocol and Dunford process details.
After the checkpoint, build positioning through Dunford's 5+1 components in order. The sequence matters — each step builds on the previous.
Competitive Alternatives — From Wave 1. What would customers use if your product didn't exist? This is the anchor — positioning is always relative.
Unique Attributes — What do you have that the alternatives lack? Be specific and honest. Features, architecture, team expertise, business model, speed — anything defensible.
⏸ PAUSE — User Input Required. Present the research-derived attributes to the user. Ask them to confirm, add, or remove before proceeding to Value Themes. The founder knows capabilities that research can't surface.
Value Themes — Translate each unique attribute into a customer outcome: attribute → "so what?" → value (see the worked example after this list). Group related attributes into 2-3 value themes. Use customer language from Wave 1's language map.
Best-Fit Customers — From Wave 1 customer intelligence. Who cares most about your value themes? Define by characteristics that make them care, not demographics. These customers should be reachable, recognizable, and willing to pay.
Market Category — From Wave 2. Choose the category frame that makes your value obvious. Present 3-5 options with trade-offs. Recommend one. The right category triggers the right buyer expectations.
Trend Overlay (optional) — From Wave 2. Only include if a genuine trend makes your positioning stronger. Forced trend alignment is worse than none.
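A worked example of the attribute → "so what?" → value chain (the product and claims are hypothetical):

```
Attribute:    On-device inference; customer data never leaves the laptop
So what?      Security review shrinks from months to days
Value theme:  "Deploy in days, not quarters" (speed of adoption in regulated companies)
```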
Two stress tests before finalizing:
Neumeier Onliness Test:
Basic form:
"Our [product] is the only [category] that [differentiator]."
Extended form (6 elements — WHAT/HOW/WHO/WHERE/WHY/WHEN):
"Our [product] is the only [category] that [differentiator] for [target] who [need] in [context]."
If you can't fill the basic form convincingly — if "only" feels like a stretch — the positioning is too weak. Iterate.
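A filled-in extended form, using a hypothetical product:

```
Our MailPilot is the only email assistant that drafts replies in the founder's own voice for seed-stage founders who answer investor and customer email themselves in the fundraising and early-growth phase.
```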
Ries/Trout Mental Ladder:
If either test fails, revisit the 5+1 components. Don't ship weak positioning.
Every deliverable file must start with a standardized header: # {Title}: {product} followed by *Skill: startup-positioning | Generated: {date}*. Every deliverable must end with Red Flags, Yellow Flags, and Sources sections (see templates in references/research-synthesis.md).
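For example, a deliverable header would look like this (title, product, and date are illustrative):

```
# Positioning Document: AI Email Assistant
*Skill: startup-positioning | Generated: 2025-06-01*
```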
{project-name}/positioning-doc.md — The main deliverable:
{project-name}/positioning-statement.md — Statements and messaging:
{project-name}/competitive-alternatives.md — Complete alternatives map:
{project-name}/market-category-analysis.md — Category strategy:
{project-name}/messaging-implications.md — Bridge from positioning to copy:
Each agent saves its raw output to {project-name}/raw/. The synthesis phase reads these raw files and produces the polished deliverables above. Agents must NOT write directly to deliverable paths — raw and synthesized output are separate.
Raw research files:
- alternative-mapping.md
- customer-intelligence.md
- market-categories.md
- trends-timing.md

After all positioning deliverables are written, run a verification pass.
Reference: Read references/verification-agent.md for the full verification protocol, universal checks, and skill-specific checks.
{project-name}/verification-report.md

In Claude.ai or when the Agent tool is unavailable, run the verification checks yourself in the main conversation, following the same protocol.
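Putting it together, a finished project folder might look like this (the project name is illustrative; the paths are the ones defined above):

```
ai-email-assistant/
├── intake.md
├── PROGRESS.md
├── raw/
│   ├── alternative-mapping.md
│   ├── customer-intelligence.md
│   ├── market-categories.md
│   └── trends-timing.md
├── positioning-doc.md
├── positioning-statement.md
├── competitive-alternatives.md
├── market-category-analysis.md
├── messaging-implications.md
└── verification-report.md
```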
Reference: Read references/honesty-protocol.md for the full protocol and anti-pattern details.
Positioning is only useful if it's honest. Core rules apply (label claims, quantify, declare gaps), plus positioning-specific additions:
| Anti-Pattern | What It Looks Like | What to Say |
|---|---|---|
| "We're for everyone" | No target segment defined | "If you're for everyone, you're for no one. Who cares MOST?" |
| Feature-based positioning | Leading with features not outcomes | "Customers don't buy features. What outcome do they get?" |
| Aspirational positioning | "We'll be the AI-powered..." | "Position on what you deliver today, not the roadmap." |
| Category-of-one | Inventing a category to avoid comparison | "New categories cost millions. Is there an existing frame?" |
| Copycat positioning | Same message as the market leader | "Find genuinely different ground — you can't out-position the leader." |
See references/honesty-protocol.md for the full anti-pattern table (7 entries) and detailed protocol.
Read only what you need for the current phase.
| File | When to Read | ~Lines | Purpose |
|---|---|---|---|
| honesty-protocol.md | Start of session | ~73 | Full honesty protocol with anti-patterns |
| research-principles.md | Before starting Phase 2 | ~65 | Source quality, cross-referencing, data gaps |
| research-wave-1-alternatives.md | When running Wave 1 | ~235 | Agent templates for alternatives + customer intel |
| research-wave-2-market-frame.md | When running Wave 2 | ~210 | Agent templates for categories + trends |
| research-synthesis.md | After both waves complete | ~380 | Synthesis protocol, Dunford process, validation tests, messaging implications |
| frameworks.md | During Phase 3 | ~133 | Dunford/Moore/Neumeier/JTBD/Ries reference |
| research-scaling.md | After intake, before Phase 2 | ~75 | Complexity scoring, tier definitions, wave configurations |
| verification-agent.md | After synthesis | ~80 | Verification protocol, universal + skill-specific checks |