Work through April Dunford's positioning framework in sequence. Produces a positioning canvas that downstream skills (/fw:copy, /fw:grow) read automatically. Use when positioning a product, company, or offer for the first time, or when revisiting positioning after a significant change. Supports multi-canvas portfolios via the --canvas flag and an optional values-fit check via the --values-check flag.
Install: `npx claudepluginhub untangling-systems/flywheel --plugin flywheel`

This skill uses the workspace's default tool permissions.
<positioning_context> #$ARGUMENTS </positioning_context>
Work through April Dunford's 5-component positioning framework. No shortcuts. The output is a positioning canvas that becomes the foundation for copy, growth experiments, and quarterly planning.
This skill writes the positioning canvas. Path resolution order:

1. `--canvas <path>` in the user's arguments. Use that path. If the file doesn't exist, this becomes a fresh canvas at that path.
2. No `--canvas`: scan docs/positioning/ for .md files (excluding portfolio.md and archive/). If exactly one exists, use it. If multiple exist, list them and ASK which canvas the user is working in (or whether to create a new one).
3. Otherwise: default to docs/positioning/current.md.

All references below to docs/positioning/current.md should substitute the resolved canvas path. The docs/positioning/archive/ path stays fixed — it's shared history across canvases.
Multi-canvas portfolios (e.g., a consultancy with three distinct service tracks, each with its own segments and frame) should keep one canvas per coherent positioning, not one giant canvas trying to serve all of them.
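The resolution order can be sketched in Python. This is a minimal illustration, not part of the skill itself; the helper name `resolve_canvas_path` and the `"ask"` sentinel are assumptions for the sketch:

```python
from pathlib import Path

DEFAULT = Path("docs/positioning/current.md")

def resolve_canvas_path(canvas_arg, candidates):
    """Resolve which canvas file a session writes to.

    canvas_arg: the --canvas value, or None when the flag is absent.
    candidates: .md paths found under docs/positioning/, before
    filtering out portfolio.md and anything under archive/.
    Returns a path, or "ask" when the user must pick a canvas.
    """
    if canvas_arg:  # an explicit --canvas always wins
        return Path(canvas_arg)
    eligible = [p for p in candidates
                if p.name != "portfolio.md" and "archive" not in p.parts]
    if len(eligible) == 1:
        return eligible[0]
    if len(eligible) > 1:
        return "ask"  # list the canvases and ask which one
    return DEFAULT
```

A single non-portfolio canvas resolves silently; an empty directory falls back to current.md.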
## Optional values-fit check (--values-check flag)

Standard Dunford produces a defensible canvas. It does not check whether the canvas describes work the founder actually wants to do. For VC-backed teams shipping a clear product, this is fine — the value engine and the founder's life are usually decoupled. For solo founders, consultants, and lifestyle businesses, the positioning IS the founder's daily work, and a defensible canvas pointed at the wrong segment or wrong frame produces a successful business the founder hates running.
The --values-check flag adds short values prompts to Steps 1, 2, 4, and 5, plus a closing Values-Fit Sanity Check before the canvas is assembled. It is off by default — opt in.
When to use it:
When to skip it:
If --values-check is not in the arguments, run the standard sequence and skip every section labeled "Values check" below.
- Before /fw:copy drift detection or /fw:grow experiments
- When redirected here from /fw:copy or /fw:grow — positioning must come first

Search for docs/positioning/current.md in the user's project.
If it exists: Read it in full and determine the session mode:
Triggered when the user's argument contains "revise", names a specific section (e.g., "revise segments", "update alternatives"), or describes a specific reason for revisiting (e.g., "drift detection keeps flagging speed claims").
Present the current canvas summary, then confirm what's changing:
"You have an existing positioning canvas from [date]. Here's what it says:
- Competitive alternatives: [list]
- Key differentiator: [summary]
- Target segment: [summary]
- Frame: [summary]
You want to revise: [section(s)] Reason: [the triggering insight, if provided]
I'll keep the unchanged sections and walk you through the ones that need updating."
How revision mode works:
| If you change... | Ask whether these still hold... |
|---|---|
| Competitive alternatives (Step 1) | Unique attributes (Step 2) — "Do your differentiators still hold against these new alternatives?" |
| Unique attributes (Step 2) | Value chains (Step 3) — "Do these new attributes change the capability → outcome chains?" |
| Value chains (Step 3) | Customer segments (Step 4) — "Does this new value proposition change who cares most?" |
| Customer segments (Step 4) | Market frame (Step 5) — "Does this segment change affect which frame makes your value obvious?" |
| Market frame (Step 5) | No downstream sections — just update the positioning statement |
Revision mode does NOT skip enforcement. The revised sections get the same specificity requirements as a fresh session. The only thing skipped is re-doing sections that haven't changed and don't need to propagate.
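The propagation rules in the table amount to a linked chain: each changed step raises a confirmation question about the next. A rough sketch, with `PROPAGATION` and `downstream_checks` as hypothetical names (the questions are the ones from the table; the sketch walks the full chain, the worst case, whereas the real flow stops as soon as a downstream section still holds):

```python
# Maps a changed step to (next step to check, question to ask).
PROPAGATION = {
    1: (2, "Do your differentiators still hold against these new alternatives?"),
    2: (3, "Do these new attributes change the capability → outcome chains?"),
    3: (4, "Does this new value proposition change who cares most?"),
    4: (5, "Does this segment change affect which frame makes your value obvious?"),
    5: (None, "No downstream sections; just update the positioning statement"),
}

def downstream_checks(changed_step):
    """Yield each follow-up question from a changed step to the end of the chain."""
    step = changed_step
    while True:
        nxt, question = PROPAGATION[step]
        yield question
        if nxt is None:
            break
        step = nxt
```

Changing Step 1 can therefore cascade through all five questions; changing Step 5 only updates the positioning statement.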
Triggered when no existing canvas exists, or the user explicitly says "start fresh" or "start over."
If starting fresh with an existing canvas, ask whether to archive:
"Should I archive the current canvas to docs/positioning/archive/[date].md before we start?"
Then proceed to the full sequence (all 5 steps).
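The date-stamped archive name, and the parked-{YYYY-MM-DD}.md variant used later for parked canvases, might be built like this (`archive_path` is a hypothetical helper, shown only to pin down the naming):

```python
from datetime import date
from pathlib import Path

def archive_path(kind="canvas", on=None):
    """Build an archive filename under docs/positioning/archive/.

    kind: "canvas" for a plain date-stamped archive, "parked" for a
    canvas parked after a failed values-fit check.
    on: a datetime.date; defaults to today.
    """
    d = (on or date.today()).isoformat()
    name = f"parked-{d}.md" if kind == "parked" else f"{d}.md"
    return Path("docs/positioning/archive") / name
```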
When the user runs /fw:position with an existing canvas but doesn't specify revision or fresh:
"You have an existing positioning canvas from [date]. Here's what it says:
- Competitive alternatives: [list]
- Key differentiator: [summary]
- Target segment: [summary]
What do you want to do?
- Revise specific sections — tell me which sections and why
- Start fresh — archive the current canvas and redo all 5 steps
- Review and sharpen — walk through all 5 steps with the current canvas as a starting point"
Option 3 is a guided review: present each section's current content, ask if it needs updating, only run the full step process for sections the user flags.
If it doesn't exist: Proceed directly to the full sequence (Mode B).
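The three-way mode dispatch could be sketched with a simplified keyword check. This is deliberately rough: the real trigger for revision mode also covers a named section or a stated reason, which keyword matching alone won't catch, so treat `session_mode` as an assumption-laden illustration:

```python
def session_mode(canvas_exists, user_args):
    """Pick the session mode from canvas state and the user's request.

    Returns "fresh" (Mode B), "revise" (Mode A), or "ask" (Mode C,
    present the canvas and let the user choose).
    """
    if not canvas_exists:
        return "fresh"
    text = user_args.lower()
    if "start fresh" in text or "start over" in text:
        return "fresh"
    if "revise" in text or "update" in text:
        return "revise"
    return "ask"
```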
Search docs/positioning/ for pattern-*.md files and check the canvas for a ## Learnings section.
If learnings or patterns exist: Surface them before starting:
"Before we begin, here's what past sessions have taught us about your positioning:
- [pattern or learning summary]
- [pattern or learning summary]
Keep these in mind as we work through the revisions."
This is especially important in revision mode — the triggering insight that led to the revision should connect to any existing patterns.
If the user provided arguments (product name, context, or revision target), acknowledge them. If not, prompt:
"What are you positioning? Give me a product name, a company, or a consulting offer — and any context about where you are right now."
Work through all 5 steps in order. Each step builds on the previous one. Use references/enforcement-prompts.md for redirect and warning text when users try to skip steps or give vague answers.
Important: The user may go back and revise earlier steps at any time. This is encouraged — insights from later steps often sharpen earlier ones. Update the working canvas and note what changed.
In revision mode: Only the targeted steps and their downstream propagation checks are worked through. See Mode A above for the propagation rules.
"What would your customers do if you didn't exist? Name at least 3 real alternatives. These can be direct competitors, adjacent tools, manual workarounds, or doing nothing at all."
What you're looking for:
Enforcement triggers: see references/enforcement-prompts.md.

When this step is done: You have a table of 3+ named alternatives with reasons customers choose each one.
Values check (--values-check only):
"Look at the competitive set. Is this the company you want to keep? Competing in a market means showing up at the same conferences, talking to the same buyers, getting compared to the same vendors. If the alternatives are ones you'd be embarrassed to be benchmarked against — or ones whose buyers you actively don't want to spend hours with — flag it. The market you choose is the room you live in."
Capture the response in a values_notes field for this step. Don't redirect — just note it; the Values-Fit Sanity Check at the end will surface accumulated tension.
Use AskUserQuestion to present what you've captured and confirm:
"Here are the competitive alternatives I captured: [table]. Anything missing? Any you'd remove?"
"Now look at those alternatives. What do you have that they genuinely lack? Be specific — a feature, a dataset, a method, an integration. Not 'better' — different."
What you're looking for:
Enforcement triggers:
When this step is done: You have a table of unique attributes, each mapped to which alternatives lack them.
Values check (--values-check only):
"These attributes will become the work you're known for. They'll show up in every pitch, every landing page, every conference bio. For each one, ask: 'Do I want to be doing more of this in three years?' A differentiator you don't want to deepen is a treadmill — defensible, but exhausting. Flag any attribute that you're great at but don't want to grow into."
Capture the response in values_notes for this step.
"For each unique attribute, walk me through the value chain: what does it enable the customer to do, and what outcome does that produce? Features → capabilities → outcomes."
What you're looking for:
Enforcement triggers:
When this step is done: You have a value chain table showing attribute → capability → outcome for each unique attribute.
"Who cares most about this specific value? Not your total addressable market — the customers for whom what you just described in Step 3 is a must-have."
What you're looking for:
Enforcement triggers:
When this step is done: You have a segment table with specific descriptions, reasons, and where to find them.
Values check (--values-check only):
"These customers are who you'll spend hours with. Discovery calls, project kickoffs, support conversations, edge cases. For each segment, ask: 'Do I actually like these people? Do I respect what they're trying to do?' A segment you can serve but don't enjoy will produce successful work and a tired founder. If a segment lights you up, name it. If a segment makes you flinch, flag it — even if the WTP looks good."
Capture the response in values_notes for this step.
"Last step. What category or context makes your value obvious? When someone first hears about you, what's the frame that makes them immediately understand what you do — and why your unique attributes are the things that matter?"
What you're looking for:
Enforcement triggers:
When this step is done: You have a frame of reference with reasoning and rejected alternatives.
Values check (--values-check only):
"The frame is how the world will categorize you. It shapes what conferences you're invited to, what podcasts ask you on, what your peers assume you do. Ask: 'Is this the identity I want to carry?' A frame can be defensible and feel like a costume. If the frame is right for the market but wrong for who you want to be in five years, flag it — sometimes the right move is a frame that's slightly less obvious commercially but truer to the work you want to be known for."
Capture the response in values_notes for this step.
## Values-Fit Sanity Check (--values-check only)

Before assembling the canvas, surface the accumulated values_notes from Steps 1, 2, 4, and 5 and ask the question Dunford's framework doesn't:
"Here's what came up across the values checks:
- Step 1 (alternatives): [notes or 'no flags']
- Step 2 (attributes): [notes or 'no flags']
- Step 4 (segments): [notes or 'no flags']
- Step 5 (frame): [notes or 'no flags']
The canvas as it stands is defensible. The question now is whether it's defensible and something you want to live inside. If you executed this positioning hard for two years, would you be proud of the business you'd built? Or would it be technically successful and quietly draining?
Three options:
- Ship it as-is — the values flags are real but you can manage them; the commercial logic outweighs them.
- Sharpen one section — name which step needs revision and what change would resolve the values tension. We'll loop back into that step.
- Park the canvas — the gap between defensible and life-fit is too wide. Note what the canvas got right, archive it, and start fresh with different inputs (different segment, different frame, different alternatives)."
Capture the resolution in a ## Values-Fit Notes section in the assembled canvas. Even when option 1 is chosen, the notes are valuable — they document what the founder knowingly accepted, which protects against later "why did I build this?" drift.
If the user picks option 2, loop back to that step with the values context loaded. If they pick option 3, save the values notes and the partial canvas to docs/positioning/archive/parked-{YYYY-MM-DD}.md so the work isn't lost, then offer to start a fresh /fw:position run.
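Accumulating values_notes and rendering the summary above might look like this sketch (`values_fit_summary` is a hypothetical name; Step 3 has no values check, so only 1, 2, 4, and 5 appear):

```python
def values_fit_summary(values_notes):
    """Render the accumulated values_notes for the sanity check.

    values_notes: dict mapping step number (1, 2, 4, 5) to the flag
    captured during that step, or None/missing when nothing came up.
    """
    labels = {1: "alternatives", 2: "attributes", 4: "segments", 5: "frame"}
    lines = ["Here's what came up across the values checks:"]
    for step in (1, 2, 4, 5):
        note = values_notes.get(step)
        lines.append(f"- Step {step} ({labels[step]}): {note or 'no flags'}")
    return "\n".join(lines)
```

Missing or empty notes render as "no flags", so the summary always shows all four checkpoints.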
After all 5 steps are complete (or after the user has been warned about any skipped steps):
Assemble the canvas using references/positioning-canvas-template.md and present it:

"Here's your positioning canvas. Read through it — does anything feel wrong, too generous, or too vague? This is the document that /fw:copy and /fw:grow will read, so it needs to be honest."
After user approval:
- Write the canvas to docs/positioning/current.md (or the resolved canvas path)
- If the previous current.md was archived, note that in the canvas frontmatter
- In revision mode, add a ## Revision History section at the end of the canvas (or append to an existing one):
## Revision History
- [date]: Revised [section(s)]. Reason: [triggering insight]. Changes: [what changed].
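The append-or-create behavior for that section could be sketched as follows (`append_revision_entry` is a hypothetical helper, shown to make the idempotent-heading rule concrete):

```python
from datetime import date

def append_revision_entry(canvas_text, sections, reason, changes, on=None):
    """Append an entry to ## Revision History, creating the section once."""
    entry = (f"- {on or date.today().isoformat()}: "
             f"Revised {', '.join(sections)}. "
             f"Reason: {reason}. Changes: {changes}.")
    if "## Revision History" in canvas_text:
        # Section exists: append the new bullet at the end.
        return canvas_text.rstrip() + "\n" + entry + "\n"
    # First revision: create the section heading, then the bullet.
    return canvas_text.rstrip() + "\n\n## Revision History\n" + entry + "\n"
```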
This history is valuable context for future sessions — it shows why the positioning evolved.

Use AskUserQuestion:
Question: "Positioning canvas saved to docs/positioning/current.md. What next?"
Options:
- /fw:copy — Translate this positioning into a specific artifact (landing page, pitch, bio, outreach, talk abstract)
- /fw:pitch — Build a full sales pitch storyboard grounded in this positioning
- /fw:monetize — Design pricing and monetization strategy based on the segments and value chains
- /fw:grow — Design a growth experiment based on this positioning
- /fw:compound — Save the positioning decisions and reasoning for future reference

Sequence is non-negotiable. Steps must be worked in order: alternatives → attributes → value → segments → frame. The framework works because each step constrains the next.
Going back is always allowed. Later steps often reveal that earlier answers need sharpening. Encourage this — it's how the framework is meant to work.
Use the user's language. Don't sanitize their descriptions into marketing speak. If they say "janky spreadsheet workaround," put that in the canvas. Authenticity is more valuable than polish.
Push for specificity, not length. A canvas with 3 crisp alternatives and 2 sharp value chains beats one with 8 vague entries. Quality over quantity.
Surface the reasoning. For founders doing this solo, explain why each step matters before asking them to do it. The framework reasoning is motivation to push through the friction.
The canvas should feel uncomfortable. If the user isn't squirming a little when naming their real competitive alternatives or narrowing their segments, the canvas probably isn't specific enough. That discomfort is the signal it's working.
When invoked with disable-model-invocation context:
- Use --canvas <path> if provided; otherwise apply the Canvas Path Resolution rules silently (single file: use it; multiple: pick the most-recently-modified non-portfolio canvas and flag the assumption in the output; none: write to docs/positioning/current.md)
- Respect --values-check if provided; otherwise skip all values-check prompts and the Values-Fit Sanity Check section
- When --values-check is on, capture values_notes per step but do not loop back into revisions — write the Values-Fit Sanity Check as flagged tension in the output for the user to act on later
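A minimal flag parse for headless mode, assuming flags may appear as either `--canvas <path>` or `--canvas=<path>` (`parse_flags` is a hypothetical helper, not part of the skill's contract):

```python
def parse_flags(argv):
    """Extract the --canvas value and --values-check toggle from raw arguments."""
    canvas, values_check = None, False
    it = iter(argv)
    for tok in it:
        if tok == "--canvas":
            canvas = next(it, None)   # value is the following token
        elif tok.startswith("--canvas="):
            canvas = tok.split("=", 1)[1]
        elif tok == "--values-check":
            values_check = True
    return canvas, values_check
```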