Translate positioning into specific copy artifacts with drift detection. Reads the positioning canvas and produces grounded drafts where every claim traces back to the canvas. Use after /fw:position has produced a canvas. Supports multi-canvas portfolios via the --canvas flag.
npx claudepluginhub untangling-systems/flywheel --plugin flywheel

This skill uses the workspace's default tool permissions.
<copy_context> #$ARGUMENTS </copy_context>
Translate your positioning canvas into a specific copy artifact — landing page, pitch, bio, outreach, talk abstract, case study, or one-liner. Every claim in the draft is traced back to the canvas. Anything that doesn't trace back gets flagged.
This skill reads a positioning canvas to produce copy. The path is resolved in this order:
1. If the user passed --canvas <path> in their arguments, use that path.
2. If no --canvas, scan docs/positioning/ for .md files (excluding portfolio.md and archive/). If exactly one exists, use it. If multiple exist, list them and ASK which to translate from.
3. Otherwise, use docs/positioning/current.md.

For all references below to docs/positioning/current.md, substitute the resolved canvas path.
Check 1: Canvas exists. /fw:position should have produced a canvas at docs/positioning/current.md. Search for docs/positioning/current.md.
If it doesn't exist: Hard redirect. See references/enforcement-prompts.md — preflight: no canvas. The skill cannot proceed without a canvas. Do not offer to proceed without one.
If it exists: Read it and proceed to Check 2.
Check 2: Skip flags. Parse the ## Skip Flags section of the canvas.
If any flags exist: Present the constraints they create. See references/enforcement-prompts.md — preflight: skip flags. Let the user decide whether to proceed or fill gaps first with /fw:position.
If no flags: Proceed.
Before any user interaction, extract the claims inventory from the canvas. This is working material for the drift detector in Step 5. Follow the extraction rules in references/drift-detector-rules.md — Step 1.
Extract:
Store these internally. Do not present the raw inventory to the user yet — it surfaces in Step 2.
Search docs/copy-tests/ for existing files.
If any exist: Summarize briefly:
"You have [N] prior copy tests: [list artifact types and dates]. Want to review any before starting a new one?"
If none: Proceed.
If the user provided an argument (artifact type), validate it against the supported list: landing-page, pitch, bio, outreach, talk-abstract, case-study, one-liner. If valid, carry it into Step 1. If not recognized, ask them to pick from the list or describe a custom artifact.
If no argument provided, proceed to Step 1.
Work through all 6 steps in order. Each step builds on the previous one. Use references/enforcement-prompts.md for redirect and warning text when users try to skip steps or give vague answers.
"What are you writing and who will read it? Pick an artifact type — landing page, pitch, bio, outreach, talk abstract, case study, or one-liner. Then tell me who the specific reader is: not your ICP in general, but the actual person who will see this artifact."
If the artifact type was provided as an argument, confirm it and move to the reader question.
What you're looking for:
Enforcement triggers: see references/enforcement-prompts.md.

When this step is done: Artifact type confirmed, target reader described as a specific person with context of encounter.
Use AskUserQuestion to present what you've captured and confirm:
"Here's what I have: Artifact: [type]. Reader: [description]. Encounter: [where/how they see this]. Anything to adjust?"
"Here's what your positioning canvas says. Which claims matter most for this specific reader? You can't use all of them — pick 1-3 that would make this reader stop and pay attention."
Present the claims inventory from the canvas, organized as:
What you're looking for:
Enforcement triggers:
When this step is done: 1-3 prioritized claims with reader-specific rationale for each.
Use AskUserQuestion to confirm:
"Selected claims for [reader]: [numbered list with rationale]. Ready to define constraints, or want to adjust?"
"What constraints does this artifact need to work within? Think about: length, tone, format requirements, what comes before and after it in the reader's experience."
Read the default structure and length for the selected artifact type from references/artifact-templates.md. Present the defaults:
"For a [artifact type], the defaults are: [structure from template]. Approximately [length] words. Want to adjust any of these?"
What you're looking for:
Enforcement triggers:
When this step is done: Constraint set documented — structure, length, tone, context.
This is NOT an interactive step. Draft the artifact based on Steps 1-3.
Check: Is this a short-form artifact?
Short-form artifacts are one-liners, outreach subject lines, or any artifact where the total output is under ~50 words. For these, use Variation Mode (see below). For all other artifacts, use Standard Mode.
Process:
See references/artifact-templates.md.

Present the draft:
"Here's the first draft. Next I'll run the drift detector to check whether every claim traces back to your positioning canvas."
Show the full draft, then proceed immediately to Step 5. Do not ask for feedback yet — the drift detector runs first.
For short-form artifacts, a single draft is less useful than seeing multiple angles. Generate 5 variations, each taking a different approach to the same positioning claims.
Process:
See references/artifact-templates.md.

Present the variations:
"Here are 5 variations. Each takes a different angle on your positioning. The likelihood rating estimates how well each would achieve [the goal from Step 1] for [the reader from Step 1]."
| # | Variation | Angle | Likelihood | Why |
|---|-----------|-------|------------|-----|
| 1 | "[text]" | [which claim/frame it leads with] | High | [one-line rationale] |
| 2 | "[text]" | [angle] | Medium | [rationale] |
| 3 | "[text]" | [angle] | High | [rationale] |
| 4 | "[text]" | [angle] | Low | [rationale] |
| 5 | "[text]" | [angle] | Medium | [rationale] |
Then ask:
"Pick one to go with, or combine elements from multiple. I'll run the drift detector on your choice."
The user selects one (or describes a hybrid), and that selection becomes the draft for Step 5. All 5 variations are saved in the output file for reference.
Likelihood rating criteria:
This step runs automatically after every draft. It is not optional and cannot be skipped.
"Running drift detection — checking every value claim in this draft against your positioning canvas."
Follow the full procedure in references/drift-detector-rules.md:
Present the report:
Drift Detection Report
Grounded claims (traced to canvas):
- "[claim]" — traces to: [canvas source]. Match: [direct/derived].
Ungrounded claims (generic insertions):
- "[claim]" — no match in canvas.
Missing claims (selected but absent from draft):
- "[claim]" — selected in Step 2 but not in draft.
Skip flag impacts (if any):
- [flag] — [impact on specific claims].
Summary: [N] grounded, [N] ungrounded, [N] missing, [N] skip-impacted.
Enforcement triggers:
When this step is done: Every claim in the draft is either grounded or explicitly acknowledged as a departure. All selected claims are either present or explicitly deprioritized.
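As a rough illustration, the report's three buckets amount to a set comparison between draft claims, canvas claims, and the claims selected in Step 2. The function name is hypothetical and real matching is looser than string equality — the rules file distinguishes direct from derived matches — but the bucket logic is the same:

```python
def drift_report(draft_claims: set[str],
                 canvas_claims: set[str],
                 selected_claims: set[str]) -> dict[str, set[str]]:
    """Bucket claims the way the drift report does.

    grounded   -- in the draft and traceable to the canvas
    ungrounded -- in the draft but absent from the canvas
    missing    -- selected in Step 2 but absent from the draft
    """
    return {
        "grounded": draft_claims & canvas_claims,
        "ungrounded": draft_claims - canvas_claims,
        "missing": selected_claims - draft_claims,
    }
```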
"The draft is now grounded. Read it as [target reader from Step 1]. Does it make you want to [intended action]? What feels off?"
What you're looking for:
This step can loop: User gives feedback → skill revises → drift detector re-runs (Step 5) on the revision → review again. Each cycle produces a new drift report. The final report is what gets saved.
When this step is done: User approves the draft.
After user approval:
Save the file to docs/copy-tests/{artifact-type}-{slug}-{date}.md.

Frontmatter:
---
type: copy-test
artifact: [artifact-type]
tags: [artifact-type, target-reader keywords, relevant positioning tags]
target-reader: "[full reader description from Step 1]"
confidence: [ask the user: high, medium, or low]
created: [today's date YYYY-MM-DD]
source: "[session description]"
positioning-canvas: docs/positioning/current.md
positioning-canvas-date: [date from canvas frontmatter]
claims-used:
- "[claim 1 from Step 2]"
- "[claim 2 from Step 2]"
drift-flags: [number of ungrounded claims acknowledged as departures]
---
Body sections:
# [Artifact Type] — [Description]
## Target Reader
[Full description from Step 1]
## Claims Selected
[From Step 2: which claims were prioritized and why]
## Constraints
[From Step 3: length, tone, format, context]
## Draft
[The approved draft from Step 6]
## Variations (short-form only)
[If Variation Mode was used: all 5 variations with angles and likelihood ratings, plus which one was selected. Omit this section for standard-mode artifacts.]
## Drift Detection Report
[Final report from the last run of Step 5]
## Outcome Notes
[Empty. To be filled in after the copy is used in the real world.]
## Revision History
- [date]: Initial draft
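The save path pattern can be sketched as follows. The helper name and slugification rules are assumptions — the spec only fixes the {artifact-type}-{slug}-{date}.md shape:

```python
import datetime
import re

def copy_test_path(artifact_type: str, description: str) -> str:
    """Build the docs/copy-tests/{artifact-type}-{slug}-{date}.md path."""
    # Slug: lowercase, alphanumerics only, hyphen-separated.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    date = datetime.date.today().isoformat()  # YYYY-MM-DD
    return f"docs/copy-tests/{artifact_type}-{slug}-{date}.md"
```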
Use AskUserQuestion:
Question: "Copy test saved to docs/copy-tests/[filename]. What next?"
Options:
- /fw:copy again with the same canvas for a different reader or artifact type
- /fw:compound to capture what you learned about your messaging
- /fw:position to update the canvas (especially if drift detection revealed gaps)
- /fw:grow to test this copy in the real world

The canvas is the source of truth. Every value claim in the draft must trace to the canvas. If it's not in the canvas, it doesn't belong in the draft unless the user explicitly acknowledges the departure.
Drift detection is not optional. It runs after every draft and every revision. The user cannot skip it. This is the enforcement that makes copy grounded instead of generic.
Push for a specific reader, not a segment. "Early-stage SaaS founders" is a segment. "A technical founder who just raised seed, evaluating three tools this week, saw your name on Twitter" is a reader. The reader drives the draft.
The draft should feel constrained. If the user feels like they can't say everything they want, the skill is working. Copy that tries to say everything says nothing.
Use the user's language from the canvas. Don't polish the canvas language into marketing speak. If the canvas says "janky spreadsheet workaround," the draft should echo that energy.
Outcome notes matter. Encourage the user to come back and record what happened when the copy was used. That's where compounding starts — future /fw:copy sessions can search prior tests to see what worked and what didn't.
One artifact per session. Each run produces one artifact for one reader. To cover multiple readers or artifact types, run /fw:copy multiple times. This prevents the draft from trying to please everyone.
When invoked with disable-model-invocation context:
- Use --canvas <path> if provided; otherwise apply Canvas Path Resolution silently (single canvas: use it; multiple: use docs/positioning/current.md and flag the assumption; none: use docs/positioning/current.md).
- Set source: "pipeline mode — assumptions flagged".