Save session learnings to knowledge stores so future sessions get smarter. Captures positioning insights, copy test outcomes, pitch storyboard results, pricing outcomes, and cross-session patterns. Use after any /fw:position, /fw:copy, /fw:pitch, /fw:monetize, or /fw:grow session, or when returning with real-world results. Supports multi-canvas portfolios via the --canvas flag.
```
npx claudepluginhub untangling-systems/flywheel --plugin flywheel
```

This skill uses the workspace's default tool permissions.
<compound_context> #$ARGUMENTS </compound_context>
Capture what you learned so the next session starts smarter. Every positioning decision, copy test result, and cross-session insight gets saved where future /fw:position and /fw:copy runs can find it.
This skill writes learnings back to the positioning canvas (among other places). Path resolution order:
1. If the user passed `--canvas <path>` in their arguments, use that path.
2. With no `--canvas`, scan `docs/positioning/` for `.md` files (excluding `portfolio.md` and `archive/`). If exactly one exists, use it. If multiple exist, list them and ASK which canvas to append learnings to.
3. If none exist, default to `docs/positioning/current.md`.

All references below to `docs/positioning/current.md` should substitute the resolved canvas path. Note: this skill also reads `docs/positioning/archive/` — that path stays fixed; it's shared history across canvases.
- `/fw:position` session — capture why you made specific positioning choices
- `/fw:copy` session — capture what you learned about your messaging

Check what happened recently by searching the knowledge stores:
Read the resolved canvas path (apply Canvas Path Resolution above) — check the created date in frontmatter. If created today or referenced in conversation, this is likely a post-positioning session.
Search docs/copy-tests/ — check for files created today or referenced in conversation. If found, this is likely a post-copy session.
Search docs/pitch-storyboards/ — check for files created today or referenced in conversation. If found, this is likely a post-pitch session.
Search docs/pricing/ — check current.md and any wtp-*.md files created or updated today. If found, this is likely a post-monetize session (or an outcome recording for real pricing data — conversion rates, churn by tier, upsell rates).
Search docs/growth-experiments/ — check for files created today or referenced in conversation. If found, this is likely a post-growth session.
Check arguments — if the user said "outcome" or described real-world results, this is an outcome recording session.
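The detection pass above amounts to checking which stores changed today. A hedged sketch, using file modification time as a proxy for "created or referenced today" (the function names are assumptions; the canvas check from step 1 is handled separately via frontmatter):

```python
# Guess likely session types from which knowledge stores were
# touched today. Store paths come from the skill text above.
import datetime
from pathlib import Path

STORES = {
    "post-copy": "docs/copy-tests",
    "post-pitch": "docs/pitch-storyboards",
    "post-monetize": "docs/pricing",
    "post-growth": "docs/growth-experiments",
}

def touched_today(store: str) -> bool:
    today = datetime.date.today()
    return any(
        datetime.date.fromtimestamp(p.stat().st_mtime) == today
        for p in Path(store).glob("*.md")
    )

def guess_session_types() -> list:
    return [label for label, store in STORES.items()
            if Path(store).is_dir() and touched_today(store)]
```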
If context is ambiguous, ask:
"What are you compounding? Pick the session type:
- Positioning insights — why you made specific choices in the canvas
- Copy learnings — what you learned about messaging during a copy session
- Pitch storyboard learnings — what you learned about the narrative during a pitch session
- Pricing learnings — what you learned about WTP, tiers, model choice, or the pricing corridor
- Outcome recording — real-world results from copy, pitches, pricing, or experiments you've shipped
- Cross-session pattern — something you've noticed across multiple sessions"
Before capturing new learnings, search the relevant store for existing insights that might overlap or conflict.
For positioning insights: Read docs/positioning/current.md and any files in docs/positioning/archive/. Look for annotations or learnings sections.
For copy learnings: Read recent files in docs/copy-tests/. Check their Outcome Notes sections and Drift Detection Reports for patterns.
For cross-session patterns: Search across all the knowledge stores for files mentioning the same claims, segments, or attributes.
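A minimal cross-store search might look like this sketch (the helper name and the exact store list are assumptions; extend the list to cover pricing and growth experiments as needed):

```python
# Find every markdown file across the stores that mentions a given
# claim, segment, or attribute (case-insensitive substring match).
import re
from pathlib import Path

STORES = ["docs/positioning", "docs/copy-tests", "docs/pitch-storyboards"]

def find_mentions(term: str) -> list:
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return [p for store in STORES if Path(store).is_dir()
            for p in Path(store).rglob("*.md")
            if pattern.search(p.read_text())]
```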
The compound skill has a lighter touch than /fw:position or /fw:copy. Three steps, not six. The friction here is in specificity — making the user articulate what's non-obvious — not in sequence enforcement.
The question depends on the session type.
After positioning:
"What did you learn about your positioning that wasn't obvious before this session? I'm not asking what you decided — that's in the canvas. I'm asking what surprised you, what was harder than expected, or what changed your thinking."
After copy:
"What did you learn about your messaging? Which claims were easy to write grounded copy for, and which kept drifting to generic language? What does the drift detector report tell you about your positioning?"
After shipping (outcome recording):
"What happened when you used this copy in the real world? Did it land? What response did you get? Be specific — 'it worked well' isn't a learning. 'The one-liner got 3 follow-up questions at the conference but no one clicked the landing page CTA' is."
Cross-session pattern:
"What pattern have you noticed? State it as a rule: 'When we [do X], [Y happens].' Then tell me what evidence supports it — which sessions or artifacts demonstrate the pattern?"
What you're looking for:
Enforcement triggers:
Search the knowledge stores for prior learnings or decisions that the new learning might contradict or supersede.
Conflict detection:
Does the new learning contradict a decision already recorded in current.md or the archive?

If a conflict is found:
"This new learning conflicts with a prior decision:
Prior: [quote the prior decision with its source file and date]
New: [state the new learning]
Options:
- Update the prior — the new learning supersedes it. I'll update [file] and note the change.
- Archive the prior — keep the old decision as history, replace with the new learning.
- Keep both — the prior was right in its context, the new learning applies to a different context. Note the distinction.
- Discard the new — on reflection, the prior decision still holds."
Use AskUserQuestion to get the user's decision. Do not resolve conflicts automatically.
If no conflict: Proceed to Step 3.
Where and how the learning gets saved depends on the session type.
Add a ## Learnings section to docs/positioning/current.md (if it doesn't already have one), or append to the existing learnings section.
Format:
```markdown
## Learnings

### [Date] — [One-line summary]

**Insight:** [The non-obvious learning]
**Evidence:** [What happened that revealed this]
**Implication:** [How this should change future sessions]
```
If the learning suggests the canvas itself should change, note that explicitly:
"This learning suggests updating [section] of the canvas. Want to run `/fw:position` to revise, or note it as a future revision?"
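Appending an entry in that format could look like this sketch (the helper name is hypothetical; it creates the `## Learnings` header only if the canvas lacks one):

```python
# Append a dated learnings entry to the resolved canvas, matching
# the section format shown above.
import datetime
from pathlib import Path

ENTRY = """### {date} — {summary}

**Insight:** {insight}
**Evidence:** {evidence}
**Implication:** {implication}
"""

def append_learning(canvas: Path, summary, insight, evidence, implication):
    text = canvas.read_text()
    if "## Learnings" not in text:
        text = text.rstrip() + "\n\n## Learnings\n"
    entry = ENTRY.format(
        date=datetime.date.today().isoformat(),
        summary=summary, insight=insight,
        evidence=evidence, implication=implication,
    )
    canvas.write_text(text.rstrip() + "\n\n" + entry)
```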
Two possible targets:
If about a specific copy test: Update the ## Outcome Notes section of the relevant file in docs/copy-tests/.
Format:
```markdown
## Outcome Notes

- [Date]: [What happened when this copy was used]
- Result: [Specific outcome — clicks, responses, conversions, qualitative feedback]
- Learning: [What this tells us about the positioning claims used]
- Next action: [What to do differently next time]
```
If a general messaging insight: Add to the most recent copy test's Outcome Notes with a note that it's a cross-artifact insight, or append to the positioning canvas's Learnings section if it's really about positioning, not copy.
Create a new file in docs/positioning/ (not current.md, not archive/):
Filename: docs/positioning/pattern-{slug}-{date}.md
```markdown
---
type: positioning-pattern
tags: [relevant tags]
confidence: [ask user: high, medium, low]
created: YYYY-MM-DD
source: [session description]
evidence-from:
  - [list of files/sessions that support this pattern]
---

# Pattern — [Title]

## The Pattern

[State as a rule: "When we [do X], [Y happens]."]

## Evidence

[Which sessions, artifacts, or real-world results support this]

## Implication

[How future /fw:position and /fw:copy sessions should account for this]
```
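The filename convention can be sketched as follows (the `slugify` helper is a crude stand-in for illustration, not part of the skill; `confidence` must still come from the user):

```python
# Derive the pattern file path: docs/positioning/pattern-{slug}-{date}.md
import datetime
import re
from pathlib import Path

def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def pattern_path(title: str) -> Path:
    date = datetime.date.today().isoformat()
    return Path("docs/positioning") / f"pattern-{slugify(title)}-{date}.md"
```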
Update the specific artifact in docs/copy-tests/, docs/pitch-storyboards/, or docs/growth-experiments/.
For copy tests: Update the ## Outcome Notes section with the recorded results.
For pitch storyboards: Add or update an ## Outcome Notes section at the bottom of the storyboard file. Record:
Also update the frontmatter: set last-updated to today's date.
For pricing decisions: Update docs/pricing/current.md directly. Pricing outcomes come in several forms:
Record each in an `## Outcome Notes` section with specific numbers. Also update the frontmatter: set `last-updated` to today's date and consider bumping `confidence` up or down based on the outcome.
For WTP interview notes: Update the specific docs/pricing/wtp-{slug}-{date}.md file. Fill in the "Interview Notes" section with the captured numbers, direct quotes, and surprise signals. Set frontmatter status: completed. Then run /fw:monetize revise [section] on the sections the interview materially changed.
For growth experiments: Update both:
- The `## Result` and `## Learnings` body sections, with the recorded results
- The frontmatter: set `status: completed` and populate `result:` with a brief summary of the outcome

This frontmatter update is critical — the growth-researcher agent uses the `status` and `result` fields to classify experiments. Without it, completed experiments will keep appearing as pending in future `/fw:grow` sessions.
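For the simple `key: value` case, that frontmatter update might look like this sketch (an assumption-laden illustration; a real implementation should parse the YAML properly):

```python
# Mark a growth experiment completed: rewrite the status: and result:
# lines in the file's --- delimited frontmatter, leaving the body intact.
from pathlib import Path

def mark_completed(path: Path, result_summary: str) -> None:
    lines = path.read_text().splitlines()
    assert lines[0] == "---", "expected YAML frontmatter"
    end = lines.index("---", 1)
    front = [l for l in lines[1:end]
             if not l.startswith(("status:", "result:"))]
    front += ["status: completed", f"result: {result_summary}"]
    path.write_text("\n".join(["---", *front, "---", *lines[end + 1:]]) + "\n")
```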
If the user doesn't specify which artifact, list recent ones:
"Which artifact are you recording outcomes for? [list recent files with dates and types]"
Use AskUserQuestion:
Question: "Learning saved to [file]. What next?"
Options:
- `/fw:position` to revise positioning based on what you learned
- `/fw:copy` to test the updated messaging
- Run `/fw:compound` again later, when you have more to record

Learnings must be non-obvious. If it's already captured in the canvas or copy test, it doesn't need to be saved again. The compound skill captures what the other skills DON'T — the meta-insights about the process and patterns.
Specificity over volume. One sharp insight ("the 'janky spreadsheet' framing gets more engagement than the polished version") beats five vague ones ("we learned a lot about our messaging").
Conflicts are valuable. When a new learning contradicts a prior decision, that's signal — it means the positioning is evolving. Surface the conflict explicitly. Don't quietly overwrite.
Outcome notes are gold. The most valuable learnings come from real-world use. Encourage users to come back after shipping copy and record what happened. This is where the compounding loop closes — future /fw:copy sessions can reference what actually worked.
Don't over-save. Not every session produces a learning worth saving. If the user did a straightforward positioning session and the canvas captures everything, it's fine to say "the canvas already captures this — nothing extra to compound."
Cross-session patterns need evidence. A pattern based on one session is a hypothesis. A pattern based on three sessions is worth saving. Ask for the evidence.
When invoked with disable-model-invocation context:
- Use `--canvas <path>` if provided; otherwise apply Canvas Path Resolution silently (single canvas: use it; multiple: use `docs/positioning/current.md` and flag the assumption; none: use `docs/positioning/current.md`)
- Tag saved learnings with `source: "pipeline mode — review recommended"`