Shape a strategic bet and decompose it into spec-able stories — multi-dimensional value per story, dependency graphs, cross-cutting concerns, phasing with rationale. Use when: decomposing a bet into stories, structuring a product direction into workable pieces, mapping dependencies across workstreams, preparing stories for /stories or /spec.
Shape a strategic bet and decompose it into stories within a PROJECT.md. Each story carries multi-dimensional value with intersection reasoning, dependencies are surfaced, connections are mapped, and phasing has evidence-based rationale.
The workflow follows three phases: identify outcomes (what's true "when we're done"), refine stories (deep dive per outcome with product+technical intertwined), and synthesize across stories (delivery groupings, phasing, validation). The bet gets refined THROUGH decomposition — refining what the bet means and breaking it into stories are one conversation, not two sequential steps.
Load (on entry): Load /structured-thinking skill. If unavailable (Skill tool returns error), stop and inform the user: "The /projects skill requires /structured-thinking for shared vocabulary (SCR format, disambiguation protocol, value dimensions, decision taxonomy). Cannot proceed without it."
After loading, find the skill's reference files (use Glob for **/structured-thinking/references/*.md). Read:
- references/challenge-posture.md (co-driver stance, anti-sycophancy, investigate-vs-judgment boundary)
- references/disambiguation-protocol.md (the 5-step protocol: challenge/probe/surface/explore/verify)
- references/extraction-protocol.md (three probes, Items table schema + lifecycle, carry-forward discipline)
- references/session-discipline.md (investigation escalation ladder, multi-answer parsing, progress scorecard, interaction cadence)

Before starting any work, create a task for each phase using TaskCreate with addBlockedBy to enforce ordering.
Mark each task in_progress when starting and completed when done. On re-entry, check TaskList first and resume from the first non-completed task.
If input is rich (structured bet file with SCR, constraints, multi-dimensional value), Phase 1 compresses to validation rather than full grounding.
Create the artifact infrastructure before any substantive work. This ensures progressive writing has a home from the first finding.
Create:

- <projects-dir>/<project-name>/PROJECT.md with section headers from the output template (empty — populated progressively)
- evidence/ directory
- meta/_changelog.md with initial entry: date, bet description, session start

Where to save:
| Priority | Source |
|---|---|
| 1 | User says so in the current session |
| 2 | Env var CLAUDE_PROJECTS_DIR (check for resolved-projects-dir in the SessionStart hook output at the top of your conversation context; if not present, fall back to priority 3-5) |
| 3 | AI repo config (CLAUDE.md, AGENTS.md, etc.) declares projects-dir: |
| 4 | Default (in a repo): <repo-root>/projects/<project-name>/PROJECT.md |
| 5 | Default (no repo): ~/.claude/projects/<project-name>/PROJECT.md |
Directory uses kebab-case semantic naming (e.g., projects/dx-growth-loop/).
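The priority order above can be sketched as a simple resolver. This is an illustrative sketch, not the skill's actual tooling; the function names and parameters (`user_override`, `repo_config_dir`, `repo_root`) are hypothetical stand-ins for the five sources.

```python
import os
import re
from pathlib import Path

def kebab_case(name: str) -> str:
    """Normalize a project name to kebab-case semantic naming."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def resolve_projects_dir(user_override=None, repo_config_dir=None, repo_root=None):
    """Walk the save-location priority list from highest to lowest."""
    if user_override:                    # 1. user said so in the current session
        return Path(user_override)
    env = os.environ.get("CLAUDE_PROJECTS_DIR")
    if env:                              # 2. environment variable
        return Path(env)
    if repo_config_dir:                  # 3. projects-dir: from CLAUDE.md / AGENTS.md
        return Path(repo_config_dir)
    if repo_root:                        # 4. default inside a repo
        return Path(repo_root) / "projects"
    return Path.home() / ".claude" / "projects"   # 5. default with no repo
```

Usage would look like `resolve_projects_dir(repo_root="/src/app") / kebab_case("DX Growth Loop")`, yielding `/src/app/projects/dx-growth-loop`.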
Accept the bet — from a bet file, Google Doc, or verbal direction. Build the world model, then enumerate outcomes that pass the quality gate.
ONE THOUGHT RULE: Capture the user's initial decomposition thinking BEFORE the AI proposes changes. The user's first articulation, before AI contamination, often contains unique insight. Mirror it back: "So you're thinking [restate]. Let me challenge that after I build some context."
Triage the input:
Read from /structured-thinking: references/problem-framing.md (SCR format, 5-probe stress test at bet level).
Dispatch /worldmodel (full depth) as a subagent: spawn a general-purpose subagent via the Agent tool. Include --depth full in the prompt text:
"Before doing anything, load /worldmodel skill. Run with --depth full on [topic]. [Include bet description and any user-provided context.]"
The AI needs its own grounding to challenge assumptions and ask informed questions — regardless of the user's expertise. Worldmodel runs as a parallel subagent while you capture the user's initial thinking.
If /worldmodel is unavailable: Fall back to direct investigation — Read/Grep/Glob for codebase, WebSearch for web context, read the reports catalogue manually. Note: "automated grounding not performed — manual investigation used."
Begin progressive writing: Read from /structured-thinking: references/artifact-discipline.md (progressive writing, evidence conventions). From this phase onward, write to PROJECT.md incrementally. At any point, the user can stop and PROJECT.md is in a valid (if incomplete) state.
Write strategic context to PROJECT.md.
Read from /structured-thinking: references/value-dimensions.md (dimension-trace diagnostic, intersection reasoning, value connections).
Map workstreams and dependencies. Surface the M:N relationship between the bet and potential workstreams. Identify cross-cutting dependencies — auth, infrastructure, shared APIs that thread through multiple workstreams. These are not stories; they're constraints that affect multiple stories.
For each workstream, probe multi-dimensional value: Run the dimension-trace diagnostic — does this workstream trace to at least one value dimension (customer, platform, GTM, internal)? Are the intersection constraints visible? Probe across all four dimensions, but only include dimensions that genuinely apply.
Build the dependency graph: What depends on what? What's cross-cutting? What's truly independent?
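One minimal way to make those three questions concrete is to invert the dependency map and classify nodes. The story names and data here are hypothetical, for illustration only.

```python
from collections import defaultdict

# story -> set of stories it depends on (illustrative data)
deps = {
    "sdk-auth": set(),
    "marketplace-listing": {"sdk-auth"},
    "usage-dashboard": {"sdk-auth"},
    "docs-refresh": set(),
}

# invert: story -> set of stories that depend on it
dependents = defaultdict(set)
for story, needs in deps.items():
    for need in needs:
        dependents[need].add(story)

# cross-cutting: threads through two or more other stories
cross_cutting = [s for s in deps if len(dependents[s]) >= 2]
# truly independent: depends on nothing, nothing depends on it
independent = [s for s in deps if not deps[s] and not dependents[s]]

print(cross_cutting)  # ['sdk-auth']
print(independent)    # ['docs-refresh']
```

A cross-cutting node like `sdk-auth` is exactly the kind of item the skill says to surface as a constraint rather than bury inside one story.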
Challenge with the "average" warning: "This is what a typical decomposition would look like — what do you know about your product/market/team that makes a different cut better?"
Investigate gaps autonomously. When the user can't provide multi-dimensional value or dependency information, check the codebase, existing reports, and web before accepting the gap. Only flag as an assumption after investigation fails. Mark agent-inferred content with provenance: "Inferred from [source] — verify with [owner]."
For substantial gaps in strategic rationale or dimensional reasoning, dispatch /analyze as a subagent (Pattern C). Include the worldmodel output in the prompt and tell it to skip its own worldmodel phase — subagents can't nest further subagents, so /analyze can't dispatch /worldmodel itself. For external evidence gaps, dispatch /research with --headless in the prompt (research's scoping gate needs auto-confirmation since no human is present in the subagent). For deep codebase tracing, dispatch /explore. If unavailable (Skill tool returns error), skip and document: "deep codebase tracing not performed."
Write cross-cutting concerns section to PROJECT.md.
This is the Phase 1 deliverable: a set of outcomes that pass the quality gate. This step is methodical, not fast — wide bets with many beneficiary groups need thorough enumeration. Cross-horizontal pattern-finding happens here.
For each workstream, identify the outcomes — what's true "when we're done."
Run systematic extraction: Apply the three probes from extraction-protocol.md at bet level:
Capture items in the Items table as they surface. Follow the load-bearing heuristic: track formally when the item creates precedent, is customer-facing, is foundational tech, is a one-way door, is cross-cutting, or creates divergence. Resolve implementation details in conversation.
P0/P2 triage: Every item is either P0 (must resolve in this project) or P2 (explicitly deferred with context). No P1. If uncertain, default to P0. Present triage to user: "Here's what I think is P0 vs P2. Adjust?"
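The triage rule reduces to a two-branch predicate: an item is P2 only when it is explicitly deferred with context, and everything else (including the uncertain case) defaults to P0. A sketch with illustrative field names:

```python
def triage(item: dict) -> str:
    """No P1 exists: P2 requires explicit deferral with context, else P0."""
    if item.get("deferred") and item.get("defer_context"):
        return "P2"
    return "P0"  # covers the uncertain case: default to P0

assert triage({"name": "auth pattern"}) == "P0"
assert triage({"name": "i18n", "deferred": True,
               "defer_context": "post-launch"}) == "P2"
# deferred without context still lands in P0 — deferral must carry its rationale
assert triage({"name": "billing", "deferred": True}) == "P0"
```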
Before proceeding to story refinement, every outcome must pass the "when we're done" test:
Examples:
The gate is "landscape-complete" (all major beneficiary groups covered, cross-cutting patterns visible), not "exhaustive" (every possible outcome enumerated). Phase 2 refinement may surface new outcomes that feed back to Phase 1 via the existing upward-cascade mechanism.
Phase 1 output: Validated bet framing, outcomes passing the quality gate, initial Items table, cross-cutting concerns, dependency graph.
Deep dive per outcome through the fractal loop. Product and technical details are intertwined — this is where exact product details get worked through alongside technical constraints.
Read from /structured-thinking: references/decision-taxonomy.md (temporal non-goals, confidence vocabulary, resolution statuses).
Read references/quality-examples.md from this skill's directory for incorrect/correct pairs. Use these to calibrate decomposition quality.
Decompose each outcome into stories through the fractal loop. At each story level:
The Items table, evidence files, and changelog are updated continuously as stories are refined — not deferred to the end.
- extraction-protocol.md: list without filtering during extraction, prioritize after (extraction-protocol.md §8).
- artifact-discipline.md: use the baseline format (evidence/<filename>.md) after claims derived from investigation. For cross-artifact evidence (e.g., a /research report), use (reports/<name>/REPORT.md).
- meta/_changelog.md: log updates as stories are refined.

Follow the session discipline from session-discipline.md:
A story is the right size when:
Story-level non-goals with temporal tags, falsifiable invariants, and assumptions with verification plans are optional enrichment — a downstream sharpening skill elicits these. Include them when they surface naturally during decomposition; don't exhaustively probe for them on every story. Bet-level non-goals (in the Strategic context section of PROJECT.md) are always included.
Deepen on request: When the user flags a story as critical ("this is the auth foundation everything depends on — let's go deeper"), apply extra completeness criteria from the /structured-thinking references already loaded: push for falsifiable invariants (decision-taxonomy.md), probe temporal non-goals, draft acceptance criteria, surface assumptions with confidence + verification plans. This is user-initiated, not default.
Scope coherence: When a story fails the 2-3 sentence test, split it. When a workstream proves to be one story, merge up.
Phase 2 output: Stories with multi-dimensional value, connections, and constraints at project-grade quality. Items table populated with all items surfaced during refinement. Each story appended to PROJECT.md as it's decomposed.
Derive delivery groupings, phasing, and validation from the refined stories.
Before sequencing, verify the decomposition is complete:
Every P0 item in the Items table must be resolved (Decided, Parked with context, or Assumed with confidence + verification plan). If P0 items remain Open or Exploring, return to Phase 2 to resolve them before phasing.
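This gate can be stated as a mechanical check over the Items table. The statuses match the template's vocabulary; the rows below are illustrative.

```python
# statuses that count as resolved for a P0 item
RESOLVED = {"Decided", "Parked", "Assumed"}

items = [
    {"id": "PQ1", "priority": "P0", "status": "Decided"},
    {"id": "TQ1", "priority": "P0", "status": "Exploring"},
    {"id": "XQ1", "priority": "P2", "status": "Parked"},
]

unresolved_p0 = [i["id"] for i in items
                 if i["priority"] == "P0" and i["status"] not in RESOLVED]

if unresolved_p0:
    print(f"Return to Phase 2 before phasing: {unresolved_p0}")  # ['TQ1']
```

Note the check is only on P0 rows — a P2 item in any status never blocks phasing, because deferral with context is itself a resolution.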
Identify which stories must ship together — shared infrastructure, sequential dependencies, or coherent user experiences that can't be split across releases.
Default layering (calibrated for ~10 engineers, 2-4 barrels — adjust if team shape changes significantly): Start with capacity-first, then layer risk-first (de-risk uncertain work early), then dependency-first (unblock parallel work). Override when the dominant constraint clearly dictates otherwise.
| Override condition | Use instead |
|---|---|
| Technical unknowns dominate | Risk-first (riskiest assumption test: which assumption, if wrong, kills the project?) |
| Time is the hard constraint | Appetite-first (fixed time, variable scope) |
| Unvalidated market/user journey | Customer-journey-first (thinnest end-to-end slice for feedback) |
| Business pressure for quick wins | Value-first (highest-value stories first) |
Read references/phasing-heuristics.md for the full framework (6 heuristics, validation tests, research context).
- Now: Unblocks other work, resolves highest uncertainty, or delivers highest value. Dependencies from Later→Now are allowed; Now→Later is not.
- Next: Depends on Now completing, or high-value but not the dominant constraint.
- Later: Valuable but can wait. Each with a trigger to promote.
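The dependency-direction rule (a phase may depend on earlier phases, never on later ones) can be checked mechanically. Phases and story names here are hypothetical:

```python
ORDER = {"Now": 0, "Next": 1, "Later": 2}

phase = {"sdk-auth": "Now",
         "marketplace-listing": "Next",
         "usage-dashboard": "Later"}

# story -> stories it depends on (illustrative)
deps = {"marketplace-listing": {"sdk-auth"},
        "usage-dashboard": {"marketplace-listing"}}

violations = [
    (story, dep)
    for story, needs in deps.items()
    for dep in needs
    if ORDER[phase[dep]] > ORDER[phase[story]]  # depends on later-phased work
]

assert not violations  # earlier phases never depend on later ones
```

If `sdk-auth` were moved to Later while `marketplace-listing` stayed in Next, the check would flag `("marketplace-listing", "sdk-auth")` as a violation.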
Name the heuristic and the evidence for each phasing decision — not just "this feels like Now."
Identify attractive nuisances — things that look like they should be in scope but would derail the project. For each: why it's tempting, why it's a rabbit hole, what to do if encountered during implementation.
If this project fails, what's the most likely cause? What are we assuming that could be wrong?
Simulate — can someone take each story to a sharpening process without calling you back? If they'd need to ask "what's the platform dimension?", "what depends on this?", or "why is this Now and not Later?" — the decomposition isn't done.
Finalize PROJECT.md — reorder stories into Now/Next/Later, add phasing rationale, rabbit holes, pre-mortem. Log completion in meta/_changelog.md.
When invoked with bare direction (no bet file, no Google Doc), expand Phase 1 grounding:
/worldmodel dispatch (full depth) happens in Phase 1 regardless of input type — no additional dispatch is needed for standalone mode.
No headless mode. This skill requires interactive human input (strategy interrogation, challenge steps, fractal loop probing). Defer headless support to a future version if orchestrator invocation is needed.
# Project: [verb-first title]
**Last verified:** YYYY-MM-DD <!-- date this PROJECT.md was last verified as current -->
**Traces to:** [bet file or strategic direction]
**Appetite:** [if bounded — from bet or user-specified]
## Strategic context
[Why this bet. SCR at bet level. Multi-dimensional value of the overall bet.
Claims from investigation include evidence references (evidence/<topic>.md).
What we're NOT doing (bet-level non-goals with temporal tags).]
## Items
| ID | Item | Type | Priority | Status | Notes |
|---|---|---|---|---|---|
| PQ1 | ... | Product | P0 | Decided | Decision + rationale (evidence/auth-patterns.md) |
| TQ1 | ... | Technical | P0 | Exploring | What's been found so far (evidence/api-surface.md) |
| XQ1 | ... | Cross-cutting | P2 | Parked | Options + why not now + trigger |
## Cross-cutting concerns
[Dependencies that thread through multiple stories — not stories themselves,
but infrastructure, patterns, or constraints that affect multiple stories.
Each with: what it is, which stories it touches, how it constrains them.]
## Stories
### Now
[Phasing rationale: why these are Now — name the heuristic and evidence.]
#### [Verb-first story title]
[What to build — 1-3 sentences.]
**Value:** [Multi-dimensional articulation with intersection reasoning.
"This enables X (customer) AND establishes the pattern for Y (platform)
BUT must be done before Z (constraint)."]
**Constraints:** [What bounds the solution space]
**Lateral:** [What siblings depend on or share with this]
**Forward:** [What future work this enables]
#### [Next story...]
### Next
[Phasing rationale: why Next, not Now.]
[Stories in same format...]
### Later
[Phasing rationale: why Later. Each with a trigger to promote.]
[Stories in same format...]
## Rabbit holes
[Attractive nuisances. Each with: why tempting, why a rabbit hole,
what to do if encountered during implementation.]
## Pre-mortem
[If this project fails, what's the most likely cause?
What are we assuming that could be wrong?]
## Evidence & References
### Evidence Files
- [evidence/<file>.md](evidence/<file>.md) — [one-line: what it contains]
### Research Reports
- [reports/<name>/REPORT.md](reports/<name>/REPORT.md) — [what it covers]
### Code Repositories
- [org/repo](URL) — [what was examined]
### External Sources
- [Title](URL) — [brief description]
### Upstream Artifacts
- [<bet file or strategic direction>](<path>) — source bet
| Anti-pattern | What it looks like | Correction |
|---|---|---|
| Technical-layer decomposition | Stories map 1:1 to infrastructure layers ("define type system," "enable JWT plugin," "design middleware handler") instead of user outcomes | Reframe: "When we're done, [who] can [what]?" Each story names a beneficiary + observable change. Technical layers surface as cross-cutting concerns or Phase 3 delivery groupings. See quality-examples.md. |
| Separate tracking tables | Creating separate Open Questions and Decision Log and Assumptions tables instead of using the unified Items table | One Items table. Status column distinguishes item types: Open/Exploring/Blocked for questions under investigation, Decided for resolved decisions, Assumed for temporary scaffolding. |
| Proposing changes before capturing user's thinking | AI immediately suggests a decomposition | ONE THOUGHT RULE: capture the user's first articulation, mirror it back, THEN challenge. |
| Accepting claims without verification | "The auth layer supports this" → proceed | Check the codebase. Worldmodel grounding is for this. |
| Accepting "I don't know" without investigation | User can't provide multi-dimensional value → flag as assumption | Investigate first: codebase, reports, web. Dispatch /analyze for substantial gaps. Only flag after investigation fails. |
| Dimension lists without intersection reasoning | "Customer: SDK improvements. Platform: API patterns." | Connect them: "SDK improvements (customer) AND the API pattern they establish (platform) — the pattern is load-bearing because the marketplace story needs it." |
| Hidden cross-cutting dependencies | Auth threads through 3 stories but isn't surfaced | Phase 1 explicitly maps dependencies. If you discover one during Phase 2, escalate — don't bury it in a story's constraints. |
| Accepting the "average" decomposition | Typical workstream breakdown without challenging whether it fits THIS product/team | Ask: "This is what most teams would do — what do YOU know that makes a different cut better?" |
| Phasing by gut feel | "This feels like Now" without evidence | Name the heuristic and the evidence. "This is Now because it unblocks 3 other stories (dependency-first) and resolves the highest-risk assumption (risk-first)." |
| Cascade thrashing | Phase 2 keeps revising Phase 1 indefinitely | Cascade budget: max 2 bet-level reframes. After that, remaining issues → pre-mortem items. |
| Exhaustively sharpening every story | 5+ minutes per story probing invariants, non-goals, AC | Project-grade is the default: value + constraints + connections. Deepen only on user request. |
| Attempting technical architecture | Proposing API shapes, data models, or system design | Stop. This skill captures the problem space. A specification process investigates solutions. |
| Losing work to session interruption | 90 minutes of decomposition, no artifact written | Progressive writing from Phase 1. PROJECT.md grows during the session. The user can stop at any point. |
| Items table bloat | 40+ items where most are implementation details | Apply the load-bearing heuristic: track formally only when the item creates precedent, is customer-facing, is foundational tech, is a one-way door, is cross-cutting, or creates divergence. Resolve everything else in conversation. |