Resolves technical HOW decisions like architecture choices, technology selection, and design patterns from clear specs. Provides implementation roadmaps and trade-off analysis before coding.
npx claudepluginhub saadshahd/moo.md --plugin hope

This skill uses the workspace's default tool permissions.
Decide HOW before building. Shape works dimension by dimension — each aspect that needs a HOW decision gets expert-informed choices the user selects from.
For tasks with a single dimension and an obvious approach, collapse the flow: go straight from Step 1 to Step 5.
Parse the intent brief (or the user's spec) and identify the dimensions that need a HOW decision.
Prior art gate — do not proceed to Step 2 until prior art has been searched with actual tool calls. Reasoning about what probably exists is not prior art search. Document findings as 5-10 brief bullets; empty findings ("nothing found") must be stated explicitly.
Dichotomy of control — Ask via AskUserQuestion: what's actually within the user's control? Scope the shaping to match their timeline and accountability. Work that depends on other teams, external approvals, or infrastructure they can't change should be documented as externalities, not shaped as if the user can decide them.
Run the entry audit internally — do not output it to the user.
Only surface failures via AskUserQuestion. If all checks pass, proceed silently. A spec that can't be verified isn't a spec.
Present findings and proposed dimensions as brief scannable bullets.
Infer the domain type — high-stakes (security, financial, compliance) or exploratory (prototypes, spikes). State your classification as a brief line. High-stakes domains get deeper analysis per dimension. Exploratory domains get lighter treatment. Only ask via AskUserQuestion if genuinely ambiguous.
Present dimensions to the user via AskUserQuestion (multiSelect): "Which aspects matter for this task?"
Not every task needs every dimension. The user decides.
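The dimension-scoping prompt above might be sketched as the following payload — a minimal sketch only; the AskUserQuestion field names (question, header, options, multiSelect) are assumptions, not a verified schema:

```python
# Hypothetical AskUserQuestion payload for scoping dimensions.
# Field names are assumed, not taken from a verified schema.
dimension_question = {
    "question": "Which aspects matter for this task?",
    "header": "Dimensions",
    "multiSelect": True,  # the user may scope several dimensions at once
    "options": [
        # Example dimensions; the real list comes from Step 1 findings.
        {"label": "Persistence", "description": "How state is stored and migrated"},
        {"label": "Error handling", "description": "Failure modes and recovery strategy"},
        {"label": "API shape", "description": "Public surface area and versioning"},
    ],
}
```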
Infer reversibility per scoped dimension: Type 2 (reversible — move fast) or Type 1 (irreversible — deep analysis). State classifications as a brief line. Only ask if ambiguous. This sets the depth of shaping per dimension.
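How domain type and reversibility combine into shaping depth can be pictured as a small decision function — illustrative only; the skill makes this call by judgment, and treating high-stakes Type 2 decisions as "deep" is an interpretation, not stated above:

```python
def shaping_depth(domain: str, decision_type: int) -> str:
    """Pick analysis depth from domain stakes and decision reversibility.

    domain: "high-stakes" or "exploratory"
    decision_type: 1 (irreversible) or 2 (reversible)
    """
    if decision_type == 1:
        # Irreversible choices always warrant deep analysis.
        return "deep"
    # Reversible choices: depth follows the stakes of the domain.
    return "deep" if domain == "high-stakes" else "light"
```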
For each scoped dimension:
Consult experts — Invoke the consult skill via the Skill tool for expert perspectives on this dimension. Frame the prompt so consult produces distinct approaches, not a single recommendation:
Skill: hope:consult, args: "Panel: For [dimension] in [context], what are 2-3 distinct approaches? Surface where experts disagree — I need choices with tradeoffs, not consensus."
Take consult's output and reshape it into AskUserQuestion choices (see step 3 below). State who was consulted in a brief line.
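Reshaping consult's output into choices might look like this sketch — the approach fields (name, summary, tradeoff) are hypothetical stand-ins for whatever structure consult actually returns, and the option field names are likewise assumptions:

```python
def approaches_to_options(approaches: list[dict]) -> list[dict]:
    """Turn distinct expert approaches into AskUserQuestion options.

    Each approach is assumed to carry name/summary/tradeoff keys;
    option field names (label/description) are likewise assumptions.
    """
    options = [
        {
            "label": a["name"],
            "description": f'{a["summary"]} Tradeoff: {a["tradeoff"]}',
        }
        for a in approaches
    ]
    # Keep the escape hatch the shaping loop relies on.
    options.append({
        "label": "Explore further",
        "description": "Regenerate choices with deeper analysis",
    })
    return options
```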
Reason through — apply relevant techniques internally based on domain type and dimension. These techniques are your reasoning toolkit: use them, then present the insights they produce in plain language.
Present choices — use AskUserQuestion with detail panels, one option per distinct approach, each stating its tradeoffs.
Batch non-conflicting dimensions into a single AskUserQuestion. Separate dimensions where one choice constrains another.
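Batching can be pictured as a greedy grouping over a conflict set — a sketch, assuming conflicts between dimensions are known up front as pairs:

```python
def batch_dimensions(dimensions: list[str], conflicts: set[frozenset]) -> list[list[str]]:
    """Group dimensions into batches where no pair constrains the other.

    conflicts: pairs of dimension names where one choice constrains the
    other, e.g. {frozenset({"storage", "api"})}. Greedy first-fit pass.
    """
    batches: list[list[str]] = []
    for dim in dimensions:
        for batch in batches:
            if all(frozenset({dim, other}) not in conflicts for other in batch):
                batch.append(dim)  # fits alongside everything already here
                break
        else:
            batches.append([dim])  # conflicts with every batch: start a new one
    return batches
```

Dimensions that conflict land in separate batches, asked as separate AskUserQuestion calls in order.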
If "Explore further," regenerate choices with deeper analysis. If user reframes the dimension itself, restart expert selection. User can add new dimensions mid-loop if experts surface something unconsidered.
After all dimensions are resolved:
Cross-dimension tension check — Two good individual choices can conflict when combined.
Pre-mortem — Distinct from per-dimension failure analysis. Imagine the shaped solution shipped and failed six months later. Why? This catches adoption, timing, political, and integration failures that per-dimension analysis misses. Consider at minimum: will users discover this feature? Can it be used without a mouse? What happens during concurrent operations (e.g., user interacts while system is streaming)? What's the maintenance burden when adjacent code changes?
Second-order effects — For each dimension choice, explicitly ask: "what does this prevent us from doing in 6-12 months?" List the closed-off futures as visible text. This analysis is not optional — every choice closes doors. The question is whether those doors matter.
Present tensions, pre-mortem risks, and second-order effects as one AskUserQuestion with detail panels showing each scenario.
If after genuine analysis there are truly no tensions, no risks, and no closed-off futures worth surfacing — state that briefly and proceed to Step 5.
Run the exit audit internally — do not output it to the user.
Only surface failures. If all checks pass, proceed silently.
Compress the shape to one sentence explaining why it exists — the semantic checksum. If you can't, the shape isn't clear yet. Revise before emitting.
Produce the shape document.
Shape surfaces expert-informed choices; user owns architecture. Expert options are advice, not prescriptions. When uncertain about risk, bias toward more human involvement. When interpretations diverge, present options — never pick for the user.