Use BEFORE writing any code for new features, integrations, or system changes. Triggers when the user describes something to build, asks "how should I approach X", is unsure between approaches, or mentions adding/creating/building functionality. Also use when requirements are vague, architecture is unclear, or the task involves design decisions (e.g., choosing libraries, data models, API patterns). Do NOT use for bug fixes, refactoring where the target structure is clear (use cape:refactor), executing existing plans, or tasks where the implementation path is already clear. This skill researches the codebase, asks Socratic questions, generates competing designs under different constraints, and produces a design summary for `cape:write-plan` to formalize into a br epic.
`npx claudepluginhub sqve/cape --plugin cape`

This skill uses the workspace's default tool permissions.
<skill_overview> Turn rough ideas into validated designs ready for cape:write-plan to formalize
into a br epic. Research the codebase, ask Socratic questions, generate competing designs under
different constraints, and produce a self-contained design summary.
Core contract: no design gets locked without research, constraint-driven design exploration, and iterative user discussion at every stage. </skill_overview>
<rigidity_level> LOW FREEDOM — Adapt questioning style and research depth to context, but always: research before proposing, checkpoint after each step, never advance without user input. </rigidity_level>
CONVERSATIONAL — Brainstorm is a discussion, not a plan artifact. Never enter plan mode. If
plan mode is active when brainstorm is invoked, exit it immediately and proceed conversationally.
The design summary lives in conversation context; write-plan formalizes it into a br epic later.
<when_to_use>
Don't use for:
- Bug fixes
- Refactoring where the target structure is clear (use cape:refactor)
- Executing existing plans
- Tasks where the implementation path is already clear
</when_to_use>
<critical_rules>
Offer cape:challenge after approach selection; don't load it
automatically.</critical_rules>
<the_process>
Every step ends with a CHECKPOINT — present findings and wait for user input. Never advance to the next step until the user responds. The user may discuss, redirect, ask follow-ups, or say "continue" to proceed. This is a conversation, not a pipeline.
Check for ready work first:
Run `br ready` before doing anything else. If it returns tasks, load cape:execute-plan with the Skill tool and stop. Skip this step only if `br ready` returns no tasks.
Gather context:
- cape git context for recent commits and codebase state; check existing docs and structure
- cape:codebase-investigator to find existing patterns relevant to the idea
- cape:internet-researcher if the idea involves external APIs, libraries, or unfamiliar tech

Answer your own questions first:
Before asking the user anything, check if the codebase or research can answer the question. Explore code, read docs, check patterns. Only ask the user questions that require human judgment (priorities, preferences, business constraints). If you can answer it by reading code, read code.
Ask clarifying questions:
Use AskUserQuestion for structured choices (token storage, auth strategy, data model decisions). Use conversational follow-ups for open exploration (what problem are you solving, who are the users, what does success look like).
Guidelines:
Record decisions as you go:
Maintain a running "Key Decisions" table throughout the conversation:
| Question | Answer | Implication |
|---|---|---|
| [What you asked] | [What user said] | [How it shapes requirements/anti-patterns] |
This table feeds directly into the design summary.
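The running table can be kept as plain structured data and rendered on demand. A minimal sketch in Python; the field names and the sample row are illustrative, not part of the skill's contract:

```python
# Minimal sketch: keep Key Decisions as structured data and render the
# markdown table used in the design summary. Sample row is illustrative.

decisions = [
    {
        "question": "Where are auth tokens stored?",
        "answer": "httpOnly cookies",
        "implication": "NO localStorage tokens (reason: XSS token theft)",
    },
]

def render_decisions(rows):
    lines = ["| Question | Answer | Implication |", "|---|---|---|"]
    for row in rows:
        lines.append(
            f"| {row['question']} | {row['answer']} | {row['implication']} |"
        )
    return "\n".join(lines)

print(render_decisions(decisions))
```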
Present what you found — do not propose solutions yet:
## Research summary
**Codebase:** [existing patterns, relevant files, constraints discovered]
**External:** [API docs, library capabilities — if researched]
**Dead ends:** [what you explored, what you found, why it's not relevant]
**Key decisions so far:** [table of user answers from clarifying questions]
STOP here. Ask: "Anything to discuss or redirect before I propose approaches?"
The user may discuss the findings, redirect the research, ask follow-ups, or say "continue" to proceed.
Do NOT proceed to Step 2 until the user responds.
Generate competing designs:
Assess whether the idea warrants divergent exploration or has an obvious path:
Interface mode — if the idea is primarily an interface design, load cape:design-an-interface with the Skill tool instead of running divergent mode inline. Its comparison and recommendation feed back as the chosen approach and approaches considered.
Divergent mode — dispatch 3 parallel sub-agents:
Each agent receives the same research context (codebase findings, external docs, Key Decisions so far) and designs under a different constraint:
| Agent | Constraint | Tendency |
|---|---|---|
| 1 | Minimize the interface — simplest possible | Fewest moving parts, smallest API surface |
| 2 | Maximize flexibility — support many use cases | Extension points, configuration, loose coupling |
| 3 | Optimize for the most common case | Fast path for the 80% case, pragmatic trade-offs |
If agents aren't available, simulate the constraints yourself: design each approach sequentially under the stated constraint.
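The fan-out shape above can be sketched as follows. `build_prompts` and the prompt wording are hypothetical; they stand in for whatever sub-agent mechanism is available, and only the same-context/different-constraint structure is the point:

```python
# Illustrative sketch of divergent mode: one shared research context, three
# constraint-specific prompts. The dispatch mechanism itself is whatever
# sub-agent facility is available; only the fan-out shape is shown here.

CONSTRAINTS = {
    "minimal": "Minimize the interface: design the simplest possible version.",
    "flexible": "Maximize flexibility: support many use cases.",
    "pragmatic": "Optimize for the most common case.",
}

def build_prompts(research_context: str) -> list[str]:
    # Every agent sees the same research context; only the constraint differs.
    return [
        f"{research_context}\n\nDesign one approach under this constraint: {text}"
        for text in CONSTRAINTS.values()
    ]
```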
Inline mode — propose directly:
For simple ideas with an obvious path, skip agents and propose 1-2 approaches inline with pros/cons.
Present approaches side by side — do not pick one yet:
Three designs explored under different constraints:
1. **[Minimal]** (simplest interface)
- Approach: [description]
- Pros / Cons / Trade-off
2. **[Flexible]** (maximum flexibility)
- Approach: [description]
- Pros / Cons / Trade-off
3. **[Pragmatic]** (common case optimized)
- Approach: [description]
- Pros / Cons / Trade-off
I recommend option [N] because [specific reason, especially codebase consistency].
The other designs revealed [insight the recommended approach should absorb].
STOP here. The comparison is the discussion artifact. Let the user react.
The user may pick one approach, combine elements of several into a hybrid, push back on the recommendation, or ask follow-ups.
Iterate until the user signals satisfaction with a direction. Only then proceed to Step 3.
After the approach is selected, offer challenge:
"Want me to load cape:challenge to stress-test this design for hidden assumptions, or skip
straight to the design summary?"
If the user wants challenge, load cape:challenge with the Skill tool to surface hidden assumptions. If the user skips, proceed directly to Step 4.
Compose the design summary internally (do not present yet). This summary must be self-contained —
cape:write-plan should be able to create the epic without re-asking brainstorm's questions.
## Design summary
**Problem:** [1-2 sentences]
**Chosen approach:** [Name + rationale]
**Requirements:** [Bullet list derived from decisions]
**Anti-patterns:** [Bullet list with "NO X (reason: Y)" format]
**Architecture:** [Components, data flow, integration points]
**Scope:** In: [inclusions] / Out: [exclusions]
**Open questions:** [Uncertainties for implementation]
### Key decisions
| Question | Answer | Implication |
|----------|--------|-------------|
### Research findings
**Codebase:** [file paths, patterns]
**External:** [APIs, libraries, docs]
### Approaches considered
1. **[Chosen]** (selected) — [why]
2. **[Rejected]** — [why rejected, DO NOT REVISIT UNLESS]
### Dead ends
[What explored, what found, why abandoned]
Fact-check before presenting:
Dispatch cape:fact-checker on the composed design summary. Pass all factual claims from the
Requirements, Architecture, and Research findings sections. The fact-checker verifies each claim
against codebase evidence (file:line) and external sources (URL — Tier N).
Present the final design summary only after fact-checking is complete.
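The verdict handling can be sketched as a small filter. `apply_verdicts` and the claim/verdict shapes are hypothetical; the real fact-checker also returns per-claim evidence (file:line or URL), which this sketch ignores:

```python
# Hypothetical sketch of acting on fact-checker verdicts. Only the
# keep / correct / drop decision is modeled here.

def apply_verdicts(claims, verdicts):
    kept, needs_correction = [], []
    for claim in claims:
        verdict = verdicts.get(claim, "Unverifiable")
        if verdict == "Confirmed":
            kept.append(claim)  # survives into the presented summary
        elif verdict == "Partially correct":
            needs_correction.append(claim)  # reword before presenting
        # Refuted and Unverifiable claims are dropped entirely
    return kept, needs_correction
```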
Stop and hand off:
Design summary complete (fact-checked). Next step: formalize into a br epic with `cape:write-plan`.
</the_process>
<agent_references>
Each sub-agent receives the same research context and designs under its assigned constraint:
If sub-agents aren't available, simulate constraints sequentially.
cape:fact-checker protocol (Step 4):
Dispatch after composing the design summary, before presenting to the user. Pass all factual claims from the Requirements, Architecture, and Research findings sections. Expect back per-claim verdicts (Confirmed / Refuted / Partially correct / Unverifiable) with evidence. Remove or correct claims based on the verdicts.
</agent_references>
<skill_references>
cape:design-an-interface with the Skill tool (interface mode): its recommendation feeds into the design summary as the chosen approach.
cape:challenge with the Skill tool (opt-in): challenge walks each assumption interactively — confirmed assumptions become requirements or anti-patterns in the design summary. Rejected ones trigger scope reductions.
</skill_references>
Brainstorm rushes through without stopping for discussion:
User: "Add template support to our Tiptap editor. We have a POC."
Wrong: Research POC + editor → ask intake questions → propose full implementation plan → present design summary. The user never gets to discuss research findings or debate approaches — only answer data-gathering questions. By the time they see the design, all decisions are made.
Right: Research POC + editor → CHECKPOINT (present findings, discuss) → propose approaches → CHECKPOINT (user reacts, iterates) → design summary → hand off to cape:write-plan as next step.
User: "Add OAuth authentication"
Wrong: "I'll implement OAuth with Auth0..." — proposes approach without checking that passport.js already exists in the codebase. Creates inconsistent architecture.
Right: Research first → discover passport.js already exists in the codebase → CHECKPOINT → propose approaches consistent with the existing passport.js setup.
User: "Build a plugin system for our CLI tool"
Wrong: Propose a single approach without exploring constraints.
Right: Research → CHECKPOINT (3 existing plugins found) → user confirms scope → divergent mode (3 agents) → CHECKPOINT (compare, recommend pragmatic) → user picks hybrid → challenge surfaces plugin discovery assumption → design summary with anti-pattern "NO DI framework (reason: 3 plugins don't justify it)".
Anti-patterns prevent implementation shortcuts:
Wrong: Epic says "Tokens stored securely" with no anti-patterns. During implementation, hits complexity → stores tokens in localStorage. No guardrail prevented it.
Right: Design summary says "Tokens stored in httpOnly cookies" with anti-pattern "NO
localStorage tokens (reason: httpOnly prevents XSS token theft)". When write-plan formalizes this
into an epic, the anti-pattern is preserved and blocks shortcuts during implementation.
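The "NO X (reason: Y)" format is mechanical enough to lint. A hypothetical check; the regex and function name are illustrative, not part of the skill:

```python
import re

# Hypothetical lint: anti-pattern bullets in the design summary should follow
# the "NO X (reason: Y)" format so write-plan can preserve them verbatim.
ANTI_PATTERN = re.compile(r"^NO .+ \(reason: .+\)$")

def is_valid_anti_pattern(line: str) -> bool:
    return bool(ANTI_PATTERN.match(line.strip()))
```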
<key_principles>