From cascade-mcp
Produces a scope analysis that categorizes UI features (drawn from Figma frames, epic context, Confluence/Google Docs, and comments) by user workflow and scope (☐/✅/⏬/❌/❓/💬). Drives question generation and story writing.
`npx claudepluginhub bitovi/cascade-mcp --plugin cascade-mcp`

This skill uses the workspace's default tool permissions.
Produce a **Scope Analysis** — the central artifact that drives all downstream work. This takes per-frame analyses, epic context, reference documentation, and Figma comments, then categorizes every observed feature by scope and groups them by user workflow.
Generates frame-specific clarifying questions about ambiguous UI behaviors, drawing on Jira epics and linked Figma designs, Confluence pages, and Google Docs. Questions are organized by Figma frame for feature reviews.
Transforms business analyses into epics, features, user stories, and tech-agnostic success criteria. Creates handoff documents for architects.
This is the most critical step in the pipeline. Questions, shell stories, and story descriptions all consume the scope analysis as their primary input.
This is a sub-skill — called by parent skills after all content has been loaded and all Figma frames have been analyzed. Every parent skill (generate-questions, write-jira-story) needs the scope analysis before proceeding.
Inputs produced by earlier steps (see `to-load.md` and `load-linked-resource-content`):

- `.temp/cascade/figma/{fileKey}/frames/*/analysis.md`
- `.temp/cascade/context/*-summary.md`

Read these files:
Epic context (PRIMARY source of truth for scope decisions):
- `.temp/cascade/context/jira-{epicKey}.md` — the epic/story description
- If the description already contains a `## Scope Analysis` section, extract it as the "previous scope analysis" for regeneration

Figma frame analyses:
- `analysis.md` files from `.temp/cascade/figma/{fileKey}/frames/*/`

Reference documentation:
- `*-summary.md` files from `.temp/cascade/context/` (Confluence, Google Docs summaries)

Figma annotations (per-frame):
- `.temp/cascade/figma/{fileKey}/frames/*/context.md` — comments, notes, and connections per frame

This step is conditional — perform it only if the agent has access to a local codebase (e.g. workspace file access in VS Code Copilot, Claude Code, or similar). If no codebase is accessible, skip to step 3.
When a codebase is available, use it as the primary source of truth for what is already implemented. This replaces the need for the epic description to enumerate existing vs. new features — the code tells you directly.
For each feature area identified from the Figma frame analyses:
- component names (e.g. `VoteButton`, thumbs-up, upvote)
- API endpoints and handlers (e.g. `/comments/:id/vote`, `voteComment`)
- data fields (e.g. `upvoteCount`, `voteDirection`)

Use semantic search, grep, or file search as appropriate. Cast a wide net — look for partial matches and synonyms.
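As a minimal sketch of this wide-net search (the feature area, synonym lists, and helper name are all illustrative, not part of the skill):

```python
import re
from pathlib import Path

# Synonym sets per feature area; a feature counts as "found" if any
# synonym appears in any source file. These names are illustrative.
FEATURE_TERMS = {
    "comment voting": ["VoteButton", "upvote", "voteComment", "upvoteCount"],
}

def check_codebase(root: str, terms=FEATURE_TERMS) -> dict:
    """Return {feature: [(file, term), ...]} for every synonym hit under root."""
    hits = {feature: [] for feature in terms}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for feature, synonyms in terms.items():
            for term in synonyms:
                # Case-insensitive substring match casts the widest net.
                if re.search(re.escape(term), text, re.IGNORECASE):
                    hits[feature].append((str(path), term))
    return hits
```

Features with no hits at all go in the "Not found" bucket; features with hits in only some synonym groups are candidates for "partial".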
After searching, write a brief summary to .temp/cascade/codebase-check.md:
```markdown
## Codebase Check Summary
- Searched: {what you searched for}
- Found implemented: {list of features with file references}
- Found partial: {list with notes}
- Not found: {list}
```
This summary is used to inform ✅ categorization and Developer Notes in the final story.
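A hedged sketch of rendering that summary file (the function name and argument names are illustrative; the output follows the format above):

```python
from pathlib import Path

def write_codebase_summary(searched, implemented, partial, missing,
                           out=".temp/cascade/codebase-check.md"):
    """Render the codebase-check summary in the format shown above."""
    lines = [
        "## Codebase Check Summary",
        f"- Searched: {', '.join(searched) or 'none'}",
        f"- Found implemented: {', '.join(implemented) or 'none'}",
        f"- Found partial: {', '.join(partial) or 'none'}",
        f"- Not found: {', '.join(missing) or 'none'}",
    ]
    Path(out).parent.mkdir(parents=True, exist_ok=True)
    Path(out).write_text("\n".join(lines) + "\n", encoding="utf-8")
```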
Every feature listed MUST reference actual UI elements or functionality explicitly described in frame analyses. Do NOT infer, assume, or speculate about features not shown in the screens. If a UI element is visible but its purpose/behavior is unclear, list it as ❓.
- 💬 {question} → {answer found in context}

Epic context ALWAYS WINS for scope decisions:
Before marking any question as ❓, check ALL context sources:
If ANY source provides a clear answer → mark 💬, NOT ❓.
When a previous scope analysis exists (the epic already has a `## Scope Analysis` section):
- If any source now answers a ❓ (the epic description, a frame's `context.md`, or new Confluence/Google Doc content) → flip to 💬 with the answer

Save to `.temp/cascade/scope-analysis.md`:
```
.temp/cascade/
├── scope-analysis.md            ← this file (THE key artifact)
├── context/                     ← content summaries
│   ├── jira-PROJ-123.md
│   ├── jira-PROJ-123-summary.md
│   └── confluence-spec-summary.md
└── figma/
    └── {fileKey}/
        └── frames/
            └── */
                ├── context.md
                └── analysis.md  ← frame analyses
```
Count the ❓ markers in the scope analysis. Report:
The parent skill decides what to do with this recommendation.
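As a sketch, the ❓ count can be computed directly from the saved file (the helper name is illustrative):

```python
from pathlib import Path

def count_open_questions(path: str = ".temp/cascade/scope-analysis.md") -> int:
    """Count unanswered-question markers (❓) in the saved scope analysis."""
    return Path(path).read_text(encoding="utf-8").count("❓")
```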
```markdown
# Scope Analysis: {Feature/Epic Name}

## Feature Overview
{high-level description synthesized from all sources}

## User Journeys
### Journey 1: {Name}
1. {step referencing Frame Name}
2. {step}

## Feature Inventory
### {Workflow Area 1}
Screens: [Screen Name](figma-url), [Another Screen](figma-url)
- ☐ **{Feature}**: {description}
- ☐ **{Complex Feature}**: {detailed description with validation, error handling, etc.}
- ✅ **{Existing Feature}**: {brief description}
- ❓ **{Open Question}**: {what needs clarification, with enough context}
- 💬 **{Answered Question}**: {question} → {answer from context source}

### {Workflow Area 2}
Screens: [Screen Name](figma-url)
- ☐ **{Feature}**: {description}
- ⏬ **{Low Priority Feature}**: {description} (delay until end per epic)
- ❌ **{Excluded Feature}**: {brief description} (future epic)

### Remaining Questions
- ❓ **{Cross-cutting question}**: {description}

## Cross-Screen Patterns
- {shared components, consistent behaviors, design system usage}

## Technical Scope
- {APIs, data models, architecture implications}

## Implementation Notes
- {dependencies, constraints, decisions from documentation}
```