From cascade-mcp
Fetches raw content from Jira issues, Confluence pages, Google Docs, and Sheets via MCP tool. Saves markdown files locally and appends discovered links to to-load.md for iterative loading.
Install:

```shell
npx claudepluginhub bitovi/cascade-mcp --plugin cascade-mcp
```

This skill uses the workspace's default tool permissions.
Fetch raw content for a set of URLs using Cascade MCP tools. Save content locally and discover new links for iterative loading.
This is a sub-skill — called by parent skills (generate-questions, write-jira-story), not directly by users. Use when the parent skill needs to gather raw content from one or more URLs before analysis.
URLs to load come from `extract-linked-resources` output or from `to-load.md`. Check whether `.temp/cascade/context/to-load.md` exists. If it does, read it to find URLs marked `[ ]` (not yet loaded).
If the file doesn't exist, create it from the URLs provided by the parent skill:
```markdown
# Links to Load

## Unloaded
- [ ] https://mycompany.atlassian.net/browse/PROJ-123
- [ ] https://docs.google.com/document/d/abc123/edit

## Loaded
(none yet)
```
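The "find URLs marked `[ ]`" step can be sketched as a small parser. This is an illustrative helper, not part of cascade-mcp; `getUnloadedUrls` is a hypothetical name:

```typescript
// Hypothetical sketch: scan a to-load.md manifest and return the URLs
// of entries still marked "- [ ]" (not yet loaded). Entries marked
// "- [x]" (loaded) or plain "- " (Figma) are skipped.
function getUnloadedUrls(manifest: string): string[] {
  return manifest
    .split("\n")
    .map((line) => line.match(/^- \[ \] (\S+)/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => m[1]);
}
```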
For each `[ ]` URL, call the MCP tool `extract-linked-resources` with the URL:

```
extract-linked-resources({ url: "https://mycompany.atlassian.net/browse/PROJ-123" })
```
This returns markdown with YAML frontmatter containing:

- `discoveredLinks` (categorized: figma, confluence, jira, googleDocs)
- a `relationship` on each link (parent, blocks, relates-to, etc.)
- `hasMoreComments` / `commentsStartAt` for comment pagination

Figma URLs: if a Figma URL is passed, the tool returns a message to use `figma-batch-load` instead. Figma URLs should NOT be loaded by this skill.
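Reading that response starts with isolating the frontmatter block. A minimal sketch, assuming the tool delimits frontmatter with `---` lines as is conventional for markdown (`extractFrontmatter` is a hypothetical helper, not a cascade-mcp API):

```typescript
// Hypothetical sketch: return the YAML frontmatter block from a tool
// response so fields like discoveredLinks can be inspected, or null
// when the document has no frontmatter. Assumes ----delimited
// frontmatter at the very start of the document.
function extractFrontmatter(markdown: string): string | null {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  return match ? match[1] : null;
}
```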
Save the returned markdown+frontmatter directly as a file. The response is ready to write as-is:
```
.temp/cascade/context/
├── to-load.md                   ← loading manifest
├── jira-PROJ-123.md             ← Jira issue (saved directly from tool response)
├── jira-PROJ-124.md             ← linked Jira issue
├── confluence-page-title.md     ← Confluence page
├── gdoc-document-title.md       ← Google Doc
└── gsheet-spreadsheet-title.md  ← Google Spreadsheet
```
File naming: use the source type prefix plus a slugified identifier:

- `jira-{issueKey}.md`
- `confluence-{slugified-page-title}.md`
- `gdoc-{slugified-doc-title}.md`
- `gsheet-{slugified-title}.md`

Read the `discoveredLinks` YAML frontmatter from each saved file and add any new URLs that aren't already in `to-load.md`.
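The naming scheme above can be sketched as follows. The exact slug rules cascade-mcp applies are not specified here, so this is an assumed lowercase/hyphen convention; `slugify` and `contextFileName` are hypothetical helper names:

```typescript
// Hypothetical slugify: lowercase, collapse non-alphanumeric runs to
// hyphens, trim leading/trailing hyphens. Illustrative only.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Jira keys are used verbatim (jira-PROJ-123.md); titles are slugified.
function contextFileName(
  sourceType: "jira" | "confluence" | "gdoc" | "gsheet",
  id: string
): string {
  return `${sourceType}-${sourceType === "jira" ? id : slugify(id)}.md`;
}
```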
Mark loaded URLs as [x] and append any newly discovered URLs as [ ]:
```markdown
# Links to Load

## Unloaded
- [ ] https://mycompany.atlassian.net/wiki/spaces/TEAM/pages/12345
- [ ] https://www.figma.com/design/abc123/Designs

## Loaded
- [x] https://mycompany.atlassian.net/browse/PROJ-123
- [x] https://docs.google.com/document/d/abc123/edit

## Figma (handled separately)
- https://www.figma.com/design/abc123/Designs?node-id=0-1
```
Important: Figma URLs should be listed in a separate "Figma" section — they are NOT loaded by this skill. The parent skill handles Figma loading via figma-batch-load.
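The append step (skip URLs already tracked, route Figma links to their own section, everything else to Unloaded) can be sketched like this. `addUrl` is a hypothetical helper with simplified section handling, not a cascade-mcp API:

```typescript
// Hypothetical sketch: add one discovered URL to the manifest text.
// - URLs already present anywhere (loaded or unloaded) are skipped.
// - Figma URLs go under "## Figma (handled separately)" as plain items.
// - All other URLs go under "## Unloaded" as "- [ ]" items.
function addUrl(manifest: string, url: string): string {
  if (manifest.includes(url)) return manifest; // already tracked — don't re-add
  const isFigma = url.includes("figma.com");
  const heading = isFigma ? "## Figma (handled separately)" : "## Unloaded";
  const entry = isFigma ? `- ${url}` : `- [ ] ${url}`;
  if (!manifest.includes(heading)) {
    // Section doesn't exist yet — append it at the end.
    return `${manifest}\n${heading}\n${entry}`;
  }
  // Insert the entry directly under the section heading.
  return manifest.replace(heading, `${heading}\n${entry}`);
}
```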
Report back to the parent skill with the files saved, any newly discovered links, and the updated `to-load.md`. The parent skill decides whether to call `load-linked-resource-content` again (if new links were discovered) or proceed to analysis.
What this skill does NOT do:

- Load Figma URLs — that's `figma-batch-load` + curl/unzip, which the parent skill handles.
- Summarize or analyze content — that's the `summarize-document-content` sub-skill's job.
- Re-add a URL to `to-load.md` if it's already there (loaded or unloaded).
- Halt on a failed fetch — mark the URL as `[!]` in the manifest and continue with other URLs.