From grainulator
Fetches URLs and extracts main content (title, description, paragraphs) with 80-99% HTML size reduction via smart extraction. Use for ad-hoc web research and quick page previews.
npx claudepluginhub grainulation/grainulator --plugin grainulator

This skill uses the workspace's default tool permissions.
Pulls a URL's main content (title, description, body paragraphs) without the HTML boilerplate. Delegates to silo's `smart-fetch` MCP tool, which strips scripts/styles/nav/footer and targets `<main>` or `<article>` regions. Typical reduction: 80-99% vs raw HTML.
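To make the "strip scripts/styles/nav/footer and target `<main>` or `<article>`" behavior concrete, here is a minimal stdlib-only sketch of that kind of extraction. This is not silo's actual `smart-fetch` implementation; the tag sets and the `MainTextExtractor` class are illustrative assumptions.

```python
# Illustrative sketch: keep text inside <main>/<article>, drop boilerplate tags.
# Not silo's real smart-fetch logic -- just the general idea it describes.
from html.parser import HTMLParser

SKIP = {"script", "style", "nav", "footer", "header", "aside"}  # boilerplate
MAIN = {"main", "article"}  # content regions to target

class MainTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # >0 while inside a boilerplate element
        self.main_depth = 0   # >0 while inside <main> or <article>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.skip_depth += 1
        elif tag in MAIN:
            self.main_depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag in MAIN and self.main_depth:
            self.main_depth -= 1

    def handle_data(self, data):
        if self.main_depth and not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def extract_main_text(html: str) -> str:
    parser = MainTextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

html = """<html><head><style>body{margin:0}</style></head><body>
<nav>Home | About</nav>
<main><h1>Understanding Smart Fetch</h1><p>Body paragraph.</p></main>
<footer>Site footer</footer></body></html>"""

print(extract_main_text(html))
```

On real pages, dropping everything outside the content region is what produces the large size reductions the skill reports.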
$ARGUMENTS
Expected: /fetch <url> [--mode auto|concise|full|meta-only] [--no-cache] [--privacy]
Flags:
- `--mode auto` (default): tries concise extraction, falls back to full if quality degrades
- `--mode concise`: caps body at ~2KB
- `--mode full`: returns all extracted paragraphs
- `--mode meta-only`: only title + description (smallest)
- `--no-cache`: skip local cache read, force network fetch
- `--privacy`: don't write to cache (use for sensitive URLs)

When to use `/fetch <url>`: it beats opening a browser. Prefer alternatives when they fit better:
- `/witness` — it creates a claim
- `/pull` — a structured API is better than HTML scraping
- `/pull deepwiki` — it already has a cleaner path

Failure case: `unsupported-content-type`.

Call `mcp__silo__silo_smart-fetch` with the URL and parsed flags.
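Before calling the MCP tool, the flags above have to be parsed out of the argument string. A hypothetical sketch of that parsing step, using `argparse` (the flag names come from this doc; the parser itself is illustrative, not the skill's actual code):

```python
# Hypothetical parser for /fetch arguments; flag names match the doc above.
import argparse

def parse_fetch_args(argv):
    p = argparse.ArgumentParser(prog="/fetch")
    p.add_argument("url")
    p.add_argument("--mode",
                   choices=["auto", "concise", "full", "meta-only"],
                   default="auto")
    p.add_argument("--no-cache", action="store_true")
    p.add_argument("--privacy", action="store_true")
    return p.parse_args(argv)

args = parse_fetch_args(["https://example.com/article", "--mode", "concise"])
print(args.url, args.mode, args.no_cache, args.privacy)
```

The parsed values then map directly onto the tool call's parameters.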
If the response quality is "failed" (empty body, SPA, link list, HTTP error), tell the user:
- Retry with `--mode full`, or use raw WebFetch as a fallback.

Display the extracted content in a readable format:
Suggest next actions based on what the user seems to be doing:
- `/witness <claim_id> <url> --smart`
- `/research "<topic extracted from content>"`
- `/fetch <url> --mode full` or open in a browser

Example output:

URL: https://example.com/article
Quality: high | Cached: no (first fetch)
Title: "Understanding Smart Fetch"
Size: 142.1 KB -> 2.3 KB (98% reduction)
Mode used: concise
Elapsed: 340ms
--- Content ---
[first 2KB of extracted main content]
Next steps:
/witness r003 <url> --smart -- corroborate a claim with this source
/fetch <url> --mode full -- get the full extracted body
silo cache stats -- see what's cached locally
| Rationalization | Reality |
|---|---|
| "Smart-fetch lost content" | Check the quality field. If "failed", retry with --mode full. If "degraded", the site may be a SPA — content depends on JS execution. |
| "I should always use full mode" | Full is fine for small pages but wasteful on long docs. auto handles the fallback for you. |
| "Cached content might be stale" | Default TTL is 7 days. Use --no-cache for latest, or silo cache purge <domain> to drop specific entries. |
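The 7-day TTL rule in the last row can be sketched as a simple freshness check. This assumes only what the table states (default TTL of 7 days); silo's actual cache layout and eviction logic are not shown here.

```python
# Illustrative freshness check for the documented 7-day default TTL.
# Not silo's actual cache code.
import time

TTL_SECONDS = 7 * 24 * 3600  # default TTL: 7 days

def is_fresh(cached_at, now=None):
    """True if a cache entry written at `cached_at` is still within the TTL."""
    if now is None:
        now = time.time()
    return (now - cached_at) < TTL_SECONDS

now = time.time()
print(is_fresh(now - 3 * 24 * 3600, now))  # 3 days old -> True
print(is_fresh(now - 8 * 24 * 3600, now))  # 8 days old -> False
```

An entry past the TTL would be refetched from the network, which is why `--no-cache` is only needed when you want content newer than the TTL guarantees.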