From tw93-claude-health
Fetches any URL or PDF as clean Markdown, handling paywalls, JS-heavy pages, Twitter/X, and Chinese platforms via proxy cascade. Saves to ~/Downloads; prefer over WebFetch.
Install with `npx claudepluginhub tw93/waza`. This skill uses the workspace's default tool permissions.
Fetches any URL as clean Markdown via proxy cascade (r.jina.ai, defuddle.md, agent-fetch). Handles WeChat articles with Playwright and Feishu/Lark docs with API credentials.
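As a rough sketch of that cascade (assuming defuddle.md accepts the same prefix-style URLs as r.jina.ai, and that agent-fetch is a local CLI taking the target URL as its only argument; neither interface is confirmed here):

```bash
#!/usr/bin/env bash
# Hedged sketch of the proxy cascade, not the skill's actual implementation.
# Try each reader in order and keep the first non-empty Markdown result.
# curl honors https_proxy / HTTPS_PROXY from the environment.
set -u
url="$1"

try() {
  # Run one fetcher; succeed only if it produced non-empty output.
  local out
  out="$("$@" 2>/dev/null)" || return 1
  [ -n "$out" ] && printf '%s\n' "$out"
}

try curl -sfL "https://r.jina.ai/${url}" ||
try curl -sfL "https://defuddle.md/${url}" ||
try agent-fetch "${url}" ||
{ echo "All fetch methods failed for ${url}" >&2; exit 1; }
```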
Extracts clean Markdown from any URL using the ezycopy CLI. Handles JS-rendered pages with headless Chrome, retries on failure, and auto-installs the tool if needed.
Fetches any URL and converts to markdown using baoyu-fetch CLI (Chrome CDP with adapters for X/Twitter, YouTube transcripts, Hacker News, Defuddle). Handles login/CAPTCHA waits for saving webpages.
Prefix your first line with 🥷 inline, not as its own paragraph.
Convert any URL or local PDF to clean Markdown and save it. No analysis, no summary, no discussion of the content unless explicitly asked.
| Input | Method |
|---|---|
| feishu.cn, larksuite.com | Feishu API script |
| mp.weixin.qq.com | Proxy cascade first; built-in WeChat article script only if the proxies fail |
| .pdf URL or local PDF path | PDF extraction |
| GitHub URLs (github.com, raw.githubusercontent.com) | Prefer raw content or gh first; use the proxy cascade only as a fallback |
| x.com, twitter.com | Proxy cascade (r.jina.ai keeps image URLs). Do not try WebFetch; it 402s |
| Everything else | Proxy cascade |
After routing, load references/read-methods.md and run the commands for the chosen method.
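A hedged sketch of that routing step as a shell case statement; the method labels are illustrative stand-ins for whatever references/read-methods.md actually defines:

```bash
#!/usr/bin/env bash
# Sketch only: map a URL or local path to a method label matching the
# routing table above. Patterns are loose glob matches, not strict URL
# parsing; the real commands per method come from references/read-methods.md.
target="$1"

case "$target" in
  *feishu.cn*|*larksuite.com*)              method="feishu-api-script" ;;
  *mp.weixin.qq.com*)                       method="proxy-cascade, wechat-script-fallback" ;;
  *.pdf)                                    method="pdf-extraction" ;;
  *github.com*|*raw.githubusercontent.com*) method="raw-content-or-gh" ;;
  *x.com*|*twitter.com*)                    method="proxy-cascade" ;;  # never WebFetch: it 402s
  *)                                        method="proxy-cascade" ;;
esac

echo "route: ${target} -> ${method}"
```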
Report the result in this format:
Title: {title}
Author: {author} (if available)
Source: {platform}
URL: {original url}
Content
{full Markdown, truncated at 200 lines if long}
Save to ~/Downloads/{title}.md with YAML frontmatter by default.
Skip only if user says "just preview" or "don't save". Tell the user the saved path.
If ~/Downloads/{title}.md already exists, append -1, -2, etc., to the filename. Never overwrite an existing file without explicit confirmation.
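One possible shape for the save-and-dedup step; the frontmatter fields and variable names here are chosen for illustration only:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the save step: YAML frontmatter plus the fetched
# Markdown goes to ~/Downloads/{title}.md, appending -1, -2, ... rather
# than overwriting an existing file.
title="$1"; url="$2"; body_file="$3"

dest="$HOME/Downloads/${title}.md"
n=1
while [ -e "$dest" ]; do
  dest="$HOME/Downloads/${title}-${n}.md"
  n=$((n + 1))
done

{
  cat <<EOF
---
title: "${title}"
source: "${url}"
date: $(date +%F)
---

EOF
  cat "$body_file"
} > "$dest"

echo "Saved to ${dest}"
```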
By default, save only the Markdown. Download images only when the user explicitly asks: "download images", "save images", "带图", "下载图片", or similar.
When asked, after saving the Markdown:
1. Collect image URLs: `grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' {md_path} | sort -u`
2. Create ~/Downloads/{title}-images/ and curl each URL in parallel (& + wait). Use the same proxy env vars as the fetch step.
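A sketch combining those two steps, assuming {md_path} and {title} are already known from the save step:

```bash
#!/usr/bin/env bash
# Sketch: collect image URLs from the saved Markdown, then fetch them in
# parallel (& + wait) into ~/Downloads/{title}-images/. curl picks up
# https_proxy / HTTPS_PROXY from the environment, matching the fetch step.
md_path="$1"; title="$2"
img_dir="$HOME/Downloads/${title}-images"
mkdir -p "$img_dir"

urls="$(grep -oE 'https?://[^ )"]+\.(jpg|jpeg|png|webp|gif)' "$md_path" | sort -u)"

for img in $urls; do   # the grep pattern excludes spaces, so word splitting is safe
  curl -sfL "$img" -o "$img_dir/${img##*/}" &
done
wait
echo "Images saved to ${img_dir}"
```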
Failure handling:

| What happened | Rule |
|---|---|
| Fetched a paywalled article and returned a login page as Markdown | Inspect the first 10 lines for paywall signals ("Subscribe", "Sign in", "Continue reading"). If found, stop and warn the user; do not save the login page (see the sketch after this table). |
| r.jina.ai or defuddle.md returned empty for a JS-heavy site | Try the local fallback (agent-fetch or defuddle parse) before giving up. |
| Network failures | Prepend local proxy env vars if available and retry once. |
| Long content | Preview with head -n 200 first; mention truncation when reporting the save. |
| Local fallback tools returned JSON | Extract the Markdown-bearing field. Raw JSON is not a valid final output for /read. |
| All methods failed | Stop and tell the user what was tried and what failed. Suggest opening the URL in a browser or providing an alternative. Do not silently return empty or partial results. |
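For the paywall rule in the first row, the check could be as small as this sketch:

```bash
#!/usr/bin/env bash
# Sketch of the paywall check: scan the first 10 lines of the fetched
# Markdown for login/paywall phrases and refuse to save when any appear.
# The phrase list is the one given in the table; extend as needed.
md_file="$1"

if head -n 10 "$md_file" | grep -qiE 'subscribe|sign in|continue reading'; then
  echo "Looks like a paywall or login page; not saving ${md_file}" >&2
  exit 1
fi
```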
Activate when: "extract content", "reformat this document", or user hands over a document to restyle
Extract and tag:
Output: Clean, tagged content ready to feed into kami or other typesetting tools.