Fetches web content from blocked or rate-limited sites (Reddit, LinkedIn, 403 errors) using Gemini CLI and curl when WebFetch fails. Invoked manually via /gemini-fetch, or auto-triggered on WebFetch errors.
From toolbox: npx claudepluginhub leejuoh/claude-code-zero --plugin toolbox
Fetch web content from URLs that Claude Code's WebFetch can't access, using Gemini CLI as a proxy.
Manual (/gemini-fetch <url> [instruction]): URL comes from $0, optional instruction from $1.
Auto-trigger (WebFetch failure): The skill body is loaded as context. Extract the target URL from the conversation (the URL that WebFetch failed on) and any user instruction, then build the command yourself.
/gemini-fetch https://www.reddit.com/r/ClaudeAI/hot
/gemini-fetch https://www.reddit.com/r/ClaudeAI/hot "list top 10 post titles with scores"
/gemini-fetch https://example.com/blocked-page "extract the main content as markdown"
If manually invoked without a URL, display the examples above and stop.
Build and run the following command:
gemini -y -p "<prompt>" -o text 2>/dev/null
Important: Escape the URL for shell safety. Replace any single quotes in the URL with '\'' before embedding it in the prompt.
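The escaping rule above can be sketched in Bash using pattern substitution (the URL is a made-up example; this is one way to do it, not the only one):

```shell
#!/usr/bin/env bash
# Replace each single quote with '\'' so the URL can be embedded
# inside a single-quoted shell string in the prompt.
url="https://example.com/it's-a-page"
esc=${url//\'/\'\\\'\'}   # bash pattern substitution: ' -> '\''
echo "$esc"               # https://example.com/it'\''s-a-page
```

Embedding the escaped value as '<escaped-url>' in the curl command is then safe even when the original URL contains quotes.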
If an instruction is provided:
Use run_shell_command to run: curl -sL -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36' '<escaped-url>'
Then from the fetched content: <instruction>
If no instruction is provided:
Use run_shell_command to run: curl -sL -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36' '<escaped-url>'
Then convert the fetched HTML to clean, readable markdown. Preserve structure (headings, lists, links, code blocks). Remove navigation, ads, footers, and boilerplate. Return only the main content.
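Putting the pieces together, the no-instruction case might be assembled like this (a sketch: the URL is illustrative, the prompt wording is abbreviated, and the final gemini call is commented out so the sketch runs without the CLI installed):

```shell
#!/usr/bin/env bash
url='https://example.com/blocked-page'
esc=${url//\'/\'\\\'\'}   # single-quote escaping per the rule above

ua='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'
prompt="Use run_shell_command to run: curl -sL -H 'User-Agent: ${ua}' '${esc}'
Then convert the fetched HTML to clean, readable markdown. Return only the main content."

# Final step (requires the Gemini CLI):
# gemini -y -p "$prompt" -o text 2>/dev/null
echo "$prompt"
```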
Set a Bash timeout of 90 seconds (timeout: 90000). Gemini CLI startup and model invocation take time.
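If you also want a limit enforced at the shell level, coreutils timeout can wrap the call; it kills the command after the given number of seconds and exits with status 124. A self-contained demonstration, with sleep standing in for the gemini invocation:

```shell
#!/usr/bin/env bash
# timeout kills the wrapped command after N seconds; exit status 124
# signals the limit was hit (sleep 3 stands in for a slow gemini call).
timeout 1 sleep 3
status=$?
echo "exit status: $status"
# Real usage would be: timeout 90 gemini -y -p "$prompt" -o text 2>/dev/null
```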
Present the fetched content directly to the user. Do not add wrapper commentary — just show what Gemini returned.
If the output contains error messages (e.g., "503", "quota exceeded", "model unavailable"), report the error and suggest /reddit-fetch as an alternative (if available).

Notes:
- -y flag: Without it, Gemini can't use run_shell_command and falls back to google_web_search, which returns search results instead of the actual page content.
- -o text is essential: Without it, the output includes ANSI escape codes and interactive UI elements that pollute the result; 2>/dev/null discards the remaining stderr noise cleanly.
- Single quotes: If the URL contains ', replace each with '\'' before embedding (standard shell escaping).
- Fallback: When a direct fetch is not possible, Gemini may use google_web_search and reconstruct content from cached/mirrored sources. The result is accurate but may not be the exact original HTML; this is expected and usually good enough.