By unclecode
Web crawling, extraction, screenshots, and URL discovery. Free local mode + cloud API. After installing, run /crawl4ai:setup to configure your API key or local mode.
```shell
npx claudepluginhub unclecode/crawl4ai-cloud-sdk --plugin crawl4ai
```
This is Uncle Code.
You know Crawl4AI. We built it together — you, me, and this incredible community. It became the de facto standard for self-hosting — the most-starred web crawler on GitHub. Battle-tested by developers worldwide. I am so proud of us.
We built it because access to data must be democratized. That principle hasn't changed.
Now I'm bringing it to the cloud.
Same principles. Same policies. But now you get the API you love with zero setup:
```python
from crawl4ai_cloud import AsyncWebCrawler

async with AsyncWebCrawler(api_key="sk_live_...") as crawler:
    result = await crawler.run("https://example.com")
    print(result.markdown.raw_markdown)
```
That's it. Your existing Crawl4AI code works with minimal changes. See the SDK Documentation for Python, Node.js, and Go.
Here's the truth: Crawl4AI open source became so solid because you complained loudly — and I love that.
We need to do the same thing here.
This is version one. It will have issues. It will have bugs. The whole server might crash during the first week. But we love it. We're going to continue building every day.
I am a developer, and I love to build.
| Action | Link |
|---|---|
| Report a Bug | Open an Issue |
| Request a Feature | Open an Issue |
| Ask Questions | Join Discord |
| Share Feedback | Join Discord |
Every bug report, every feature request, every piece of feedback — it all makes this better.
Sign up for early access, get your API key (sk_live_...), and start crawling.
```shell
# Python
pip install crawl4ai-cloud-sdk

# Node.js
npm install crawl4ai-cloud

# Go
go get github.com/unclecode/crawl4ai-cloud-sdk/go
```
```python
import asyncio
from crawl4ai_cloud import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(api_key="sk_live_...") as crawler:
        result = await crawler.run("https://example.com")
        print(result.markdown.raw_markdown)

asyncio.run(main())
```
Full SDK documentation: SDK.md
Crawling is just the beginning.
I've always cared about context and refined data, the raw material that makes intelligence truly useful. Whether you're building an app, an agent, or something to help humanity, I hope Crawl4AI Cloud can be a part of it.