Downloads entire websites as local files in markdown, screenshots, or multiple formats per page. Maps the site first, then scrapes pages into organized `.firecrawl/` directories for offline docs or bulk content extraction.
Install the plugin with:

```sh
npx claudepluginhub firecrawl/firecrawl-claude-plugin --plugin firecrawl
```

This skill is limited to using the following tools:
> **Experimental.** Convenience command that combines `map` + `scrape` to save an entire site as local files.
- Crawls websites to extract content from multiple pages via the Tavily CLI. Saves pages as local markdown files with depth/breadth limits, path filtering, and semantic instructions. Use for bulk doc downloads or site content collection.
- Downloads files from websites using openbrowser-ai, preserving browser sessions and cookies; handles PDFs, CSVs, and images, and extracts text from PDFs with pypdf. Use for authenticated file fetches and saves.
- Scrapes webpages to markdown, takes screenshots, extracts structured data, searches the web, and crawls sites such as documentation using the Firecrawl API. Use for fetching live web content or framework docs.
Maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All `scrape` options work with `download`. Always pass `-y` to skip the confirmation prompt.
```sh
# Interactive wizard (picks format, screenshots, and paths for you)
firecrawl download https://docs.example.com

# With screenshots
firecrawl download https://docs.example.com --screenshot --limit 20 -y

# Multiple formats (each saved as its own file per page)
firecrawl download https://docs.example.com --format markdown,links --screenshot --limit 20 -y
# Creates per page: index.md + links.txt + screenshot.png

# Filter to specific sections
firecrawl download https://docs.example.com --include-paths "/features,/sdks"

# Skip translations
firecrawl download https://docs.example.com --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"

# Full combo
firecrawl download https://docs.example.com \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```
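For unattended runs over several sites, the commands above can be wrapped in a small script. A hedged sketch (not part of the CLI itself): the site URLs are placeholders, and a `DRY_RUN` flag echoes each command instead of running it, which is useful for checking flags before starting a long crawl.

```sh
# Sketch: download several doc sites unattended; -y skips the prompt.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
for site in https://docs.example.com https://docs.other.example; do
  cmd="firecrawl download $site --only-main-content --limit 50 -y"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```

Set `DRY_RUN=0` (or drop the check) once the printed commands look right.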
| Option | Description |
|---|---|
| `--limit <n>` | Max pages to download |
| `--search <query>` | Filter URLs by search query |
| `--include-paths <paths>` | Only download matching paths |
| `--exclude-paths <paths>` | Skip matching paths |
| `--allow-subdomains` | Include subdomain pages |
| `-y` | Skip confirmation prompt (always use in automated flows) |

All `scrape` options are also accepted: `-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`.
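Comma-separated flags like `--exclude-paths` are easy to script. A hedged sketch: keep the language list in one place and build the flag value from it; the final `firecrawl` line is commented out so the snippet stands alone.

```sh
# Sketch: build the comma-separated --exclude-paths value from a
# space-separated list, so the language codes live in one place.
langs="/zh /ja /fr /es /pt-BR"
exclude=$(printf '%s' "$langs" | tr ' ' ',')
echo "$exclude"
# firecrawl download https://docs.example.com --exclude-paths "$exclude" -y
```

This keeps a shared download script readable when the set of excluded sections grows.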