Searches the web, scrapes URLs, crawls sites, and interacts with dynamic pages using Firecrawl CLI for clean markdown output.
npx claudepluginhub firecrawl/firecrawl-claude-plugin --plugin firecrawl
Search, scrape, and interact with the web. Returns clean markdown optimized for LLM context windows.
Scrapes webpages to markdown, takes screenshots, extracts structured data, searches the web, and crawls sites (e.g., documentation) using the Firecrawl API. Use for fetching live web content or framework docs.
Run firecrawl --help or firecrawl <command> --help for full option details.
If the task is to integrate Firecrawl into an application, add FIRECRAWL_API_KEY to a project, or choose endpoint usage in product code, use the firecrawl-build skills. They are already installed alongside this CLI skill when you run firecrawl init.
Must be installed and authenticated. Check with firecrawl --status.
🔥 firecrawl cli v1.8.0
✓ Authenticated via FIRECRAWL_API_KEY
Concurrency: 0/100 jobs (parallel scrape limit)
Credits: 500,000 remaining
If not ready, see rules/install.md. For output handling guidelines, see rules/security.md.
Before doing real work, verify the setup with one small request:
mkdir -p .firecrawl
firecrawl scrape "https://firecrawl.dev" -o .firecrawl/install-check.md
firecrawl search "query" --scrape --limit 3
Follow this escalation pattern:
Start with the simplest command that answers the need; if search doesn't surface the page, use map --search to find the right URL, then scrape it.

| Need | Command | When |
|---|---|---|
| Find pages on a topic | search | No specific URL yet |
| Get a page's content | scrape | Have a URL, page is static or JS-rendered |
| Find URLs within a site | map | Need to locate a specific subpage |
| Bulk extract a site section | crawl | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | agent | Need structured data from complex sites |
| Interact with a page | scrape + interact | Content requires clicks, form fills, pagination, or login |
| Download a site to files | download | Save an entire site as local files |
| Parse a local file | parse | File on disk (PDF, DOCX, XLSX, etc.), not a URL |
For detailed command reference, run firecrawl <command> --help.
Scrape vs interact:
- Try scrape first. It handles static pages and JS-rendered SPAs.
- Use scrape + interact when you need to interact with a page: clicking buttons, filling out forms, navigating a complex site, handling infinite scroll, or when scrape fails to grab all the content you need.
- If you don't have a URL yet, use search instead.

Avoid redundant fetches:
- search --scrape already fetches full page content. Don't re-scrape those URLs.
- Check .firecrawl/ for existing data before fetching again.

Unless the user asks for results in context, write them to .firecrawl/ with -o. Add .firecrawl/ to .gitignore. Always quote URLs - the shell interprets ? and & as special characters.
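The output-directory setup can be scripted idempotently; a minimal sketch (the grep guard is an assumption on top of the steps above, not part of the CLI):

```shell
# One-time setup for file-based output.
mkdir -p .firecrawl
# Add .firecrawl/ to .gitignore only if it is not already listed.
touch .gitignore
grep -qxF '.firecrawl/' .gitignore || echo '.firecrawl/' >> .gitignore
```

Re-running it leaves a single .firecrawl/ entry in .gitignore.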
firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md
Naming conventions:
.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md
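A hypothetical slug helper can keep filenames consistent with these conventions (the lowercase/hyphen slug rules here are an assumption, not specified by the CLI):

```shell
# Hypothetical helper: map a free-text query to the
# .firecrawl/search-{query}.json naming convention.
slug() { printf '%s' "$1" | tr '[:upper:] ' '[:lower:]-' | tr -cd 'a-z0-9-'; }

out=".firecrawl/search-$(slug "React Hooks").json"
echo "$out"   # .firecrawl/search-react-hooks.json
```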
Never read entire output files at once. Use grep, head, or incremental reads:
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md
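As a self-contained illustration of that workflow (the generated file is a stand-in for a real scrape):

```shell
mkdir -p .firecrawl
# Stand-in for a large scraped file.
seq 1 500 | sed 's/^/line /' > .firecrawl/file.md

wc -l < .firecrawl/file.md              # check size first
head -3 .firecrawl/file.md              # peek at the top
grep -n '^line 42$' .firecrawl/file.md  # jump straight to a keyword
```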
Single format outputs raw content. Multiple formats (e.g., --format markdown,links) output JSON.
These patterns are useful when working with file-based output (-o flag) for complex tasks:
# Extract URLs from search
jq -r '.data.web[].url' .firecrawl/search.json
# Get titles and URLs
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json
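To try these filters without spending credits, here is a sketch against a handcrafted file; the .data.web[] shape with title/url fields is assumed from the filters above, and the sample entries are made up:

```shell
mkdir -p .firecrawl
# Tiny sample mimicking the assumed search response shape.
cat > .firecrawl/search.json <<'EOF'
{"data":{"web":[
  {"title":"React Hooks Intro","url":"https://react.dev/learn"},
  {"title":"Rules of Hooks","url":"https://react.dev/reference/rules"}
]}}
EOF
jq -r '.data.web[].url' .firecrawl/search.json
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json
```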
Run independent operations in parallel. Check firecrawl --status for concurrency limit:
firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
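If you have more URLs than the concurrency limit allows, batch the fan-out. A generic sketch with placeholder jobs (swap the true call for real firecrawl scrape commands; the batch size is an assumption to be read off firecrawl --status):

```shell
batch=3   # keep below the limit shown by firecrawl --status
i=0
for n in 1 2 3 4 5; do
  # Placeholder job; in real use:
  # firecrawl scrape "<url-$n>" -o ".firecrawl/$n.md" &
  true &
  i=$((i+1))
  # After each full batch, wait for it to drain before launching more.
  [ $((i % batch)) -eq 0 ] && wait
done
wait
```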
For interact, scrape multiple pages and interact with each independently using their scrape IDs.
firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json