npx claudepluginhub firecrawl/cli

Scrape, search, crawl, and map the web with a single command.
Share bugs, ideas, or general feedback.
Command-line interface for Firecrawl. Scrape, crawl, and extract data from any website directly from your terminal.
npm install -g firecrawl-cli
Or set up everything in one command (install CLI globally, authenticate, and add skills across all detected coding editors):
npx -y firecrawl-cli@latest init -y --browser
-y runs setup non-interactively; --browser opens the browser for Firecrawl authentication automatically.
If you are using an AI coding agent like Claude Code, you can also install the skill individually with:
firecrawl setup skills
This installs skills globally across all detected coding editors by default. Use --agent <agent> to scope it to one editor.
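For example, to scope the install to a single editor (the agent identifier below is illustrative; check firecrawl setup skills --help for the accepted values):

```shell
# Install the Firecrawl skill for one editor only
# ("claude-code" is an assumed agent identifier, not confirmed here)
firecrawl setup skills --agent claude-code
```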
To install the Firecrawl MCP server into your editors (Cursor, Claude Code, VS Code, etc.):
firecrawl setup mcp
Or directly via npx:
npx skills add firecrawl/cli --full-depth --global --all
npx add-mcp "npx -y firecrawl-mcp" --name firecrawl
Just run a command - the CLI will prompt you to authenticate if needed:
firecrawl https://example.com
On first run, you'll be prompted to authenticate:
🔥 firecrawl cli
Turn websites into LLM-ready data
Welcome! To get started, authenticate with your Firecrawl account.
1. Login with browser (recommended)
2. Enter API key manually
Tip: You can also set FIRECRAWL_API_KEY environment variable
Enter choice [1/2]:
# Interactive (prompts automatically when needed)
firecrawl
# Browser login
firecrawl login
# Direct API key
firecrawl login --api-key fc-your-api-key
# Environment variable
export FIRECRAWL_API_KEY=fc-your-api-key
# Per-command API key
firecrawl scrape https://example.com --api-key fc-your-api-key
For self-hosted Firecrawl instances or local development, use the --api-url option:
# Use a local Firecrawl instance (no API key required)
firecrawl --api-url http://localhost:3002 scrape https://example.com
# Or set via environment variable
export FIRECRAWL_API_URL=http://localhost:3002
firecrawl scrape https://example.com
# Self-hosted with API key
firecrawl --api-url https://firecrawl.mycompany.com --api-key fc-xxx scrape https://example.com
When using a custom API URL (anything other than https://api.firecrawl.dev), authentication is automatically skipped, allowing you to use local instances without an API key.
scrape - Scrape URLs

Extract content from any webpage. Pass multiple URLs to scrape them concurrently; each result is saved to .firecrawl/ automatically.
# Basic usage (outputs markdown)
firecrawl https://example.com
firecrawl scrape https://example.com
# Get raw HTML
firecrawl https://example.com --html
firecrawl https://example.com -H
# Multiple formats (outputs JSON)
firecrawl https://example.com --format markdown,links,images
# Save to file
firecrawl https://example.com -o output.md
firecrawl https://example.com --format json -o data.json --pretty
# Multiple URLs (scraped concurrently, each saved to .firecrawl/)
firecrawl scrape https://firecrawl.dev https://firecrawl.dev/blog https://docs.firecrawl.dev
| Option | Description |
|---|---|
| -f, --format <formats> | Output format(s), comma-separated |
| -H, --html | Shortcut for --format html |
| -S, --summary | Shortcut for --format summary |
| --only-main-content | Extract only the main content (removes navs, footers, etc.) |
| --wait-for <ms> | Wait time before scraping (for JS-rendered content) |
| --screenshot | Take a screenshot |
| --full-page-screenshot | Take a full-page screenshot |
| --include-tags <tags> | Only include specific HTML tags |
| --exclude-tags <tags> | Exclude specific HTML tags |
| --max-age <milliseconds> | Maximum age of cached content in milliseconds |
| -o, --output <path> | Save output to file |
| --json | Output as JSON |
| --pretty | Pretty-print JSON output |
| --timing | Show request timing info |
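As a sketch, several of these options can be combined in one call (every flag comes from the table above; the URL and output path are placeholders):

```shell
# Scrape a JS-heavy page: wait 2 seconds for rendering, keep only the
# main content, and save pretty-printed JSON to a file
firecrawl scrape https://example.com \
  --only-main-content \
  --wait-for 2000 \
  --format json -o data.json --pretty
```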