Firecrawl web scraping API via curl. Use this skill to scrape webpages, crawl websites, discover URLs, search the web, or extract structured data.
```
/plugin marketplace add vm0-ai/api0
/plugin install api0@api0
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Use the Firecrawl API via direct curl calls to scrape websites and extract data for AI.
Official docs: https://docs.firecrawl.dev/
Use this skill when you need to:

- Scrape a single webpage
- Crawl an entire website
- Discover the URLs on a site
- Search the web
- Extract structured data from pages

## Setup

Set your API key as an environment variable:

```bash
export FIRECRAWL_API_KEY="fc-your-api-key"
```
Important: When using `$VAR` in a command that pipes to another command, wrap the command containing `$VAR` in `bash -c '...'`. Due to a Claude Code bug, environment variables are silently cleared when pipes are used directly.

```bash
bash -c 'curl -s "https://api.example.com" -H "Authorization: Bearer $API_KEY"' | jq .
```
All examples below assume you have FIRECRAWL_API_KEY set.
Base URL: https://api.firecrawl.dev/v1
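Before making any requests, it can help to confirm the key is actually present in the current shell. A minimal offline sanity check (no API call is made):

```shell
# Offline sanity check: verify the API key is set before making requests.
if [ -z "${FIRECRAWL_API_KEY:-}" ]; then
  echo "FIRECRAWL_API_KEY is not set" >&2
else
  echo "API key configured (${#FIRECRAWL_API_KEY} characters)"
fi
```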
## Scrape

Extract content from a single webpage.
Basic scrape:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/scrape" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://example.com",
  "formats": ["markdown"]
}'"'"' | jq .'
```
Scrape main content only, with a timeout:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/scrape" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://docs.example.com/api",
  "formats": ["markdown"],
  "onlyMainContent": true,
  "timeout": 30000
}'"'"' | jq '"'"'.data.markdown'"'"''
```
Get the page HTML:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/scrape" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://example.com",
  "formats": ["html"]
}'"'"' | jq '"'"'.data.html'"'"''
```
Capture a screenshot:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/scrape" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://example.com",
  "formats": ["screenshot"]
}'"'"' | jq '"'"'.data.screenshot'"'"''
```
Scrape Parameters:

| Parameter | Type | Description |
|---|---|---|
| `url` | string | URL to scrape (required) |
| `formats` | array | `markdown`, `html`, `rawHtml`, `screenshot`, `links` |
| `onlyMainContent` | boolean | Skip headers/footers |
| `timeout` | number | Timeout in milliseconds |
## Crawl

Crawl all pages of a website (async operation).
Start a crawl:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/crawl" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://example.com",
  "limit": 50,
  "maxDepth": 2
}'"'"' | jq .'
```
Response:

```json
{
  "success": true,
  "id": "crawl-job-id-here"
}
```
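To use the returned job ID in follow-up commands, you can parse it out with jq. A sketch against a saved response body (offline, no network call; the ID is a placeholder):

```shell
# Parse the crawl job ID out of a stored response body.
RESPONSE='{"success": true, "id": "crawl-job-id-here"}'
JOB_ID=$(echo "$RESPONSE" | jq -r '.id')
echo "$JOB_ID"   # prints: crawl-job-id-here
```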
Check crawl status (replace {JOB_ID} with the actual job ID returned from the crawl request):

```bash
bash -c 'curl -s "https://api.firecrawl.dev/v1/crawl/{JOB_ID}" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}"' | jq '{status, completed, total}'
```
Get crawl results (replace {JOB_ID} with the actual job ID):

```bash
bash -c 'curl -s "https://api.firecrawl.dev/v1/crawl/{JOB_ID}" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}"' | jq '.data[] | {url: .metadata.url, title: .metadata.title}'
```
Crawl with include/exclude paths:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/crawl" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://blog.example.com",
  "limit": 20,
  "maxDepth": 3,
  "includePaths": ["/posts/*"],
  "excludePaths": ["/admin/*", "/login"]
}'"'"' | jq .'
```
Crawl Parameters:

| Parameter | Type | Description |
|---|---|---|
| `url` | string | Starting URL (required) |
| `limit` | number | Max pages to crawl (default: 100) |
| `maxDepth` | number | Max crawl depth (default: 3) |
| `includePaths` | array | Paths to include (e.g., `/blog/*`) |
| `excludePaths` | array | Paths to exclude |
## Map

Get all URLs from a website quickly.
Map all URLs on a site:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/map" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://example.com"
}'"'"' | jq '"'"'.links[:10]'"'"''
```
Filter mapped URLs by keyword:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/map" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://shop.example.com",
  "search": "product",
  "limit": 500
}'"'"' | jq '"'"'.links'"'"''
```
Map Parameters:

| Parameter | Type | Description |
|---|---|---|
| `url` | string | Website URL (required) |
| `search` | string | Filter URLs containing keyword |
| `limit` | number | Max URLs to return (default: 1000) |
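Map responses can also be post-processed locally with jq, for example to count the links or keep only those matching a path. A sketch against an inline sample payload (offline; the URLs are illustrative):

```shell
# Count mapped URLs and keep only blog links from a sample map response.
RESPONSE='{"success": true, "links": ["https://example.com/", "https://example.com/about", "https://example.com/blog/post-1"]}'
echo "$RESPONSE" | jq '.links | length'                          # prints: 3
echo "$RESPONSE" | jq -r '.links[] | select(contains("/blog/"))'
```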
## Search

Search the web and get full page content.
Basic search:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/search" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "query": "AI news 2024",
  "limit": 5
}'"'"' | jq '"'"'.data[] | {title: .metadata.title, url: .url}'"'"''
```
Search and scrape each result:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/search" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "query": "machine learning tutorials",
  "limit": 3,
  "scrapeOptions": {
    "formats": ["markdown"]
  }
}'"'"' | jq '"'"'.data[] | {title: .metadata.title, content: .markdown[:500]}'"'"''
```
Search Parameters:

| Parameter | Type | Description |
|---|---|---|
| `query` | string | Search query (required) |
| `limit` | number | Number of results (default: 10) |
| `scrapeOptions` | object | Options for scraping results |
## Extract

Extract structured data from pages using AI.
Extract with a natural-language prompt:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/extract" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "urls": ["https://example.com/product/123"],
  "prompt": "Extract the product name, price, and description"
}'"'"' | jq '"'"'.data'"'"''
```
Extract with a JSON schema:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/extract" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "urls": ["https://example.com/product/123"],
  "prompt": "Extract product information",
  "schema": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "price": {"type": "number"},
      "currency": {"type": "string"},
      "inStock": {"type": "boolean"}
    }
  }
}'"'"' | jq '"'"'.data'"'"''
```
Extract from multiple URLs:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/extract" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "urls": [
    "https://example.com/product/1",
    "https://example.com/product/2"
  ],
  "prompt": "Extract product name and price"
}'"'"' | jq '"'"'.data'"'"''
```
Extract Parameters:

| Parameter | Type | Description |
|---|---|---|
| `urls` | array | URLs to extract from (required) |
| `prompt` | string | Description of data to extract (required) |
| `schema` | object | JSON schema for structured output |
## Common Workflows

Save a documentation page as markdown:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/scrape" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://docs.python.org/3/tutorial/",
  "formats": ["markdown"],
  "onlyMainContent": true
}'"'"' | jq -r '"'"'.data.markdown'"'"' > python-tutorial.md'
```
List blog post URLs:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/map" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "url": "https://blog.example.com",
  "search": "post"
}'"'"' | jq -r '"'"'.links[]'"'"''
```
Research a topic with full content:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/search" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "query": "best practices REST API design 2024",
  "limit": 5,
  "scrapeOptions": {"formats": ["markdown"]}
}'"'"' | jq '"'"'.data[] | {title: .metadata.title, url: .url}'"'"''
```
Extract pricing tiers:

```bash
bash -c 'curl -s -X POST "https://api.firecrawl.dev/v1/extract" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}" -H "Content-Type: application/json" -d '"'"'{
  "urls": ["https://example.com/pricing"],
  "prompt": "Extract all pricing tiers with name, price, and features"
}'"'"' | jq '"'"'.data'"'"''
```
Poll a crawl job until it finishes (replace {JOB_ID} with the actual job ID):

```bash
while true; do
  STATUS="$(bash -c 'curl -s "https://api.firecrawl.dev/v1/crawl/{JOB_ID}" -H "Authorization: Bearer ${FIRECRAWL_API_KEY}"' | jq -r '.status')"
  echo "Status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  # Also stop on failure so the loop cannot spin forever.
  [ "$STATUS" = "failed" ] && { echo "Crawl failed" >&2; break; }
  sleep 5
done
```
A successful scrape response:

```json
{
  "success": true,
  "data": {
    "markdown": "# Page Title\n\nContent...",
    "metadata": {
      "title": "Page Title",
      "description": "...",
      "url": "https://..."
    }
  }
}
```
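Individual fields can be pulled out of that shape with jq, for example the page title and a short markdown preview. A sketch against an inline sample (offline; the content is illustrative):

```shell
# Pull the title and a markdown preview from a stored scrape response.
RESPONSE='{"success": true, "data": {"markdown": "# Page Title\n\nContent...", "metadata": {"title": "Page Title", "url": "https://example.com"}}}'
echo "$RESPONSE" | jq -r '.data.metadata.title'   # prints: Page Title
echo "$RESPONSE" | jq -r '.data.markdown[:20]'
```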
A completed crawl status response:

```json
{
  "success": true,
  "status": "completed",
  "completed": 50,
  "total": 50,
  "data": [...]
}
```
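The `completed` and `total` fields make it easy to print progress while polling. A jq sketch against a sample status payload (offline):

```shell
# Format crawl progress from a stored status response.
RESPONSE='{"success": true, "status": "scraping", "completed": 25, "total": 50}'
echo "$RESPONSE" | jq -r '"\(.status): \(.completed)/\(.total) pages"'
# prints: scraping: 25/50 pages
```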
## Tips

- Set `limit` values to control API usage
- Use `onlyMainContent: true` for cleaner output
- Poll `/crawl/{id}` for status
- Check the `success` field in responses