From firecrawl-pack
Upgrades @mendable/firecrawl-js SDK and migrates from v0/v1 to v2 API, fixing crawlUrl, scrapeUrl, async methods, and extract schemas.
```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin firecrawl-pack
```

This skill is limited to using the following tools:

```bash
!`npm list @mendable/firecrawl-js 2>/dev/null | grep firecrawl || echo 'Not installed'`
```
Guide for upgrading @mendable/firecrawl-js SDK versions and migrating from Firecrawl API v0/v1 to v2. Covers breaking changes in import paths, method signatures, response formats, and the new extract v2 schema format.
| SDK Version | API Version | Key Changes |
|---|---|---|
| 1.x | v1 | `asyncCrawlUrl`, `checkCrawlStatus`, `mapUrl` added |
| 0.x | v0 | Legacy `crawlUrl` with `waitUntilDone` param |
```bash
set -euo pipefail

# Check installed version
npm list @mendable/firecrawl-js

# Check latest available
npm view @mendable/firecrawl-js version
```
```bash
set -euo pipefail

git checkout -b upgrade/firecrawl-sdk
npm install @mendable/firecrawl-js@latest
npm test
```
```typescript
// No change needed: the import path has been stable across versions
import FirecrawlApp from "@mendable/firecrawl-js";
```
```typescript
// BEFORE (v0): crawlUrl with waitUntilDone
const result = await firecrawl.crawlUrl("https://example.com", {
  crawlerOptions: { limit: 50 },
  pageOptions: { onlyMainContent: true },
  waitUntilDone: true,
});
```

```typescript
// AFTER (v1+): crawlUrl waits for completion, or use asyncCrawlUrl
const result = await firecrawl.crawlUrl("https://example.com", {
  limit: 50,
  scrapeOptions: {
    formats: ["markdown"],
    onlyMainContent: true,
  },
});

// For large crawls, use async with polling
const job = await firecrawl.asyncCrawlUrl("https://example.com", {
  limit: 500,
  scrapeOptions: { formats: ["markdown"] },
});
const status = await firecrawl.checkCrawlStatus(job.id);
```
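A job started with `asyncCrawlUrl` has to be polled until it finishes. A minimal polling sketch, assuming the status object exposes a `status` field that eventually reaches `"completed"` (the `pollUntil` helper below is illustrative, not part of the SDK):

```typescript
// Generic polling helper (illustrative; not part of @mendable/firecrawl-js)
async function pollUntil<T>(
  check: () => Promise<T>,         // fetches the current status
  isDone: (result: T) => boolean,  // decides when to stop
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check();
    if (isDone(result)) return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Polling gave up after ${maxAttempts} attempts`);
}

// Usage sketch: poll the crawl job until it reports completion
// const done = await pollUntil(
//   () => firecrawl.checkCrawlStatus(job.id),
//   (s) => s.status === "completed",
// );
```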
```typescript
// BEFORE (v0)
await firecrawl.scrapeUrl("https://example.com", {
  pageOptions: { onlyMainContent: true },
  extractorOptions: { mode: "llm-extraction", schema: mySchema },
});
```

```typescript
// AFTER (v1+)
await firecrawl.scrapeUrl("https://example.com", {
  formats: ["markdown", "extract"],
  onlyMainContent: true,
  extract: { schema: mySchema },
});
```
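The `mySchema` referenced above is a plain JSON Schema object. A sketch of what it might contain, with made-up field names:

```typescript
// Illustrative JSON Schema for structured extraction (field names are made up)
const productSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    price: { type: "string" },
    inStock: { type: "boolean" },
  },
  required: ["title"],
};

// Passed as the `schema` in the extract options, e.g.:
// await firecrawl.scrapeUrl(url, {
//   formats: ["markdown", "extract"],
//   extract: { schema: productSchema },
// });
```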
```typescript
// BEFORE (v1): extract as a top-level option
await firecrawl.scrapeUrl(url, {
  formats: ["extract"],
  extract: { schema: { type: "object", ... } },
});

// AFTER (v2): schema embedded in the formats array
// Note: the SDK handles this internally, but the REST API changed:
// POST /v2/extract with { urls: [...], schema: {...} }
```
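For callers hitting the REST API directly, the v2 payload can be sketched as a small request builder. The endpoint path, base URL, and header names below are assumptions based on the note above; verify them against the current API reference before relying on this:

```typescript
// Hypothetical request builder for the v2 extract endpoint (shapes assumed)
function buildExtractRequest(urls: string[], schema: object, apiKey: string) {
  return {
    url: "https://api.firecrawl.dev/v2/extract", // base URL assumed
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ urls, schema }),
    },
  };
}

// Usage sketch:
// const { url, init } = buildExtractRequest(["https://example.com"], mySchema, key);
// const res = await fetch(url, init);
```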
```typescript
// mapUrl: fast URL discovery (not available in v0)
const map = await firecrawl.mapUrl("https://example.com");
console.log(map.links);

// batchScrapeUrls: scrape multiple URLs at once
const batch = await firecrawl.batchScrapeUrls(
  ["https://a.com", "https://b.com"],
  { formats: ["markdown"] }
);

// asyncBatchScrapeUrls + checkBatchScrapeStatus for large batches
const job = await firecrawl.asyncBatchScrapeUrls(urls, { formats: ["markdown"] });
const status = await firecrawl.checkBatchScrapeStatus(job.id);
```
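Batch results come back per URL, so one failed page should not sink the whole batch. A sketch that separates successes from failures, assuming each item carries a `url` plus either a `markdown` payload or an `error` (this item shape is an assumption, not the SDK's documented type):

```typescript
// Assumed item shape; check the SDK's actual response types before relying on this
interface BatchItem {
  url: string;
  markdown?: string;
  error?: string;
}

// Split a batch result into scraped pages and failures
function partitionBatch(items: BatchItem[]) {
  const ok = items.filter((i) => i.markdown !== undefined);
  const failed = items.filter((i) => i.markdown === undefined);
  return { ok, failed };
}
```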
```bash
set -euo pipefail

npm test

# Quick integration check
npx tsx -e "
import FirecrawlApp from '@mendable/firecrawl-js';
const fc = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY! });
const r = await fc.scrapeUrl('https://example.com', { formats: ['markdown'] });
console.log('Success:', r.success, 'Chars:', r.markdown?.length);
"
```
```bash
set -euo pipefail

# Pin to the previous version
npm install @mendable/firecrawl-js@1.x.x --save-exact
npm test
```
- `crawlerOptions` / `pageOptions` → flat top-level options plus `scrapeOptions`
- `waitUntilDone: true` → use `crawlUrl` (waits for completion) or `asyncCrawlUrl` + polling
- `extractorOptions` → `extract` with a schema or prompt
- Responses: `data` array for crawl results, `markdown`/`html` fields for scrape
- New methods: `mapUrl`, `batchScrapeUrls`, `asyncBatchScrapeUrls`

| Issue | Cause | Solution |
|---|---|---|
| `crawlerOptions` is not valid | Using v0 params on v1+ | Flatten to top-level options |
| `waitUntilDone` is not valid | Removed in v1 | Use `asyncCrawlUrl` + `checkCrawlStatus` |
| `pageOptions` not recognized | Renamed in v1 | Use `scrapeOptions` inside crawl options |
| Missing `mapUrl` method | SDK too old | Upgrade to the latest version |
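Several of these errors come down to an outdated SDK. A quick runtime probe for the methods v1 introduced can fail fast before any network call (the helper name is illustrative):

```typescript
// Detect a pre-1.x client by probing for methods added in SDK 1.x
function hasV1Api(client: Record<string, unknown>): boolean {
  return ["mapUrl", "asyncCrawlUrl", "checkCrawlStatus"].every(
    (method) => typeof client[method] === "function",
  );
}

// Usage sketch:
// if (!hasV1Api(firecrawl)) throw new Error("Upgrade @mendable/firecrawl-js to >= 1.x");
```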
For CI integration during upgrades, see `firecrawl-ci-integration`.