Implement FireCrawl reliability patterns including circuit breakers, idempotency, and graceful degradation. Use when building fault-tolerant FireCrawl integrations, implementing retry strategies, or adding resilience to production FireCrawl services. Trigger with phrases like "firecrawl reliability", "firecrawl circuit breaker", "firecrawl idempotent", "firecrawl resilience", "firecrawl fallback", "firecrawl bulkhead".
From firecrawl-pack. Install with `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin firecrawl-pack`.
Production reliability patterns for Firecrawl web scraping pipelines. Firecrawl's async crawl model, JavaScript rendering, and credit-based pricing create specific reliability challenges around job completion, content quality, and cost control.
Crawl jobs can take minutes. Implement proper polling with timeout and failure detection.
```typescript
import FirecrawlApp from '@mendable/firecrawl-js';

async function reliableCrawl(url: string, options: any, timeoutMs = 600_000) { // 10-minute default timeout
  const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });
  const crawl = await firecrawl.asyncCrawlUrl(url, options);
  const deadline = Date.now() + timeoutMs;
  let pollInterval = 2000; // start polling every 2 seconds
  while (Date.now() < deadline) {
    const status = await firecrawl.checkCrawlStatus(crawl.id);
    if (status.status === 'completed') return status;
    if (status.status === 'failed') throw new Error(`Crawl failed: ${status.error}`);
    await new Promise(r => setTimeout(r, pollInterval));
    pollInterval = Math.min(pollInterval * 1.5, 30_000); // back off, capped at 30 seconds
  }
  throw new Error(`Crawl timed out after ${timeoutMs}ms`);
}
```
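The interval growth in that loop can be factored into a pure helper, which makes the backoff schedule easy to unit-test in isolation (the 1.5× factor and 30-second cap mirror the loop above; the function name is illustrative):

```typescript
// Polling delay for the nth status check (0-indexed), starting at
// `baseMs` and growing 1.5x per attempt, capped at `capMs`.
function pollDelay(attempt: number, baseMs = 2000, capMs = 30_000): number {
  return Math.min(baseMs * Math.pow(1.5, attempt), capMs);
}

console.log(pollDelay(0)); // 2000
console.log(pollDelay(2)); // 4500
```

The first few delays are 2000, 3000, 4500, 6750 ms, reaching the 30-second cap after about seven checks.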
Scraped pages may return empty or boilerplate content. Validate before processing.
```typescript
interface ScrapedPage {
  url: string;
  markdown: string;
  metadata: { title?: string; statusCode?: number };
}

function validateContent(page: ScrapedPage): boolean {
  if (!page.markdown || page.markdown.length < 100) return false; // too short to be real content
  if (page.metadata.statusCode && page.metadata.statusCode >= 400) return false; // HTTP error response
  // Detect common error pages that return 200 with boilerplate bodies
  const errorPatterns = ['access denied', '403 forbidden', 'page not found', 'captcha'];
  const lower = page.markdown.toLowerCase();
  return !errorPatterns.some(p => lower.includes(p));
}
```
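In a pipeline, the validator partitions crawl results so only substantive pages reach downstream processing. The sketch below restates the check inline so it runs standalone; the sample pages are fabricated for illustration, not real Firecrawl output:

```typescript
interface ScrapedPage {
  url: string;
  markdown: string;
  metadata: { title?: string; statusCode?: number };
}

// Same checks as validateContent above: minimum length, HTTP status, error-page text.
function validateContent(page: ScrapedPage): boolean {
  if (!page.markdown || page.markdown.length < 100) return false;
  if (page.metadata.statusCode && page.metadata.statusCode >= 400) return false;
  const errorPatterns = ['access denied', '403 forbidden', 'page not found', 'captcha'];
  const lower = page.markdown.toLowerCase();
  return !errorPatterns.some(p => lower.includes(p));
}

const pages: ScrapedPage[] = [
  { url: 'https://example.com/docs', markdown: 'Real documentation content. '.repeat(10), metadata: { statusCode: 200 } },
  { url: 'https://example.com/gone', markdown: 'Page not found, sorry. '.repeat(10), metadata: { statusCode: 200 } },
];
const valid = pages.filter(validateContent);
console.log(`${valid.length}/${pages.length} pages passed validation`); // 1/2
```

Note the second page passes both the length and status checks and is caught only by the boilerplate-text scan, which is why all three checks are needed.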
Track credit usage per crawl to prevent budget overruns.
```typescript
class CreditTracker {
  private dailyUsage: Map<string, number> = new Map();
  private dailyLimit: number;

  constructor(dailyLimit = 5000) { this.dailyLimit = dailyLimit; } // default budget: 5000 credits/day

  canAfford(estimatedPages: number): boolean {
    const today = new Date().toISOString().split('T')[0]; // YYYY-MM-DD bucket
    const used = this.dailyUsage.get(today) || 0;
    return (used + estimatedPages) <= this.dailyLimit;
  }

  record(pages: number) {
    const today = new Date().toISOString().split('T')[0];
    this.dailyUsage.set(today, (this.dailyUsage.get(today) || 0) + pages);
  }
}
```
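Gating a crawl on the tracker looks like the sketch below. The class is restated compactly so the example runs on its own, and the tiny 100-credit budget is purely for demonstration:

```typescript
class CreditTracker {
  private dailyUsage: Map<string, number> = new Map();
  constructor(private dailyLimit = 5000) {}
  canAfford(estimatedPages: number): boolean {
    const today = new Date().toISOString().split('T')[0];
    return ((this.dailyUsage.get(today) ?? 0) + estimatedPages) <= this.dailyLimit;
  }
  record(pages: number) {
    const today = new Date().toISOString().split('T')[0];
    this.dailyUsage.set(today, (this.dailyUsage.get(today) ?? 0) + pages);
  }
}

const tracker = new CreditTracker(100); // tiny budget for demonstration
console.log(tracker.canAfford(80));     // true: nothing used yet
tracker.record(80);                     // a crawl consumed 80 credits
console.log(tracker.canAfford(80));     // false: only 20 credits left today
console.log(tracker.canAfford(20));     // true: exactly fits the remainder
```

Before dispatching a crawl, check `tracker.canAfford(estimatedPages)` and skip or defer the job if it fails; after completion, call `tracker.record()` with the actual page count from the crawl status.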
If a full crawl fails, fall back to scraping critical pages individually.
```typescript
async function resilientScrape(urls: string[]) {
  try {
    return await reliableCrawl(urls[0], { limit: urls.length });
  } catch (crawlError) {
    console.warn('Crawl failed, falling back to individual scrapes');
    const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });
    const results = [];
    for (const url of urls) {
      try {
        const result = await firecrawl.scrapeUrl(url, {
          formats: ['markdown'], onlyMainContent: true
        });
        results.push(result);
      } catch (e) { console.error(`Failed: ${url}`); }
      await new Promise(r => setTimeout(r, 1000)); // pace fallback requests 1 second apart
    }
    return results;
  }
}
```
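For the idempotency side of this skill, the Firecrawl SDK does not, to my knowledge, accept idempotency keys natively, so a common workaround is an application-level dedupe cache: derive a stable key from the crawl target and options, and refuse to resubmit an identical in-flight job. All names below are illustrative:

```typescript
import { createHash } from 'node:crypto';

// Derive a deterministic key from the crawl URL plus its options.
// Top-level keys are sorted so equivalent option objects hash identically
// (shallow canonicalization; nested objects would need a recursive sort).
function crawlKey(url: string, options: Record<string, unknown>): string {
  const canonical = JSON.stringify(Object.fromEntries(Object.entries(options).sort()));
  return createHash('sha256').update(`${url}|${canonical}`).digest('hex');
}

const inFlight = new Set<string>();

// Returns false instead of resubmitting an identical crawl.
function tryRegister(url: string, options: Record<string, unknown>): boolean {
  const key = crawlKey(url, options);
  if (inFlight.has(key)) return false;
  inFlight.add(key);
  return true;
}
```

On completion or failure, delete the key from `inFlight` so the same crawl can be legitimately re-run later; in a multi-process deployment the set would live in shared storage such as Redis rather than process memory.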
| Issue | Cause | Solution |
|---|---|---|
| Crawl times out | Large site, slow JS rendering | Set page limits and timeout |
| Empty markdown | Anti-bot or JS-rendered content | Increase waitFor, try individual scrape |
| Credit overrun | No budget tracking | Implement credit-aware circuit breaker |
| Partial crawl results | Site structure changes | Validate content, retry failed pages |
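The "credit-aware circuit breaker" in the table can be sketched as a plain failure-count breaker wrapped around any Firecrawl call; the threshold and cooldown here are illustrative defaults, not Firecrawl recommendations:

```typescript
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 60_000) {}

  async exec<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open: skipping Firecrawl call');
      }
      this.state = 'half-open'; // cooldown elapsed: let one probe request through
    }
    try {
      const result = await fn();
      this.state = 'closed'; // success closes the breaker and resets the count
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === 'half-open' || this.failures >= this.threshold) {
        this.state = 'open'; // trip: reject calls until the cooldown passes
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

Usage is `breaker.exec(() => reliableCrawl(url, options))`; while the breaker is open, calls fail fast without spending credits or API quota, and one probe per cooldown tests whether the service has recovered.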
Basic usage: wrap crawls in `reliableCrawl` and filter results through `validateContent`, keeping the default timeout, backoff, and length thresholds.
Advanced scenario: combine `CreditTracker`, per-crawl page limits, and the individual-scrape fallback for production pipelines, tuning timeouts, budgets, and error patterns per site and per team.