Execute FireCrawl primary workflow: Core Workflow A. Use when implementing primary use case, building main features, or core integration tasks. Trigger with phrases like "firecrawl main workflow", "primary task with firecrawl".
From firecrawl-pack. Install with `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin firecrawl-pack`. This skill is limited to using the following tools:
Primary money-path workflow for FireCrawl. This is the most common use case. FireCrawl is a web scraping and crawling API that converts any website into clean, LLM-ready Markdown or structured data. It handles JavaScript rendering, login-gated pages, and pagination automatically, which removes the need for custom browser automation scripts when you need to extract content from modern web applications.
First complete setup via firecrawl-install-auth. Connect to the FireCrawl API with your API key and specify the target URL you want to scrape. Choose between single-page scrape mode for individual URLs and full-crawl mode for entire sites. Configure the output format (Markdown, HTML, or structured JSON) and set any include or exclude URL path patterns to focus the crawl scope.
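The setup step above can be sketched as follows. This is a minimal sketch, assuming FireCrawl's v1 REST API: the `https://api.firecrawl.dev/v1` base URL, the `/scrape` and `/crawl` endpoints, the request field names (`formats`, `includePaths`, `excludePaths`, `limit`), and the `FIRECRAWL_API_KEY` environment variable are all assumptions, not verified here.

```typescript
// Assumed v1 base URL for the FireCrawl API.
const FIRECRAWL_API = "https://api.firecrawl.dev/v1";

interface CrawlOptions {
  mode: "scrape" | "crawl";     // single page vs. whole site
  formats?: string[];           // e.g. ["markdown"], ["html"]
  includePaths?: string[];      // restrict crawl scope, e.g. ["/docs/*"]
  excludePaths?: string[];
  limit?: number;               // max pages for crawl mode
}

// Build the JSON request body for either endpoint (pure, easy to test).
function buildRequestBody(url: string, opts: CrawlOptions): Record<string, unknown> {
  const body: Record<string, unknown> = {
    url,
    formats: opts.formats ?? ["markdown"],
  };
  if (opts.mode === "crawl") {
    if (opts.includePaths) body.includePaths = opts.includePaths;
    if (opts.excludePaths) body.excludePaths = opts.excludePaths;
    body.limit = opts.limit ?? 100;
  }
  return body;
}

// Submit to /scrape or /crawl with the API key as a Bearer token.
async function submitJob(url: string, opts: CrawlOptions): Promise<unknown> {
  const res = await fetch(`${FIRECRAWL_API}/${opts.mode}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildRequestBody(url, opts)),
  });
  if (!res.ok) throw new Error(`FireCrawl request failed: ${res.status}`);
  return res.json();
}
```

Keeping the body construction pure and separate from the network call makes scope configuration straightforward to unit-test before spending crawl credits.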
Submit the scrape or crawl job and monitor its progress via the job status endpoint. FireCrawl queues the work asynchronously for multi-page crawls. Poll for completion or set up a webhook callback. Once complete, retrieve the extracted content and validate that it covers the pages you intended, checking for any blocked or failed URLs.
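A polling loop for the asynchronous crawl described above might look like this. The `GET /crawl/{id}` endpoint and the `status`/`data`/`sourceURL` response fields are assumptions modeled on FireCrawl's v1 API; adjust them to match the actual response shape you receive.

```typescript
interface CrawlStatus {
  status: "scraping" | "completed" | "failed";
  data?: Array<{ markdown?: string; metadata?: { sourceURL?: string } }>;
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Poll the job status endpoint until the crawl completes or fails.
async function pollCrawl(
  jobId: string,
  apiKey: string,
  intervalMs = 5000,
  maxAttempts = 120,
): Promise<CrawlStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`https://api.firecrawl.dev/v1/crawl/${jobId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`status check failed: ${res.status}`);
    const job = (await res.json()) as CrawlStatus;
    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error("crawl job failed");
    await sleep(intervalMs);
  }
  throw new Error("crawl did not finish within the polling window");
}

// Coverage validation: report expected URLs missing from the results.
function missingUrls(job: CrawlStatus, expected: string[]): string[] {
  const seen = new Set((job.data ?? []).map((page) => page.metadata?.sourceURL));
  return expected.filter((url) => !seen.has(url));
}
```

For long crawls a webhook callback avoids the polling loop entirely; the coverage check is useful either way to catch blocked or failed URLs before downstream processing.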
Post-process the extracted Markdown or JSON: strip navigation boilerplate if present, split long documents into chunks suitable for embedding, and store the results in your vector database or knowledge base. Record the crawl metadata (URLs visited, extraction timestamp, token count) for provenance tracking.
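The post-processing step can be sketched with a few pure helpers. The boilerplate heuristic, chunk size, and metadata shape below are illustrative assumptions, not FireCrawl conventions; tune them for your embedding model and storage layer.

```typescript
interface CrawlRecord {
  sourceUrl: string;
  extractedAt: string;   // ISO timestamp for provenance
  chunkCount: number;
}

// Drop lines that look like navigation boilerplate (link-only lines).
function stripNavBoilerplate(markdown: string): string {
  return markdown
    .split("\n")
    .filter((line) => !/^\s*(\[[^\]]*\]\([^)]*\)\s*)+$/.test(line))
    .join("\n");
}

// Split a document into roughly fixed-size chunks on paragraph
// boundaries, suitable for embedding.
function chunkMarkdown(markdown: string, maxChars = 2000): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of markdown.split(/\n\n+/)) {
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);
      current = "";
    }
    current = current ? `${current}\n\n${para}` : para;
  }
  if (current) chunks.push(current);
  return chunks;
}

// Provenance record to store alongside the chunks.
function makeRecord(sourceUrl: string, chunks: string[]): CrawlRecord {
  return {
    sourceUrl,
    extractedAt: new Date().toISOString(),
    chunkCount: chunks.length,
  };
}
```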
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Missing or invalid API key | Verify the API key value and that it is sent as a `Bearer` token |
| 429 Too Many Requests | Rate limit exceeded | Back off and retry with an increasing delay, or reduce crawl concurrency |
| Timeout on JavaScript-heavy pages | Page needs more render time than the default allows | Increase the request timeout or wait settings before giving up |
| Incomplete crawl results | Include/exclude path patterns too narrow, or pages blocked or login-gated | Widen the path patterns and confirm gated pages are reachable with your configuration |
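Putting the three steps together, an end-to-end run might look like the sketch below. As before, the v1 endpoints, response field names, and the `FIRECRAWL_API_KEY` environment variable are assumptions for illustration.

```typescript
const API = "https://api.firecrawl.dev/v1";

type Page = { markdown?: string; metadata?: { sourceURL?: string } };

// Pure: turn raw crawl pages into (url, text) documents, skipping empty pages.
function toDocuments(pages: Page[]): Array<{ url: string; text: string }> {
  return pages
    .filter((p) => p.markdown && p.markdown.trim().length > 0)
    .map((p) => ({ url: p.metadata?.sourceURL ?? "unknown", text: p.markdown as string }));
}

async function crawlSite(startUrl: string): Promise<Array<{ url: string; text: string }>> {
  const headers = {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    "Content-Type": "application/json",
  };

  // Step 1: submit the crawl job.
  const submit = await fetch(`${API}/crawl`, {
    method: "POST",
    headers,
    body: JSON.stringify({ url: startUrl, limit: 50, formats: ["markdown"] }),
  });
  if (!submit.ok) throw new Error(`submit failed: ${submit.status}`);
  const { id } = (await submit.json()) as { id: string };

  // Step 2: poll until the job completes.
  for (;;) {
    const res = await fetch(`${API}/crawl/${id}`, { headers });
    const job = (await res.json()) as { status: string; data?: Page[] };
    if (job.status === "completed") {
      // Step 3: post-process into documents ready for chunking and storage.
      return toDocuments(job.data ?? []);
    }
    if (job.status === "failed") throw new Error("crawl failed");
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}
```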
For the secondary workflow, see firecrawl-core-workflow-b.