Scrape websites using Firecrawl MCP and save content to research folders
From the research-intelligence plugin. Install with: npx claudepluginhub aojdevstudio/dev-utils-marketplace --plugin research-intelligence
This command scrapes websites using the Firecrawl MCP server and saves the content to organized research folders within the desktop-commander documentation system.
$ARGUMENTS
Usage Examples:
/scrape-site https://docs.anthropic.com/claude/guide - Scrape and auto-organize in research folder
/scrape-site https://example.com/api "api-docs" - Scrape and save to specific subfolder
/scrape-site https://github.com/owner/repo/wiki "github-wiki" - Save with custom folder name

Arguments: $ARGUMENTS (first argument is always the URL to scrape)
Output location: docs/research/[domain-or-subfolder]/
Research root: docs/research/ (organized by domain/topic)

Context commands:
ls -la docs/context7-research/ docs/research/ 2>/dev/null | head -10
date "+%Y-%m-%d"
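The folder-organization step above can be sketched in shell. This is an assumed illustration of the behavior, not the plugin's actual implementation: it derives a subfolder name from the URL's domain (or a user-supplied name) and prepares a dated destination under docs/research/.

```shell
# Sketch of the save flow (assumption: the plugin names folders by domain
# when no subfolder argument is given).
url="https://docs.anthropic.com/claude/guide"
custom=""   # optional second argument, e.g. "api-docs"

# Extract the domain from the URL.
domain=$(printf '%s\n' "$url" | sed -E 's#https?://([^/]+).*#\1#')

# Prefer the custom subfolder name when provided.
dest="docs/research/${custom:-$domain}"
mkdir -p "$dest"

# Date stamp used when naming saved files.
stamp=$(date "+%Y-%m-%d")
echo "content would be saved under $dest (stamp: $stamp)"
```

Running this with the example URL creates docs/research/docs.anthropic.com/; passing "api-docs" as the second argument would use docs/research/api-docs/ instead.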