Scrape websites using Firecrawl MCP and save content to research folders
Scrape websites using Firecrawl MCP and automatically save organized content to research folders with metadata. Use this when you need to capture web content for documentation, research, or reference purposes.
/plugin marketplace add AojdevStudio/dev-utils-marketplace
/plugin install research-intelligence@dev-utils-marketplace
Model: claude-sonnet-4-5-20250929
This command scrapes websites using the Firecrawl MCP and saves the content to organized research folders within the desktop-commander documentation system.
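The install steps above assume the Firecrawl MCP server is already available to Claude Code. A minimal sketch of one way to register it, assuming the firecrawl-mcp npm package and an API key from Firecrawl (the exact setup may differ; check Firecrawl's MCP documentation):

```bash
# Sketch: register the Firecrawl MCP server with Claude Code.
# Assumes the firecrawl-mcp npm package and a FIRECRAWL_API_KEY; adjust to your setup.
claude mcp add firecrawl -e FIRECRAWL_API_KEY=fc-your-key -- npx -y firecrawl-mcp
```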
$ARGUMENTS
Usage Examples:
/scrape-site https://docs.anthropic.com/claude/guide - Scrape and auto-organize in the research folder
/scrape-site https://example.com/api "api-docs" - Scrape and save to a specific subfolder
/scrape-site https://github.com/owner/repo/wiki "github-wiki" - Save with a custom folder name

Arguments: $ARGUMENTS (the first argument is always the URL to scrape; the optional second argument names the subfolder)

Output location: docs/research/[domain-or-subfolder]/ - content is organized under docs/research/ by domain or topic

Supporting shell commands (list existing research folders and capture today's date):
ls -la docs/context7-research/ docs/research/ 2>/dev/null | head -10
date "+%Y-%m-%d"
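The filing scheme above amounts to: pick a subfolder (the second argument, or else the URL's domain), create it under docs/research/, and write the scraped page as a date-stamped markdown file that records its source URL. A minimal bash sketch of that filing step, assuming the page content itself is obtained separately via the Firecrawl MCP; the file naming and metadata fields are illustrative, not taken from the plugin:

```bash
#!/usr/bin/env bash
# Illustrative sketch only: how scraped content could be filed under docs/research/.
# $1 = URL to scrape, $2 = optional subfolder name (mirrors /scrape-site's arguments).
set -euo pipefail

url="$1"
subfolder="${2:-}"

# Derive the folder from the URL's domain when no subfolder is given,
# e.g. https://docs.anthropic.com/claude/guide -> docs.anthropic.com
domain="$(echo "$url" | sed -E 's#^[a-z]+://([^/]+).*#\1#')"
folder="docs/research/${subfolder:-$domain}"
mkdir -p "$folder"

# Date-stamped file name plus a small metadata header.
today="$(date "+%Y-%m-%d")"
slug="$(echo "$url" | sed -E 's#^[a-z]+://##; s#[^A-Za-z0-9]+#-#g' | cut -c1-60)"
outfile="$folder/${today}-${slug}.md"

{
  echo "---"
  echo "source_url: $url"
  echo "scraped_at: $today"
  echo "---"
  echo
  # Placeholder: the real command obtains this content via the Firecrawl MCP.
  echo "(scraped markdown content goes here)"
} > "$outfile"

echo "Saved to $outfile"
```

For example, invoking this sketch with https://example.com/api and api-docs would produce a file like docs/research/api-docs/&lt;today&gt;-example-com-api.md with the source URL and scrape date in its header.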