# seo-dungeon
Crawls up to 500 pages of a website, detects business type, delegates to up to 15 SEO specialist subagents (technical, content, schema, performance, etc.), generates health score and prioritized action plan.
Install the plugin with:

```
npx claudepluginhub avalonreset/seo-dungeon --plugin seo-dungeon
```

The skill takes a single `[url]` argument and uses the workspace's default tool permissions.
1. **Fetch homepage**: use `scripts/fetch_page.py` to retrieve HTML
Orchestrates broad SEO audits of websites, covering technical SEO, on-page SEO, schema, sitemaps, content quality, AI search readiness, and GEO. Use it as the umbrella command for full audits via `/seo audit <url>`.
Specialist subagents:

- `seo-technical` -- robots.txt, sitemaps, canonicals, Core Web Vitals, security headers
- `seo-content` -- E-E-A-T, readability, thin content, AI citation readiness
- `seo-schema` -- detection, validation, generation recommendations
- `seo-sitemap` -- structure analysis, quality gates, missing pages
- `seo-performance` -- LCP, INP, CLS measurements
- `seo-visual` -- screenshots, mobile testing, above-fold analysis
- `seo-geo` -- AI crawler access, llms.txt, citability, brand mention signals
- `seo-local` -- GBP signals, NAP consistency, reviews, local schema, industry-specific local factors (spawn when a Local Service industry is detected: brick-and-mortar, SAB, or hybrid business type)
- `seo-maps` -- geo-grid rank tracking, GBP audit, review intelligence, competitor radius mapping (spawn when Local Service is detected AND the DataForSEO MCP is available)
- `seo-google` -- CWV field data (CrUX), URL indexation (GSC), organic traffic (GA4) (spawn when Google API credentials are detected via `python scripts/google_auth.py --check`)
- `seo-backlinks` -- backlink profile data: DA/PA, referring domains, anchor text, toxic links (spawn when Moz or Bing API credentials are detected via `python scripts/backlinks_auth.py --check`; always include Common Crawl domain-level metrics)
- `seo-cluster` -- semantic clustering analysis (spawn when content strategy signals are detected: blog, pillar pages, topic clusters)
- `seo-sxo` -- search experience analysis: page-type mismatch, user stories, persona scoring (always include in full audits)
- `seo-drift` -- drift analysis: compare against the stored baseline (spawn when a drift baseline exists for the URL via `python scripts/drift_history.py <url>`)
- `seo-ecommerce` -- product schema, marketplace intelligence (spawn when the E-commerce industry is detected)

Crawl settings:

Max pages: 500
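The conditional spawn rules above can be sketched as a simple selection function. This is a hypothetical illustration, not the plugin's actual dispatch code: the `Site` dataclass and its detection flags are assumptions, while the agent names come from the list above.

```python
# Hypothetical sketch of subagent selection from detected site signals.
# The Site dataclass and its fields are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Site:
    industry: str = "generic"          # e.g. "local-service", "e-commerce"
    has_blog: bool = False             # content-strategy signal
    dataforseo_available: bool = False # DataForSEO MCP detected
    google_creds: bool = False         # google_auth.py --check passed


def select_subagents(site: Site) -> list[str]:
    # Core agents always included in a full audit
    agents = ["seo-technical", "seo-content", "seo-schema", "seo-sitemap",
              "seo-performance", "seo-visual", "seo-geo", "seo-sxo"]
    if site.industry == "local-service":
        agents.append("seo-local")
        if site.dataforseo_available:
            agents.append("seo-maps")   # needs Local Service AND DataForSEO
    if site.industry == "e-commerce":
        agents.append("seo-ecommerce")
    if site.has_blog:
        agents.append("seo-cluster")
    if site.google_creds:
        agents.append("seo-google")
    return agents
```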
Respect robots.txt: Yes
Follow redirects: Yes (max 3 hops)
Timeout per page: 30 seconds
Concurrent requests: 5
Delay between requests: 1 second
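A minimal polite-crawl loop honoring the limits above (max pages, robots.txt, per-page timeout, inter-request delay) might look like the sketch below. It is sequential for clarity, whereas the real crawler runs 5 concurrent requests; the user-agent string is an assumption, and the 3-hop redirect cap is left to urllib's default handler (a stricter cap would need a custom `HTTPRedirectHandler`).

```python
# Polite-crawl sketch; sequential stand-in for the concurrent crawler.
import time
import urllib.robotparser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

MAX_PAGES, TIMEOUT, DELAY = 500, 30, 1.0


def is_crawlable(robots, url, agent="seo-audit-bot"):
    """Check a URL against the parsed robots.txt rules."""
    return robots.can_fetch(agent, url)


def crawl(start_url, robots):
    queue, pages = [start_url], {}
    while queue and len(pages) < MAX_PAGES:
        url = queue.pop(0)
        if not is_crawlable(robots, url):
            continue  # respect robots.txt: skip disallowed paths
        try:
            req = Request(url, headers={"User-Agent": "seo-audit-bot"})
            with urlopen(req, timeout=TIMEOUT) as resp:  # follows redirects
                pages[url] = resp.read()
        except OSError:
            pages[url] = None  # record the failure; surface it in the report
        time.sleep(DELAY)      # 1-second delay between requests
    return pages
```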
Outputs:

- `FULL-AUDIT-REPORT.md`: comprehensive findings
- `ACTION-PLAN.md`: prioritized recommendations (Critical > High > Medium > Low)
- `screenshots/`: desktop + mobile captures (if Playwright available)

For a PDF report, run `scripts/google_report.py --type full`. This produces a white-cover enterprise report with a TOC, executive summary, charts (Lighthouse gauges, query bars, index donut), metric cards, threshold tables, prioritized recommendations with effort estimates, and an implementation roadmap. Always offer PDF generation after completing an audit.

Health score weights:

| Category | Weight |
|---|---|
| Technical SEO | 22% |
| Content Quality | 23% |
| On-Page SEO | 20% |
| Schema / Structured Data | 10% |
| Performance (CWV) | 10% |
| AI Search Readiness | 10% |
| Images | 5% |
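The table above implies a simple weighted average. The sketch below is an assumption about how the health score is combined, not the plugin's actual scoring code; renormalizing the weights when a category could not be audited (e.g. blocked by robots.txt) is likewise an illustrative choice.

```python
# Weighted health score using the category weights from the table above.
WEIGHTS = {
    "technical": 0.22, "content": 0.23, "on_page": 0.20,
    "schema": 0.10, "performance": 0.10, "ai_readiness": 0.10,
    "images": 0.05,
}


def health_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (0-100).

    Weights are renormalized over the categories actually present,
    so a skipped category does not drag the score to zero.
    """
    total_w = sum(WEIGHTS[c] for c in category_scores)
    return sum(WEIGHTS[c] * s for c, s in category_scores.items()) / total_w
```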
If DataForSEO MCP tools are available, spawn the seo-dataforseo agent alongside existing subagents to enrich the audit with live data: real SERP positions, backlink profiles with spam scores, on-page analysis (Lighthouse), business listings, and AI visibility checks (ChatGPT scraper, LLM mentions).
If Google API credentials are configured (`python scripts/google_auth.py --check`), spawn the seo-google agent to enrich the audit with real Google field data: CrUX Core Web Vitals (replaces lab-only estimates), GSC URL indexation status, search performance (clicks, impressions, CTR), and GA4 organic traffic trends. The Performance (CWV) category score benefits most from field data.
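Credential gating of this kind can be sketched as a subprocess check. The exit-code-0-means-configured convention is an assumption based on the `--check` command shown above, not documented behavior of the script.

```python
# Hypothetical gating check: spawn an enrichment agent only when the
# credential-check command exits 0.
import subprocess


def creds_available(cmd=("python", "scripts/google_auth.py", "--check")) -> bool:
    """Return True when the check command exits with status 0."""
    return subprocess.run(list(cmd), capture_output=True).returncode == 0
```

The same helper would cover the seo-backlinks gate by swapping in `scripts/backlinks_auth.py --check`.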
| Scenario | Action |
|---|---|
| URL unreachable (DNS failure, connection refused) | Report the error clearly. Do not guess site content. Suggest the user verify the URL and try again. |
| robots.txt blocks crawling | Report which paths are blocked. Analyze only accessible pages and note the limitation in the report. |
| Rate limiting (429 responses) | Back off and reduce concurrent requests. Report partial results with a note on which sections could not be completed. |
| Timeout on large sites (500+ pages) | Cap the crawl at the timeout limit. Report findings for pages crawled and estimate total site scope. |
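The 429 row above calls for backing off before retrying. A minimal exponential-backoff sketch, with retry count and base delay as illustrative assumptions (the table only specifies "back off and reduce concurrent requests"):

```python
# Exponential backoff on HTTP 429 responses; gives up after `retries`
# attempts so the audit can report partial results instead of hanging.
import time


def fetch_with_backoff(fetch, url, retries=4, base_delay=2.0):
    """fetch(url) returns (status_code, body); on 429, wait
    base_delay * 2**attempt seconds and retry."""
    for attempt in range(retries):
        status, body = fetch(url)
        if status != 429:
            return status, body
        time.sleep(base_delay * 2 ** attempt)  # 2s, 4s, 8s, ...
    return 429, None  # give up: caller notes the incomplete section
```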