From kostja94-marketing-skills-5
Optimizes site crawlability by fixing orphan pages, redirect chains, broken links, pagination vs infinite scroll, site structure, and AI crawler issues for SEO.
Guides crawlability improvements: robots, X-Robots-Tag, site structure, and internal linking.
When invoking: On first use, if helpful, open with 1–2 sentences on what this skill covers and why it matters, then provide the main output. On subsequent use or when the user asks to skip, go directly to the main output.
Check for project context first: If .claude/project-context.md or .cursor/project-context.md exists, read it for site structure.
Assess the site structure against these principles:
| Principle | Guideline |
|---|---|
| Depth | Important pages within 3–4 clicks from homepage |
| Orphan pages | Add internal links to pages with no incoming links; see internal-links for link strategy |
| Hierarchy | Logical structure; hub pages link to content |
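The depth and orphan checks above can be sketched as a breadth-first traversal over the site's internal-link graph. The adjacency map below is a hypothetical input (e.g. exported from a crawl), not a live crawler:

```python
from collections import deque

def audit_structure(links, homepage, max_depth=4):
    """BFS from the homepage over an internal-link adjacency map.

    links: dict mapping each URL to the list of URLs it links to.
    Returns (depths, too_deep, orphans).
    """
    depths = {homepage: 0}
    queue = deque([homepage])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    all_pages = set(links) | {t for ts in links.values() for t in ts}
    too_deep = {p for p, d in depths.items() if d > max_depth}
    orphans = all_pages - set(depths)  # unreachable from the homepage
    return depths, too_deep, orphans
```

Pages in `too_deep` need links from shallower hub pages; pages in `orphans` need any incoming link at all (see internal-links).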
Problem: Crawlers do not emulate user behavior (scrolling, clicking "Load more"), so with infinite scroll, content loaded after the initial page load is not discoverable. The same applies to masonry + infinite scroll, lazy-loaded lists, and similar patterns.
Solution: Prefer pagination for key content. If keeping infinite scroll, make it search-friendly per Google's recommendations:
| Requirement | Practice |
|---|---|
| Component pages | Chunk content into paginated pages accessible without JavaScript |
| Full URLs | Each page has a unique URL (e.g. ?page=1, ?lastid=567); avoid fragment URLs (e.g. #page=1), which crawlers ignore |
| No overlap | Each item listed once in series; no duplication across pages |
| Direct access | URL works in new tab; no cookie/history dependency |
| pushState/replaceState | Update URL as user scrolls; enables back/forward, shareable links |
| 404 for out-of-bounds | ?page=999 returns 404 when only 998 pages exist |
Reference: Infinite scroll search-friendly recommendations (Google Search Central, 2014)
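The component-page requirements above (unique URLs, direct access, 404 for out-of-bounds) can be sketched framework-free; the item list and page size are illustrative assumptions:

```python
import math

PAGE_SIZE = 10

def resolve_page(items, page):
    """Resolve ?page=N to a chunk of items, or a 404 when out of bounds.

    Each page is addressable directly (no cookies, scroll state, or JS
    required), and every item appears on exactly one page.
    """
    total_pages = max(1, math.ceil(len(items) / PAGE_SIZE))
    if page < 1 or page > total_pages:
        return 404, None  # e.g. ?page=999 when only 998 pages exist
    start = (page - 1) * PAGE_SIZE
    return 200, items[start:start + PAGE_SIZE]
```

In a real app the same resolver backs both the paginated HTML pages and the infinite-scroll endpoint, so crawlers and users see identical content.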
Use rel="prev" / rel="next" where applicable.

Crawl budget is the number of URLs Googlebot will crawl on your site in a given period. Large sites (10,000+ pages) may waste up to 30% of crawl budget on duplicates, redirects, and low-value URLs.
| Waste source | Fix |
|---|---|
| Duplicate URLs | Canonical; consolidate; 301 to preferred |
| Redirect chains | Point directly to final URL |
| Parameter proliferation | Use rel="canonical"; consider Clean-param (Yandex) |
| Low-value pages | noindex for thin/duplicate; see indexing |
| Crawl traps | Avoid infinite URL generation (e.g. faceted filters) |
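Fixing redirect chains amounts to resolving each source URL to its final target, then rewriting internal links to point there directly. A minimal sketch, assuming the redirect map is exported from your server config:

```python
def flatten_redirects(redirects):
    """Resolve each source URL to its final destination.

    redirects: dict of {source: target} 301 mappings.
    Returns a flattened map; raises on redirect loops.
    """
    flat = {}
    for source in redirects:
        seen = {source}
        target = redirects[source]
        while target in redirects:  # follow the chain to its end
            if target in seen:
                raise ValueError(f"redirect loop at {target}")
            seen.add(target)
            target = redirects[target]
        flat[source] = target
    return flat
```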
Sitemap: Include only indexable, canonical URLs. See xml-sitemap, canonical-tag.
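The sitemap rule above can be sketched as a filter that emits only canonical, indexable URLs; the page-metadata shape is an assumption:

```python
def build_sitemap(pages):
    """Emit <url> entries only for indexable, canonical URLs.

    pages: list of dicts with 'url', 'noindex', and 'canonical' keys,
    where 'canonical' is the URL the page declares as canonical.
    Note: real sitemaps require absolute URLs; relative paths are
    used here for brevity.
    """
    urls = [
        p["url"]
        for p in pages
        if not p["noindex"] and p["canonical"] == p["url"]
    ]
    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n</urlset>"
    )
```

Parameter variants and noindexed thin pages are excluded, so crawl budget is not spent discovering URLs that will never be indexed.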
AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.) now generate crawl volume equal to roughly 28% of Googlebot's. Their behavior differs from search engine crawlers, so optimizing for both improves GEO (AI search visibility). See generative-engine-optimization for GEO strategy. Key findings from the Vercel/MERJ study (Dec 2024):
| Factor | AI Crawlers (GPTBot, Claude) | Googlebot |
|---|---|---|
| JavaScript | Do not execute JS; cannot read client-side rendered content | Full JS rendering |
| 404 rate | ~34% of fetches hit 404s | ~8% |
| Redirects | ~14% of fetches follow redirects | ~1.5% |
| Content in initial HTML | JSON, RSC in initial response can be indexed | Same |
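Because the AI crawlers in the table above do not execute JavaScript, a useful regression check is asserting that critical content appears in the raw initial HTML (fetched without a browser). A minimal sketch:

```python
def missing_from_initial_html(html, critical_phrases):
    """Return the critical phrases absent from the server-rendered HTML.

    A non-empty result suggests the content is injected client-side and
    is invisible to non-JS crawlers such as GPTBot or ClaudeBot.
    """
    return [p for p in critical_phrases if p not in html]
```

In practice, fetch each key URL with a plain HTTP client (no JS execution) and run this check in CI against the phrases that must be crawlable.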
Recommendations for AI crawlability:
| Practice | Action |
|---|---|
| Server-side rendering | Critical content in initial HTML. Use SSR, ISR, or SSG. See rendering-strategies for full guide. |
| URL management | Keep sitemaps updated; use consistent URL patterns; avoid outdated /static/ assets that cause 404s. AI crawlers frequently hit outdated URLs. |
| Redirects | Fix redirect chains; point directly to final URL. AI crawlers waste ~14% of fetches on redirects. |
| 404 handling | Fix broken links; remove or redirect outdated URLs. High 404 rates suggest AI crawlers may use stale URL lists. |
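The 404 and redirect rates above can be monitored from your own access logs. A minimal sketch that aggregates status classes per user agent; the log-record shape is an assumption:

```python
from collections import Counter, defaultdict

def crawler_error_rates(records):
    """Compute per-agent 404 and redirect rates from parsed log records.

    records: iterable of (user_agent, status_code) tuples.
    Returns {agent: {"404_rate": float, "redirect_rate": float}}.
    """
    counts = defaultdict(Counter)
    for agent, status in records:
        counts[agent]["total"] += 1
        if status == 404:
            counts[agent]["404"] += 1
        elif 300 <= status < 400:
            counts[agent]["redirect"] += 1
    return {
        agent: {
            "404_rate": c["404"] / c["total"],
            "redirect_rate": c["redirect"] / c["total"],
        }
        for agent, c in counts.items()
    }
```

Comparing AI-crawler rates against Googlebot's highlights stale URLs and chains worth fixing first.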
Reference: The rise of the AI crawler (Vercel, 2024)
| Issue | Check |
|---|---|
| Redirect chains | Update links to point directly to final URL |
| Broken links | 301 or remove; audit internal and external |
| Orphan pages | Add internal links from hub or navigation; see internal-links for strategy |
| Infinite scroll | Provide paginated component pages; or replace with pagination for key content; see above |
| AI crawlers missing content | Ensure critical content in initial HTML; see rendering-strategies |