Technical SEO audit with GEO-specific checks — crawlability, indexability, security, performance, SSR, and AI crawler access
Technical SEO forms the foundation of both traditional search visibility and AI search citation. A technically broken site cannot be crawled, indexed, or cited by any platform. This skill audits 8 categories of technical health with specific attention to GEO requirements — most critically, server-side rendering (AI crawlers do not execute JavaScript) and AI crawler access (many sites inadvertently block AI crawlers in robots.txt).
## Crawlability

Fetch `https://[domain]/robots.txt` and verify:

- Valid `User-agent`, `Allow`, and `Disallow` directives
- A `Sitemap: https://[domain]/sitemap.xml` reference

Check robots.txt for directives targeting these AI crawlers:
| Crawler | User-Agent | Platform |
|---|---|---|
| GPTBot | GPTBot | ChatGPT / OpenAI |
| Google-Extended | Google-Extended | Gemini / Google AI training |
| Googlebot | Googlebot | Google Search + AI Overviews |
| Bingbot | bingbot | Bing Copilot + ChatGPT (via Bing) |
| PerplexityBot | PerplexityBot | Perplexity AI |
| ClaudeBot | ClaudeBot | Anthropic Claude |
| Amazonbot | Amazonbot | Alexa / Amazon AI |
| CCBot | CCBot | Common Crawl (used by many AI models) |
| FacebookBot | FacebookBot | Meta AI |
| Bytespider | Bytespider | TikTok / ByteDance AI |
| Applebot-Extended | Applebot-Extended | Apple Intelligence |
Scoring for AI crawler access:
Important nuance: Blocking Google-Extended does NOT block Googlebot. Google-Extended only controls AI training data usage, not search indexing. However, blocking Google-Extended may reduce presence in AI Overviews. Recommend allowing Google-Extended unless there is a specific data licensing concern.
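The AI-crawler access check can be sketched with Python's standard `urllib.robotparser`. The robots.txt sample and `example.com` domain below are illustrative, and the crawler list is a subset of the table above:

```python
from urllib.robotparser import RobotFileParser

# AI crawler user-agents to audit (subset of the table above)
AI_CRAWLERS = [
    "GPTBot", "Google-Extended", "PerplexityBot", "ClaudeBot",
    "Amazonbot", "CCBot", "Bytespider", "Applebot-Extended",
]

def audit_ai_crawler_access(robots_txt: str, site: str = "https://example.com"):
    """Return {crawler: 'Allowed' | 'Blocked'} for the site root."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {
        bot: "Allowed" if rp.can_fetch(bot, site + "/") else "Blocked"
        for bot in AI_CRAWLERS
    }

# Example robots.txt that blocks only GPTBot
robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(audit_ai_crawler_access(robots))
```

For a real audit, fetch the live robots.txt first and run every page template through the same check, not just the root.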
Also verify:

- XML sitemap at a standard location (`/sitemap.xml`, `/sitemap_index.xml`)
- `<lastmod>` dates (should be present and accurate)
- No `<meta name="robots" content="noindex">` on pages that SHOULD be indexed
- No `X-Robots-Tag: noindex` HTTP headers on indexable pages

Category Scoring:
| Check | Points |
|---|---|
| robots.txt valid and complete | 3 |
| AI crawlers allowed | 5 |
| XML sitemap present and valid | 3 |
| Crawl depth within 3 clicks | 2 |
| No erroneous noindex directives | 2 |
## Indexability

- `<link rel="canonical" href="...">` tag correct on every page
- No URL parameters (e.g. `?sort=price` creating duplicate pages)
- Pagination: `rel="next"` / `rel="prev"` (note: Google ignores these as of 2019, but Bing still uses them)
- `rel="canonical"` handling on paginated pages (pointing to a view-all page or the first page)
- `<link rel="alternate" hreflang="xx">` tags on international sites
- No index bloat (compare indexed pages against a `site:domain.com` estimate)

Category Scoring:
| Check | Points |
|---|---|
| Canonical tags correct on all pages | 3 |
| No duplicate content issues | 3 |
| Pagination handled correctly | 2 |
| Hreflang correct (if applicable) | 2 |
| No index bloat | 2 |
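The canonical-tag check can be sketched with the standard-library `html.parser`; the helper and status labels below are illustrative, not a fixed API:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect href values of <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

def check_canonical(html: str, page_url: str) -> str:
    finder = CanonicalFinder()
    finder.feed(html)
    if not finder.canonicals:
        return "missing"
    if len(finder.canonicals) > 1:
        return "conflicting"  # multiple canonicals: engines may ignore all of them
    return "self" if finder.canonicals[0] == page_url else "points-elsewhere"

html = '<head><link rel="canonical" href="https://example.com/blog/seo-guide"></head>'
print(check_canonical(html, "https://example.com/blog/seo-guide"))
```

A "points-elsewhere" result is not automatically wrong (parameter variants should canonicalize to the clean URL), but it should match the intended canonical.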
## Security

Check HTTP response headers for:
| Header | Required Value | Purpose |
|---|---|---|
| Strict-Transport-Security | max-age=31536000; includeSubDomains | Forces HTTPS |
| Content-Security-Policy | Appropriate policy | Prevents XSS |
| X-Content-Type-Options | nosniff | Prevents MIME sniffing |
| X-Frame-Options | DENY or SAMEORIGIN | Prevents clickjacking |
| Referrer-Policy | strict-origin-when-cross-origin or stricter | Controls referrer data |
| Permissions-Policy | Appropriate restrictions | Controls browser features |
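A sketch of the header audit, assuming the response headers have already been fetched into a dict; the helper name and predicates are illustrative:

```python
# Map each required security header to a predicate on its value.
REQUIRED = {
    "Strict-Transport-Security": lambda v: "max-age=" in v,
    "X-Content-Type-Options": lambda v: v.strip().lower() == "nosniff",
    "X-Frame-Options": lambda v: v.strip().upper() in ("DENY", "SAMEORIGIN"),
    "Referrer-Policy": lambda v: bool(v.strip()),
    "Content-Security-Policy": lambda v: bool(v.strip()),
}

def audit_security_headers(headers: dict) -> list:
    """Return the names of missing or misconfigured security headers."""
    lower = {k.lower(): v for k, v in headers.items()}  # headers are case-insensitive
    return [
        name for name, ok in REQUIRED.items()
        if name.lower() not in lower or not ok(lower[name.lower()])
    ]

headers = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
}
print(audit_security_headers(headers))
```

Note the predicates only check presence and basic shape; evaluating whether a Content-Security-Policy is actually "appropriate" still requires manual review.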
Category Scoring:
| Check | Points |
|---|---|
| HTTPS enforced with valid cert | 4 |
| HSTS header present | 2 |
| X-Content-Type-Options | 1 |
| X-Frame-Options | 1 |
| Referrer-Policy | 1 |
| Content-Security-Policy | 1 |
## URL Structure

- Clean, readable URLs: `/blog/seo-guide`, not `/blog?id=12345`
- Logical hierarchy: `/category/subcategory/page`
- No redirect chains (max 1 hop)
- Parameter handling: canonical tags or robots.txt `Disallow` rules for parameter variations

Category Scoring:
| Check | Points |
|---|---|
| Clean, readable URLs | 2 |
| Logical hierarchy | 2 |
| No redirect chains (max 1 hop) | 2 |
| Parameter handling configured | 2 |
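The redirect-chain check can be sketched as pure logic over the hops already observed while following a URL (no network access here; `count_redirect_hops` is a hypothetical helper):

```python
def count_redirect_hops(hops: list) -> int:
    """Count redirect hops in a (status_code, location) sequence.

    A healthy URL needs at most one hop; two or more is a chain
    that wastes crawl budget and dilutes link equity.
    """
    return sum(1 for status, _ in hops if status in (301, 302, 307, 308))

# Example: http -> https -> trailing slash -> final page = 2 hops (a chain)
chain = [
    (301, "https://example.com/page"),
    (301, "https://example.com/page/"),
    (200, None),
]
print(count_redirect_hops(chain))
```

When a chain is found, the fix is to point the first URL directly at the final destination with a single 301.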
## Mobile Optimization

As of July 2024, Google crawls ALL sites exclusively with mobile Googlebot. There is no desktop crawling. If your site does not work on mobile, it does not work for Google. Period.
Check for the viewport meta tag: `<meta name="viewport" content="width=device-width, initial-scale=1">`

Category Scoring:
| Check | Points |
|---|---|
| Viewport meta tag correct | 3 |
| Responsive layout (no horizontal scroll) | 3 |
| Tap targets appropriately sized | 2 |
| Font sizes legible | 2 |
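The viewport check can be approximated with a regex. This is a simplification, not a full HTML parse, and it assumes the common `name`-before-`content` attribute order:

```python
import re

def has_responsive_viewport(html: str) -> bool:
    """True if a viewport meta tag with width=device-width is present."""
    m = re.search(
        r'<meta[^>]+name=["\']viewport["\'][^>]*content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return bool(m) and "width=device-width" in m.group(1)

page = '<head><meta name="viewport" content="width=device-width, initial-scale=1"></head>'
print(has_responsive_viewport(page))
```

Tap-target sizing and font legibility cannot be checked from raw HTML alone; those require rendered-page tooling such as Lighthouse.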
## Core Web Vitals

Core Web Vitals use the 75th percentile of real user data (field data) as the benchmark. Lab data is useful for debugging but field data determines the ranking signal.
| Metric | Good | Needs Improvement | Poor | Notes |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | < 2.5s | 2.5s - 4.0s | > 4.0s | Measures loading — time until largest visible element renders |
| INP (Interaction to Next Paint) | < 200ms | 200ms - 500ms | > 500ms | Replaced FID in March 2024. Measures ALL interactions, not just first |
| CLS (Cumulative Layout Shift) | < 0.1 | 0.1 - 0.25 | > 0.25 | Measures visual stability — unexpected layout movements |
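The thresholds in the table above can be encoded as a small classifier (the helper names are illustrative):

```python
# (good_max, poor_min) per metric, from the table above.
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # seconds
    "INP": (200, 500),   # milliseconds
    "CLS": (0.1, 0.25),  # unitless
}

def classify(metric: str, value: float) -> str:
    """Map a metric value to Good / Needs Improvement / Poor."""
    good, poor = THRESHOLDS[metric]
    if value < good:
        return "Good"
    if value > poor:
        return "Poor"
    return "Needs Improvement"

print(classify("LCP", 3.1))
print(classify("INP", 150))
print(classify("CLS", 0.3))
```

Remember to classify the 75th-percentile field value per metric, not a single lab run.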
When real user data is unavailable, estimate from page characteristics:
Common optimizations by metric:

- LCP: preload the critical resource with `<link rel="preload">`
- INP: defer non-urgent work with `requestIdleCallback` or `scheduler.yield()`; apply `content-visibility: auto` to off-screen content
- CLS: set `width` and `height` attributes on images and videos; reserve space with `aspect-ratio` or explicit dimensions; use `font-display: swap` with size-adjusted fallback fonts

Category Scoring:
| Check | Points |
|---|---|
| LCP < 2.5s | 5 |
| INP < 200ms | 5 |
| CLS < 0.1 | 5 |
## Server-Side Rendering

AI crawlers (GPTBot, PerplexityBot, ClaudeBot, etc.) do NOT execute JavaScript. They fetch the raw HTML and parse it. If your content is rendered client-side by React, Vue, Angular, or any other JavaScript framework, AI crawlers see an empty page.
Even Googlebot, which does execute JavaScript, deprioritizes JS-rendered content due to the additional crawl budget required. Google processes JS rendering in a separate "rendering queue" that can delay indexing by days or weeks.
Test by fetching the raw HTML with `curl -s [URL]` and checking whether the main content, meta tags, and internal links are present without JavaScript execution.

If content is missing from the raw HTML, recommend a framework-appropriate SSR solution:

| Framework | SSR Solution |
|---|---|
| React | Next.js (SSR/SSG), Remix, Gatsby (SSG) |
| Vue | Nuxt.js (SSR/SSG) |
| Angular | Angular Universal |
| Svelte | SvelteKit |
| Generic | Prerender.io (prerendering service), Rendertron |
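A rough heuristic for spotting a client-rendered shell in raw HTML: strip tags and scripts, then measure how much visible text remains. The 200-character cutoff is an arbitrary assumption, not a standard:

```python
import re

def looks_client_rendered(raw_html: str, min_text_chars: int = 200) -> bool:
    """Heuristic: an SPA shell like <div id="root"></div> yields
    almost no visible text once tags and scripts are stripped."""
    body = re.sub(
        r"<(script|style)[^>]*>.*?</\1>", " ", raw_html,
        flags=re.DOTALL | re.IGNORECASE,
    )
    text = re.sub(r"<[^>]+>", " ", body)      # drop remaining tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return len(text) < min_text_chars

spa = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
ssr = "<html><body><article>" + "Real content. " * 30 + "</article></body></html>"
print(looks_client_rendered(spa))
print(looks_client_rendered(ssr))
```

The definitive test remains a side-by-side diff of the `curl` output against the browser-rendered DOM.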
Category Scoring:
| Check | Points |
|---|---|
| Main content in raw HTML | 8 |
| Meta tags + structured data in raw HTML | 4 |
| Internal links in raw HTML | 3 |
## Page Speed & Server

Measure TTFB: `curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' [URL]`

Checks:

- Images lazy-loaded with `loading="lazy"`
- Scripts loaded with `async` or `defer` (no render-blocking scripts in `<head>`)
- `Cache-Control` headers on static resources (images, CSS, JS): `max-age=31536000` (1 year) with content-hashed filenames; HTML with `no-cache` and validation (`ETag` or `Last-Modified`)
- CDN detection via response headers: `CF-Ray` (Cloudflare), `X-Cache` (AWS CloudFront), `X-Served-By` (Fastly)

Category Scoring:
| Check | Points |
|---|---|
| TTFB < 800ms | 3 |
| Page weight < 2MB | 2 |
| Images optimized (format, size, lazy) | 3 |
| JS bundles reasonable (< 200KB compressed) | 2 |
| Compression enabled (gzip/brotli) | 2 |
| Cache headers on static resources | 2 |
| CDN in use | 1 |
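The cache-header classification above can be sketched as follows; the labels and helper name are illustrative:

```python
def cache_audit(content_type: str, cache_control: str) -> str:
    """Classify a resource's caching: static assets want a 1-year
    max-age; HTML wants no-cache with revalidation."""
    directives = {}
    for part in cache_control.split(","):
        k, _, v = part.strip().partition("=")
        directives[k.lower()] = v

    if content_type == "text/html":
        return "ok" if "no-cache" in directives else "html-should-revalidate"

    max_age = int(directives.get("max-age") or 0)
    if max_age >= 31536000:
        return "ok"
    return "short-max-age" if max_age else "uncached"

print(cache_audit("text/css", "public, max-age=31536000, immutable"))
print(cache_audit("text/html", "max-age=3600"))
```

Long-lived `max-age` values are only safe when filenames are content-hashed, so flag a 1-year max-age on an unhashed filename separately during manual review.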
## IndexNow

IndexNow is an open protocol that allows websites to notify search engines instantly when content is created, updated, or deleted. Supported by Bing, Yandex, Seznam, and Naver. Google does NOT support IndexNow but monitors the protocol.
ChatGPT uses Bing's index. Bing Copilot uses Bing's index. Faster Bing indexing means faster AI visibility on two major platforms.
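A sketch of an IndexNow submission payload. Per the IndexNow protocol, URLs are submitted as a JSON POST to `https://api.indexnow.org/indexnow`, and the key file defaults to the site root (`https://host/{key}.txt`) unless `keyLocation` points elsewhere. The host, key, and URL below are placeholders:

```python
import json

def build_indexnow_payload(host: str, key: str, urls: list) -> str:
    """Build the JSON body for an IndexNow batch submission."""
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # default root location
        "urlList": urls,
    }
    return json.dumps(payload)

body = build_indexnow_payload(
    "example.com", "abc123", ["https://example.com/blog/seo-guide"]
)
print(body)
# Submit with an HTTP POST, Content-Type: application/json; charset=utf-8
```

A 200 or 202 response means the submission was accepted; it does not guarantee indexing.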
Check for an IndexNow key file at `https://[domain]/.well-known/indexnow-key.txt` or similar.

## Scoring Summary

| Category | Max Points | Weight |
|---|---|---|
| Crawlability | 15 | Core foundation |
| Indexability | 12 | Core foundation |
| Security | 10 | Trust signal |
| URL Structure | 8 | Crawl efficiency |
| Mobile Optimization | 10 | Google requirement |
| Core Web Vitals | 15 | Ranking signal |
| Server-Side Rendering | 15 | GEO critical |
| Page Speed & Server | 15 | Performance |
| Total | 100 | |
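In the report, each category score maps to a Pass/Warn/Fail status at the 80% and 50% cutoffs; a minimal helper:

```python
def category_status(score: int, max_points: int) -> str:
    """Map a category score to Pass (>=80%), Warn (50-79%), or Fail (<50%)."""
    pct = 100 * score / max_points
    if pct >= 80:
        return "Pass"
    if pct >= 50:
        return "Warn"
    return "Fail"

print(category_status(12, 15))
print(category_status(7, 12))
print(category_status(4, 10))
```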
## Output

Generate GEO-TECHNICAL-AUDIT.md with:
# GEO Technical SEO Audit — [Domain]
Date: [Date]
## Technical Score: XX/100
## Score Breakdown
| Category | Score | Status |
|---|---|---|
| Crawlability | XX/15 | Pass/Warn/Fail |
| Indexability | XX/12 | Pass/Warn/Fail |
| Security | XX/10 | Pass/Warn/Fail |
| URL Structure | XX/8 | Pass/Warn/Fail |
| Mobile Optimization | XX/10 | Pass/Warn/Fail |
| Core Web Vitals | XX/15 | Pass/Warn/Fail |
| Server-Side Rendering | XX/15 | Pass/Warn/Fail |
| Page Speed & Server | XX/15 | Pass/Warn/Fail |
Status: Pass = 80%+ of category points, Warn = 50-79%, Fail = <50%
## AI Crawler Access
| Crawler | User-Agent | Status | Recommendation |
|---|---|---|---|
| GPTBot | GPTBot | Allowed/Blocked | [Action] |
| Googlebot | Googlebot | Allowed/Blocked | [Action] |
[Continue for all AI crawlers]
## Critical Issues (fix immediately)
[List with specific page URLs and what is wrong]
## Warnings (fix this month)
[List with details]
## Recommendations (optimize this quarter)
[List with details]
## Detailed Findings
[Per-category breakdown with evidence]