From fat-agent
Audits deployed websites and web apps post-launch, cycling through Fix-Audit-Test phases to identify and resolve issues.
```bash
npx claudepluginhub spruikco/fat-agent-skill
```

This skill uses the workspace's default tool permissions.
A post-launch quality assurance agent that performs a comprehensive, systematic audit of deployed websites and guides users through fixing every issue found.
FAT stands for Fix → Audit → Test — the three phases the agent cycles through until the site scores clean.
Most post-launch issues fall into predictable categories. Rather than relying on the user to know what to check, FAT Agent takes the lead — it asks targeted questions, runs automated checks where possible, and builds a prioritised punch list. Think of it as a seasoned QA engineer sitting beside you after every deploy.
Activate FAT Agent when:
Before auditing anything, FAT Agent needs to understand the project. Ask the user for the following (skip anything already known from conversation context or memory):
- The live URL (e.g. https://example.com)

Present these as a friendly, concise intake form — not an interrogation. Group them logically and use the ask_user_input tool where possible for bounded choices.
Example opener:
Ready to run a FAT audit! I just need a few details to get started. What's the live URL, and what kind of site are we looking at?
Run checks in this exact order. For each category, use web_fetch on the live URL
and analyse the response. Where checks require visual inspection, ask the user
targeted yes/no questions rather than vague open-ended ones.
Fetch the HTML and check:
- `<title>` tag exists and is 50-60 characters (flag if < 30 or > 60)
- No duplicate `<title>` tags (common CMS/framework bug)
- `<meta name="description">` exists and is 150-160 characters (flag if < 70 or > 160)
- Exactly one `<h1>` per page, no empty heading tags
- No skipped heading levels (e.g. h1 → h3 missing h2)
- `<meta name="robots">` is not set to noindex (unless intentional)
- `<link rel="canonical">` is present and correct, no duplicate canonicals
- `<meta charset="UTF-8">` is present
- `<meta name="viewport">` has width=device-width (not just present — validated)
- Open Graph tags (og:title, og:description, og:image, og:url) are present
- og:image URL is captured for validation
- Structured data (`<script type="application/ld+json">`)
- `<link rel="alternate" hreflang="...">` tags for multi-language sites
- sitemap.xml exists (fetch /sitemap.xml)
- robots.txt exists and is sensible (fetch /robots.txt)
- Favicon is present (`<link rel="icon">`)
- IndexNow key file (fetch /{key}.txt or look for IndexNow references in robots.txt). IndexNow notifies Bing/Yandex of content changes for faster indexing. If missing, flag as P2 and suggest adding a key file + robots.txt reference.
- Every page has a meta description — not just a title. Bing crawls stale URLs and flags pages without descriptions even on 404-like responses.
- For pages with `<meta name="robots" content="noindex">`, verify these are intentional (checkout, thank-you, admin pages = correct; SEO landing pages = wrong). Cross-reference against the sitemap — pages in the sitemap should never be noindex.
- `<title>` and `<h1>` share key terms. If they don't overlap at all, flag as P3 Low.
- No duplicate og: properties. Duplicate og:image or og:title tags confuse social sharing crawlers.
- robots.txt references the sitemap URL. If not, flag as P2.
- Internal links don't use nofollow. Flag internal links with nofollow as a mistake (it wastes link equity).
- Core Web Vitals via the PageSpeed Insights API: fetch https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={URL}&strategy=mobile (no API key needed for basic usage). Extract LCP, CLS, INP/FID, FCP, TTFB, Speed Index. Flag any metric in "poor" range as P1, "needs improvement" as P2. Fetch strategy=desktop for comparison. Display as a CWV summary table.

Ask the user:
SPA / Client-Side Rendering Caveat:
Modern frameworks (Next.js, Nuxt, React, Angular, Svelte, Astro) often render
content client-side after hydration. The analyse-html.py script detects common
SPA indicators and automatically downgrades several checks when a framework is
detected:
- `<h1>` check: downgraded from P0 Critical to P1 High

Additionally, the script cannot see HTTP response headers from static HTML files.
Use --fetch --url <url> to make a live HTTP request and score security headers
(HSTS, CSP, X-Content-Type-Options, etc.). Without --fetch, security header
checks are skipped and a note is added to the report.
When an SPA is detected, recommend the user verify in DevTools or using browser automation tools rather than treating server-HTML-only findings as hard failures.
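The actual heuristics live in analyse-html.py; the idea can be sketched as follows (the indicator list and function names here are illustrative, not the script's real ones):

```python
import re

# Illustrative framework fingerprints found in server-rendered HTML
SPA_INDICATORS = {
    "next.js": r'id="__next"|__NEXT_DATA__',
    "nuxt": r'id="__nuxt"|window\.__NUXT__',
    "react": r'id="root"|data-reactroot',
    "angular": r'<app-root|ng-version=',
    "sveltekit": r'data-sveltekit',
    "astro": r'<astro-island',
}

def detect_spa(html: str) -> list[str]:
    """Return the framework fingerprints present in the raw HTML."""
    return [name for name, pattern in SPA_INDICATORS.items()
            if re.search(pattern, html)]

def adjust_priority(check: str, priority: str, frameworks: list[str]) -> str:
    """Downgrade selected checks when the page appears client-rendered."""
    downgrades = {"missing_h1": {"P0": "P1"}}  # per the caveat above
    if frameworks and check in downgrades:
        return downgrades[check].get(priority, priority)
    return priority
```

For example, `detect_spa('<div id="__next"></div>')` reports a Next.js shell, and a missing-`<h1>` finding then drops from P0 to P1.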
From the HTML response, check:
- Render-blocking scripts in `<head>`
- Images use loading="lazy" where appropriate
- Responsive images: srcset attributes, `<picture>` elements, modern formats (WebP/AVIF)
- Image width and height attributes present (prevents CLS)
- `<link rel="preconnect">` or `<link rel="preload">` hints
- Font loading: font-display: swap in inline styles, Google Fonts preconnect, font preloads

Performance Budgets:
If a .fat-budget.json file exists in the project root, use it to check custom
thresholds. Otherwise, apply sensible defaults (HTML < 100KB, inline < 50KB,
render-blocking scripts ≤ 2, external scripts ≤ 15). See references/performance-budgets.md
for configuration details.
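A small loader can merge a project budget over those defaults. The JSON key names below are assumptions for illustration — the canonical schema is in references/performance-budgets.md:

```python
import json
from pathlib import Path

# Defaults from this section; the key names are illustrative assumptions
DEFAULT_BUDGET = {
    "html_kb": 100,          # HTML < 100KB
    "inline_kb": 50,         # inline scripts/styles < 50KB
    "render_blocking": 2,    # render-blocking scripts <= 2
    "external_scripts": 15,  # external scripts <= 15
}

def load_budget(project_root: str = ".") -> dict:
    """Merge .fat-budget.json over the defaults if the file exists."""
    budget = dict(DEFAULT_BUDGET)
    path = Path(project_root) / ".fat-budget.json"
    if path.exists():
        budget.update(json.loads(path.read_text()))
    return budget

def check_budget(measured: dict, budget: dict) -> list[str]:
    """Return the budget keys the measured page exceeds."""
    return [key for key, limit in budget.items()
            if measured.get(key, 0) > limit]
```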
Ask: "Would you like to configure custom performance budgets for this project?"
Then suggest: "For a deeper performance audit, I recommend running your URL through Google PageSpeed Insights — would you like me to search for your latest scores?"
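Building the PageSpeed Insights request and pulling lab metrics can be sketched as below. The response excerpt is heavily abbreviated — real responses carry far more fields (see Google's PSI API reference for the full shape):

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_url(page_url: str, strategy: str = "mobile") -> str:
    """Build the PSI API request URL (no API key needed for light use)."""
    return f"{PSI_ENDPOINT}?{urlencode({'url': page_url, 'strategy': strategy})}"

# Abbreviated sample of the response shape (illustrative, not complete)
SAMPLE_RESPONSE = {
    "lighthouseResult": {
        "audits": {
            "largest-contentful-paint": {"numericValue": 4200.0},
            "cumulative-layout-shift": {"numericValue": 0.31},
            "first-contentful-paint": {"numericValue": 1800.0},
        }
    }
}

def extract_lab_metrics(response: dict) -> dict:
    """Pull numeric lab metrics out of the Lighthouse audits."""
    audits = response["lighthouseResult"]["audits"]
    return {name: audit["numericValue"] for name, audit in audits.items()}
```

Fetch the same URL with `strategy="desktop"` for the comparison column of the CWV table.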
Check response headers for:
- Strict-Transport-Security (HSTS)
- X-Content-Type-Options: nosniff
- X-Frame-Options or Content-Security-Policy frame-ancestors
- Referrer-Policy
- Permissions-Policy

From the HTML, check:
- No http:// resources (images, scripts, stylesheets) loaded on an HTTPS page
- Links with target="_blank" have rel="noopener" (security + performance)

From the HTML, check:
- All `<img>` tags have alt attributes (not just present — non-empty and meaningful)
- Dynamic alt bindings (e.g. alt={product.name}) include fallbacks like alt={product.name || 'Product image'}. Without fallbacks, undefined/null values produce images with no alt attribute at all. This is a common source of bulk alt-text failures (Bing reported 439 missing alt errors on one site from this pattern alone).
- Images have width and height attributes (CLS prevention)
- Form inputs have `<label>` elements or aria-label
- `<html lang="...">` attribute is set
- Landmark elements are used (`<main>`, `<nav>`, `<header>`, `<footer>`)
- No empty heading tags (`<h2></h2>`, `<h3> </h3>`)
- Anchor links (href="#section") point to existing element IDs

Extended automated checks:
- No deprecated ARIA roles (role="directory" is deprecated)
- `<video autoplay>` and `<audio autoplay>` without muted (P1 High)
- user-scalable=no or maximum-scale=1 in viewport meta (P0 Critical — blocks assistive technology)
- `<a>` with role="button" (flag for review)
- `<table>` without `<th>` header cells
- `<svg>` without `<title>` or aria-label
- `<iframe>` without title attribute
- aria-describedby / aria-errormessage usage
- prefers-reduced-motion in inline styles/media queries
- Non-interactive elements (`<div>`, `<span>`) with interactive styling (hover effects, cursor:pointer, button/link CSS classes) that lack href, onclick, or appropriate ARIA roles. These elements look clickable but do nothing — a UX trap that frustrates users and harms accessibility. Flag as P1 High.

Ask the user:
These can't be automated — ask the user to verify:
Present these as a checklist with the ask_user_input tool, grouped into batches of 3-4 so it's not overwhelming.
Check the HTML for:
- `<link rel="manifest">` (web app manifest)
- `<meta name="theme-color">` for browser chrome theming
- `<link rel="apple-touch-icon">` for iOS home screen
- Service worker registration (navigator.serviceWorker.register)

Check the HTML for known analytics providers (detected automatically):
Run these checks conditionally based on the hosting platform identified in Phase 0. Only execute the section matching the user's declared platform. If the platform is unknown or not listed, run the Generic checks.
Netlify:
- _headers file in the deploy (ask user or inspect response headers)
- _redirects file or [[redirects]] in netlify.toml
- Netlify Forms — ask: "Is there a data-netlify="true" attribute on your form tag?"
- Fix reference: references/platform-fixes/netlify.md

Vercel:

- Response header fingerprints (x-vercel-id, server: Vercel)
- Ask: "Do you have a vercel.json with custom headers configured?"
- Middleware headers (x-middleware-*)
- Fix reference: references/platform-fixes/vercel.md

Cloudflare Pages:

- Response header fingerprints (cf-ray, server: cloudflare)
- Ask: "Do you have a _headers and _redirects file in your build output?"
- Fix reference: references/platform-fixes/cloudflare-pages.md

WordPress:

- /wp-admin/ accessibility (should redirect to login, not expose admin)
- /xmlrpc.php (should return 403 or 405, not 200)
- wp-json REST API exposure
- User enumeration via /?author=1
- Fix reference: references/platform-fixes/wordpress.md

Apache:

- Ask: "Do you have an .htaccess file with security headers?"
- Verify mod_rewrite is handling redirects correctly
- Server version disclosure (Server: Apache/x.x.x)
- Fix reference: references/platform-fixes/apache.md

Nginx:

- Server version disclosure (Server: nginx/x.x.x — should be hidden)
- try_files configuration (SPA routing)
- Fix reference: references/platform-fixes/nginx.md

AWS (S3/CloudFront/Amplify):

- CloudFront response headers (x-amz-cf-id, x-cache)
- Fix reference: references/platform-fixes/aws.md

Generic (any platform):
When auditing a site with multiple key pages (homepage, product pages, about, contact), use batch mode to analyse them all in one pass:
```bash
# Create a file with one URL per line
echo "https://example.com/" > urls.txt
echo "https://example.com/about" >> urls.txt
echo "https://example.com/products" >> urls.txt

# Run batch analysis
python scripts/analyse-html.py --batch urls.txt
```
The output is an aggregate JSON with pages_tested, pages_ok, pages_failed, and
a results array containing the full analysis report for each URL. Use this to
identify site-wide patterns (e.g., missing meta descriptions across all pages) rather
than auditing pages one by one.
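Detecting those site-wide patterns from the aggregate JSON is a short reduction. The `findings`/`check` field names inside each per-page report are assumptions about analyse-html.py's output; the top-level keys match the description above:

```python
from collections import Counter

def site_wide_patterns(batch: dict, threshold: float = 0.8) -> list[str]:
    """Return checks that fail on at least `threshold` of audited pages."""
    pages = batch["pages_tested"]
    counts = Counter(f["check"] for page in batch["results"]
                     for f in page.get("findings", []))
    return [check for check, n in counts.items() if n / pages >= threshold]

# Hypothetical batch output (shape assumed for illustration)
batch = {
    "pages_tested": 3, "pages_ok": 1, "pages_failed": 2,
    "results": [
        {"url": "https://example.com/", "findings": [{"check": "missing_meta_description"}]},
        {"url": "https://example.com/about", "findings": [{"check": "missing_meta_description"}]},
        {"url": "https://example.com/products", "findings": [{"check": "missing_meta_description"},
                                                             {"check": "missing_alt"}]},
    ],
}
```

Here `site_wide_patterns(batch)` would surface the missing meta description as a site-wide issue rather than three separate findings.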
After collecting external URLs from SEMrush backlink data or sitemap entries, verify they resolve correctly:
```bash
# Create a file with URLs to check (one per line or JSON array)
python scripts/analyse-html.py --check-urls urls.txt
```
The output is a JSON list of {url, status, final_url, redirected} for each URL.
Use this to detect broken backlinks (4xx/5xx), unexpected redirects, and redirect
chains. Flag any 4xx URLs as broken links that need fixing or redirect rules.
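Triaging that output takes a few lines. The record shape follows the {url, status, final_url, redirected} description above; whether `status` reflects the first or final response is an assumption here:

```python
def triage_links(results: list[dict]) -> dict:
    """Split URL-check results into broken links and redirected URLs."""
    return {
        "broken": [r["url"] for r in results if r["status"] >= 400],
        "redirected": [r["url"] for r in results
                       if r.get("redirected") and r["status"] < 400],
    }

# Sample records in the documented shape
checked = [
    {"url": "https://example.com/old", "status": 200,
     "final_url": "https://example.com/new", "redirected": True},
    {"url": "https://example.com/gone", "status": 404,
     "final_url": "https://example.com/gone", "redirected": False},
]
```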
After completing all audit checks, compile a FAT Report — a prioritised list of findings.
| Priority | Label | Meaning |
|---|---|---|
| 🔴 P0 | Critical | Site is broken, inaccessible, or insecure |
| 🟠 P1 | High | Significant SEO, performance, or UX impact |
| 🟡 P2 | Medium | Best practice violations, minor issues |
| 🟢 P3 | Low | Nice-to-haves, polish items |
Present findings grouped by priority, with each item containing:
Example finding:
🟠 P1 — Missing meta description

Your homepage has no `<meta name="description">` tag. Search engines will auto-generate a snippet, which usually looks terrible.

Fix: Add to your `<head>`:

```html
<meta name="description" content="Your compelling 155-character description here">
```

Effort: ⚡ 5 min
After presenting the report in the chat, ALWAYS generate Word (.docx) and PowerPoint (.pptx) reports using the Report & Chart Generation pipeline below. Then ask: "Want me to help fix any of these now? I can generate the code changes for the quick wins."
After fixes are applied and redeployed:
"Your site passed the FAT audit! All critical and high-priority items are resolved. Here's your final scorecard."
After presenting the final scorecard, regenerate the Word and PowerPoint reports with updated scores (re-run the Report & Chart Generation pipeline). Present a summary showing:
After presenting the final scorecard, generate a FAT badge and offer to add it to the project:
Generate the badge — pipe the scores through the badge generator:
```bash
python scripts/analyse-html.py page.html \
  | python scripts/calculate-score.py \
  | python scripts/generate-badge.py --image --output fat-badge.svg
```
Save fat-badge.svg to the project root directory.
Offer to update the README — ask the user:
"Want me to add your FAT score badge to the README?"
If yes, insert the badge image reference near the top of the project's README (after the title/heading, before the description). Use the format:

If the README already has a FAT badge reference, replace it (the score may have changed). Don't duplicate it.
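An insert-or-replace along these lines keeps the badge idempotent (a sketch that keys on the fat-badge.svg filename; adapt to the README's actual layout):

```python
BADGE_LINE = "![FAT Score](./fat-badge.svg)"

def upsert_badge(readme: str) -> str:
    """Insert the badge after the first heading, or replace an existing one."""
    lines = readme.splitlines()
    # Replace any existing badge reference in place (score may have changed)
    for i, line in enumerate(lines):
        if "fat-badge.svg" in line:
            lines[i] = BADGE_LINE
            return "\n".join(lines)
    # Otherwise insert after the first heading (or at the top if none)
    for i, line in enumerate(lines):
        if line.startswith("#"):
            lines.insert(i + 1, "")
            lines.insert(i + 2, BADGE_LINE)
            return "\n".join(lines)
    return BADGE_LINE + "\n" + readme
```

Running it twice leaves a single badge reference, so re-audits never duplicate the image.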
Offer to commit — ask the user:
"Want me to commit the badge and README update?"
If yes, stage fat-badge.svg and README.md, and commit with a message like:
Add FAT audit badge — <grade> <score>/100
The badge includes the FAT Agent character with the overall grade bar and a colour-coded category breakdown (SEO, Security, A11y, Perf). It uses a compact 128px icon (~23KB) so the SVG stays under ~35KB — safe for version control.
If the user declines the badge, skip it and move on. Don't push it.
After presenting the final scorecard, save the results to the audit history:
Save to history — Run:
```bash
python scripts/track-history.py --save scores.json --url <URL>
```
This appends the current scores to .fat-history.json in the project root.
Show comparison — On subsequent audits, load history and show improvement:
```bash
python scripts/track-history.py --diff
```
Example: "Your SEO score improved from 72 to 91 (+19) since the last audit on 14 March"
Show trend — Display score trajectory:
```bash
python scripts/track-history.py --trend
```
Offer to commit — Ask if the user wants to commit .fat-history.json so the
team can see audit history tracked in version control.
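The diff boils down to subtracting the two most recent entries per category. The .fat-history.json schema below is an assumption for illustration — the real shape is whatever track-history.py writes:

```python
def score_diff(history: list[dict]) -> dict:
    """Per-category change between the two most recent audit entries."""
    if len(history) < 2:
        return {}
    prev, curr = history[-2]["scores"], history[-1]["scores"]
    return {cat: curr[cat] - prev[cat] for cat in curr if cat in prev}

# Hypothetical history entries (schema assumed)
history = [
    {"date": "2025-03-14", "scores": {"seo": 72, "security": 100}},
    {"date": "2025-04-02", "scores": {"seo": 91, "security": 100}},
]
```

With this sample, the SEO delta is +19, matching the "72 to 91" phrasing used in the comparison message above.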
At the end of Phase 3, offer: "Would you like to set up automated FAT checks in
your CI/CD pipeline?" If yes, load references/ci-cd-integration.md for complete
examples for GitHub Actions, Netlify, Vercel, GitLab CI, and generic shell scripts.
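Whatever the CI system, the heart of each example is a gate that fails the build below a score threshold. A minimal sketch — the "overall" key in scores.json is an assumption about calculate-score.py's output:

```python
import json

def gate(scores_path: str, minimum: int = 80) -> int:
    """Return a process exit code: 0 if the overall score meets the bar.

    Assumes calculate-score.py writes an "overall" field; swap in the
    actual key name if it differs.
    """
    with open(scores_path) as f:
        scores = json.load(f)
    overall = scores.get("overall", 0)
    if overall < minimum:
        print(f"FAT gate failed: {overall} < {minimum}")
        return 1
    print(f"FAT gate passed: {overall} >= {minimum}")
    return 0
```

In CI, run the analyse/score pipeline first, then `sys.exit(gate("/tmp/scores.json", 80))` so a regression fails the job.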
Trigger: User says "compare my site with [competitor URL]" or "competitive analysis"
When triggered:
- Run analyse-html.py + calculate-score.py on each site
- Present a side-by-side comparison:

| Category | Your Site | Competitor | Delta |
|---------------|-----------|------------|-------|
| SEO | 85 | 92 | -7 |
| Security | 100 | 65 | +35 |
| Accessibility | 90 | 78 | +12 |
| Performance | 72 | 88 | -16 |
| Overall | 87 | 81 | +6 |
Note: The competitive comparison uses the same automated HTML analysis. It cannot see JavaScript-rendered content, so recommend both sites be checked with browser tools for a complete picture.
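The Delta column is a per-category subtraction, positive when your site leads. A sketch using the figures from the table above:

```python
def compare(yours: dict, competitor: dict) -> dict:
    """Per-category score delta, positive when your site leads."""
    return {cat: yours[cat] - competitor[cat]
            for cat in yours if cat in competitor}

yours = {"SEO": 85, "Security": 100, "Accessibility": 90, "Performance": 72, "Overall": 87}
theirs = {"SEO": 92, "Security": 65, "Accessibility": 78, "Performance": 88, "Overall": 81}
```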
For extended check details, see:
- references/security-headers.md — Full security header recommendations
- references/seo-checklist.md — Extended SEO audit criteria
- references/accessibility-guide.md — WCAG 2.1 quick reference
- scripts/analyse-html.py — HTML analysis helper (extracts meta tags, headers, scripts)
- scripts/calculate-score.py — Scoring calculator (SEO, Security, Accessibility, FAT Score)
- scripts/generate-badge.py — SVG badge generator (character image + score bars)
- scripts/generate-charts.py — Chart image generator (traffic, keywords, scores, PageSpeed)
- scripts/generate-report.py — Word + PowerPoint report generator (branded, with charts)
- scripts/track-history.py — Historical audit tracker (read/write .fat-history.json)
- references/performance-budgets.md — Performance budget configuration guide
- references/ci-cd-integration.md — CI/CD integration examples (GitHub Actions, Netlify, Vercel, etc.)

Load the relevant file based on the hosting platform from Phase 0:
- references/platform-fixes/netlify.md — Netlify config (_headers, netlify.toml, Forms)
- references/platform-fixes/vercel.md — Vercel config (vercel.json, middleware)
- references/platform-fixes/cloudflare-pages.md — Cloudflare Pages (_headers, Workers)
- references/platform-fixes/apache.md — Apache config (.htaccess, mod_rewrite)
- references/platform-fixes/nginx.md — Nginx config (server blocks, add_header)
- references/platform-fixes/wordpress.md — WordPress config (wp-config.php, plugins)
- references/platform-fixes/aws.md — AWS config (CloudFront, S3, Amplify)

Load the relevant file based on the tech stack from Phase 0:
- references/framework-fixes/nextjs.md — Next.js (App Router + Pages Router)
- references/framework-fixes/astro.md — Astro (islands, content collections)
- references/framework-fixes/sveltekit.md — SvelteKit (load functions, adapters)
- references/framework-fixes/nuxt.md — Nuxt 3 (useHead, useSeoMeta)
- references/framework-fixes/gatsby.md — Gatsby (Head API, gatsby-plugin-image)
- references/framework-fixes/wordpress.md — WordPress themes (functions.php, hooks)
- references/framework-fixes/static-html.md — Static HTML/CSS/JS (no framework)

IMPORTANT: After completing Phase 2 (FIX report), ALWAYS generate Word and PowerPoint reports. Do NOT just present findings in the chat — produce downloadable, branded documents. This is a core deliverable of every FAT audit.
The FAT Agent brand image is bundled at assets/fat-agent-brand.png. Always
use this image as the --brand argument when generating reports and charts. It
appears on:
Resolve the path relative to the plugin directory. For example:
```python
import os

PLUGIN_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BRAND_IMAGE = os.path.join(PLUGIN_DIR, 'assets', 'fat-agent-brand.png')
```
After Phase 2 findings are compiled:
Install dependencies (if not already available):
```bash
pip install matplotlib python-docx python-pptx Pillow
```
Save the scored JSON to a temp file:
```bash
python scripts/analyse-html.py --headers headers.json page.html | \
  python scripts/calculate-score.py > /tmp/scores.json
```
Generate charts from the scored data (and optional SEMrush data):
```bash
python scripts/generate-charts.py \
  --scores /tmp/scores.json \
  --semrush /tmp/semrush.json \
  --output-dir /tmp/charts \
  --font "Plus Jakarta Sans"
```
Generate reports with branding and embedded charts:
```bash
python scripts/generate-report.py \
  --scores /tmp/scores.json \
  --semrush /tmp/semrush.json \
  --url example.com \
  --charts-dir /tmp/charts \
  --brand assets/fat-agent-brand.png \
  --output-dir ./reports \
  --font "Plus Jakarta Sans"
```
Tell the user where the reports are saved and offer to open them.
| Chart | File | Data Source |
|---|---|---|
| FAT score bars + issues donut | chart_fat_scores.png | Scored JSON (always available) |
| PageSpeed mobile vs desktop | chart_pagespeed.png | Scored JSON + PageSpeed data |
| Organic traffic over time | chart_traffic_trend.png | SEMrush data (if provided) |
| Keywords trend + SERP distribution | chart_keywords_trend.png | SEMrush data (if provided) |
| Top keywords by volume | chart_top_keywords.png | SEMrush data (if provided) |
| Domain metrics dashboard | chart_overview.png | SEMrush data (if provided) |
Charts that require SEMrush data are automatically skipped if no --semrush
file is provided. The chart_fat_scores.png chart is always generated.
When browser automation tools are available, collect SEMrush data by:
- Navigating to semrush.com/analytics/overview/?q={domain}&searchType=domain
- Saving the data in the JSON format described in the generate-charts.py docstring

If browser automation is not available, skip SEMrush charts — the report will still include the FAT score chart and all audit findings tables.
Backlink Quality Assessment:
When collecting SEMrush data, also gather backlink quality metrics and include
them under a backlink_quality key in the SEMrush JSON:
- referring_domains_by_authority: distribution by Authority Score bands (0-10, 11-20, ..., 91-100). Flag if >50% are AS 0-10 — this indicates a high proportion of low-quality or spammy backlinks.
- referring_domains_by_country: distribution by country. Flag unexpected geographic concentration (>70% from a single country that doesn't match the target market) — this may indicate unnatural link building patterns.

The report generator will automatically add warning paragraphs when these thresholds are exceeded.
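Both thresholds can be checked mechanically; the key names follow the backlink_quality schema described above, with the band labels as assumptions:

```python
def backlink_warnings(bq: dict, target_country: str) -> list[str]:
    """Apply the >50% low-authority and >70% single-country thresholds."""
    warnings = []
    by_as = bq["referring_domains_by_authority"]
    total = sum(by_as.values())
    if total and by_as.get("0-10", 0) / total > 0.5:
        warnings.append("over half of referring domains are Authority Score 0-10")
    by_country = bq["referring_domains_by_country"]
    ctotal = sum(by_country.values())
    for country, n in by_country.items():
        if ctotal and n / ctotal > 0.7 and country != target_country:
            warnings.append(f"{country} accounts for >70% of referring domains")
    return warnings

# Hypothetical backlink_quality payload
bq = {
    "referring_domains_by_authority": {"0-10": 120, "11-20": 40, "21-30": 20},
    "referring_domains_by_country": {"US": 30, "CN": 150},
}
```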
Word report (.docx) includes:
PowerPoint (.pptx) includes:
Use Plus Jakarta Sans as the default font. Pass --font "Plus Jakarta Sans"
to both generate-charts.py and generate-report.py. If the font is not
installed on the system, the scripts fall back to Calibri, then system sans-serif.
```bash
# 1. Fetch the page and save headers (raw header dump, fed to --headers below)
curl -sI https://example.com > /tmp/headers.json
curl -sL https://example.com -o /tmp/page.html

# 2. Analyse and score
python scripts/analyse-html.py --headers /tmp/headers.json /tmp/page.html | \
  python scripts/calculate-score.py > /tmp/scores.json

# 3. Generate charts (with optional SEMrush data)
python scripts/generate-charts.py \
  --scores /tmp/scores.json \
  --semrush /tmp/semrush.json \
  --output-dir /tmp/charts

# 4. Generate branded reports
python scripts/generate-report.py \
  --scores /tmp/scores.json \
  --semrush /tmp/semrush.json \
  --url example.com \
  --charts-dir /tmp/charts \
  --brand assets/fat-agent-brand.png \
  --output-dir ./reports

# 5. Generate badge (for README)
cat /tmp/scores.json | python scripts/generate-badge.py --image --output fat-badge.svg
```