Captures baselines of SEO-critical page elements, including titles, meta descriptions, headings, schema, Open Graph tags, and Core Web Vitals; detects changes, tracks regressions, and shows change history per URL.
```shell
npx claudepluginhub agricidaniel/claude-seo --plugin claude-seo
```

This skill uses the workspace's default tool permissions.
Git for your SEO. Capture baselines, detect regressions, track changes over time.
| Command | Purpose |
|---|---|
| `/seo drift baseline <url>` | Capture current SEO state as a "known good" snapshot |
| `/seo drift compare <url>` | Compare current page state to stored baseline |
| `/seo drift history <url>` | Show change history and past comparisons |
Every baseline records these SEO-critical elements:
| Element | Field | Source |
|---|---|---|
| Title tag | title | parse_html.py |
| Meta description | meta_description | parse_html.py |
| Canonical URL | canonical | parse_html.py |
| Robots directives | meta_robots | parse_html.py |
| H1 headings | h1 (array) | parse_html.py |
| H2 headings | h2 (array) | parse_html.py |
| H3 headings | h3 (array) | parse_html.py |
| JSON-LD schema | schema (array) | parse_html.py |
| Open Graph tags | open_graph (dict) | parse_html.py |
| Core Web Vitals | cwv (dict) | pagespeed_check.py |
| HTTP status code | status_code | fetch_page.py |
| HTML content hash | html_hash (SHA-256) | Computed |
| Schema content hash | schema_hash (SHA-256) | Computed |
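The two computed hash fields let the comparison engine detect any change without storing full page copies. A minimal sketch of how they might be derived (the real scripts may normalize the inputs differently before hashing; `content_hashes` is a hypothetical helper name):

```python
import hashlib
import json

def content_hashes(html: str, schema_blocks: list) -> dict:
    """Compute SHA-256 fingerprints of the raw HTML and the JSON-LD blocks.

    Hypothetical sketch: the actual baseline scripts may apply extra
    normalization (whitespace, attribute ordering) before hashing.
    """
    html_hash = hashlib.sha256(html.encode("utf-8")).hexdigest()
    # Serialize JSON-LD deterministically so key order does not change the hash.
    schema_hash = hashlib.sha256(
        json.dumps(schema_blocks, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {"html_hash": html_hash, "schema_hash": schema_hash}
```

Sorting keys before hashing means two schema payloads that differ only in key order produce the same `schema_hash`, avoiding false-positive drift.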
The comparison engine applies 17 rules across 3 severity levels. Load
references/comparison-rules.md for the full rule set with thresholds,
recommended actions, and cross-skill references.
| Level | Meaning | Response Time |
|---|---|---|
| CRITICAL | SEO-breaking change, likely traffic loss | Immediate |
| WARNING | Potential impact, needs investigation | Within 1 week |
| INFO | Awareness only, may be intentional | Review at convenience |
All data is stored locally in SQLite at `~/.cache/claude-seo/drift/baselines.db`.
URL normalization ensures consistent matching: lowercase scheme/host, strip default ports (80/443), sort query parameters, remove UTM parameters, strip trailing slashes.
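The normalization rules above can be sketched with the standard library's `urllib.parse`; `normalize_url` is a hypothetical helper, not necessarily the name used in the scripts:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Sketch of the normalization described above: lowercase scheme/host,
    strip default ports, sort query params, drop utm_* params, strip
    trailing slashes. The real implementation may differ in edge cases."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    # Keep the port only if it is not the scheme's default (80/443).
    port = parts.port
    default = (scheme == "http" and port == 80) or (scheme == "https" and port == 443)
    if port and not default:
        host = f"{host}:{port}"
    # Drop UTM tracking params, sort the rest for stable ordering.
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not k.lower().startswith("utm_")
    ))
    path = parts.path.rstrip("/")
    return urlunsplit((scheme, host, path, query, ""))
```

With these rules, `HTTP://Example.COM:80/page/?b=2&a=1&utm_source=x` and `http://example.com/page?a=1&b=2` map to the same key, so a baseline and a later comparison always match the same stored record.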
### baseline

Captures the current state of a page and stores it.
Steps:
1. Validate the URL (`google_auth.validate_url()`)
2. Fetch the page via `scripts/fetch_page.py`
3. Parse the HTML via `scripts/parse_html.py`
4. Check Core Web Vitals via `scripts/pagespeed_check.py` (use `--skip-cwv` to skip)

Execution:
```shell
python scripts/drift_baseline.py <url>
python scripts/drift_baseline.py <url> --skip-cwv
```
Output: JSON with baseline ID, timestamp, URL, and summary of captured elements.
### compare

Fetches the current page state and diffs it against the most recent baseline.
Steps:
1. Load the most recent baseline for the URL (or a specific one via `--baseline-id`)
2. Fetch and parse the current page state
3. Apply the comparison rules and report any triggered findings

Execution:
```shell
python scripts/drift_compare.py <url>
python scripts/drift_compare.py <url> --baseline-id 5
python scripts/drift_compare.py <url> --skip-cwv
```
Output: JSON with all triggered rules, old/new values, severity, and actions.
After comparison, offer to generate an HTML report:
```shell
python scripts/drift_report.py <comparison_json_file> --output drift-report.html
```
### history

Shows all baselines and comparisons for a URL.
Execution:
```shell
python scripts/drift_history.py <url>
python scripts/drift_history.py <url> --limit 10
```
Output: JSON array of baselines (newest first) with timestamps and comparison summaries.
When drift is detected, recommend the appropriate specialized skill:
| Finding | Recommendation |
|---|---|
| Schema removed or modified | Run `/seo schema <url>` for full validation |
| CWV regression | Run `/seo technical <url>` for performance audit |
| Title or meta description changed | Run `/seo page <url>` for content analysis |
| Canonical changed or removed | Run `/seo technical <url>` for indexability check |
| Noindex added | Run `/seo technical <url>` for crawlability audit |
| H1/heading structure changed | Run `/seo content <url>` for E-E-A-T review |
| OG tags removed | Run `/seo page <url>` for social sharing analysis |
| Status code changed to error | Run `/seo technical <url>` for full diagnostics |
| Scenario | Action |
|---|---|
| URL unreachable | Report error from fetch_page.py. Do not guess state. Suggest user verify URL. |
| No baseline exists for URL | Inform user and suggest running baseline first. |
| SSRF blocked (private IP) | Report validate_url() rejection. Never bypass. |
| SQLite database missing | Auto-create on first use. No error. |
| CWV fetch fails (no API key) | Store null for CWV fields. Skip CWV rules during comparison. |
| Page returns 4xx/5xx | Still capture as baseline (status code IS a tracked field). |
| Multiple baselines exist | Use most recent unless --baseline-id specified. |
Security:

- All page fetches go through `scripts/fetch_page.py`, which enforces SSRF protection (blocks private IPs, loopback, reserved ranges, and GCP metadata endpoints)
- All SQL uses parameterized queries (`?`), never string interpolation
- No `verify=False` anywhere in the pipeline

Pre/post-deploy check:

```shell
/seo drift baseline https://example.com  # Before deploy
# ... deploy happens ...
/seo drift compare https://example.com   # After deploy
```
Ongoing monitoring:

```shell
/seo drift baseline https://example.com  # Initial capture
# ... weeks later ...
/seo drift compare https://example.com   # Check for drift
/seo drift history https://example.com   # Review all changes
```
Investigating an unexpected change:

```shell
/seo drift compare https://example.com   # What changed?
/seo drift history https://example.com   # When did it change?
```