```shell
npx claudepluginhub fortunto2/solo-factory --plugin solo
```

This skill is limited to using the following tools:
SEO health check for any URL or project landing page. Fetches the page, analyzes meta tags, OG, JSON-LD, sitemap, robots.txt, checks SERP positions for target keywords, and outputs a scored report.
Generates design tokens/docs from CSS/Tailwind/styled-components codebases, audits visual consistency across 10 dimensions, detects AI slop in UI.
Records polished WebM UI demo videos of web apps using Playwright with cursor overlay, natural pacing, and three-phase scripting. Activates for demo, walkthrough, screen recording, or tutorial requests.
Delivers idiomatic Kotlin patterns for null safety, immutability, sealed classes, coroutines, Flows, extensions, DSL builders, and Gradle DSL. Use when writing, reviewing, refactoring, or designing Kotlin code.
Dedicated SEO CLI — repo: https://github.com/fortunto2/seo-cli
```shell
# Install (one-time)
git clone https://github.com/fortunto2/seo-cli.git ~/startups/shared/seo-cli
cd ~/startups/shared/seo-cli && uv venv && uv pip install -e .
cp config.example.yaml config.yaml  # add credentials (GSC, Bing, Yandex, IndexNow)

# Detect seo-cli
python ~/startups/shared/seo-cli/cli.py --help 2>/dev/null
```
If available, prefer seo-cli commands over manual WebFetch:
- `python ~/startups/shared/seo-cli/cli.py audit <url>` — deep page audit (meta, OG, JSON-LD, keywords, readability)
- `python ~/startups/shared/seo-cli/cli.py status` — dashboard for all registered sites
- `python ~/startups/shared/seo-cli/cli.py analytics <site>` — GSC data (queries, impressions, CTR)
- `python ~/startups/shared/seo-cli/cli.py monitor <site>` — position tracking with snapshots
- `python ~/startups/shared/seo-cli/cli.py reindex <url>` — instant reindex via Google Indexing API + IndexNow
- `python ~/startups/shared/seo-cli/cli.py competitors <keyword>` — SERP + competitor analysis
- `python ~/startups/shared/seo-cli/cli.py launch <site>` — full new site promotion workflow

If seo-cli is not available, fall back to the WebFetch-based audit below.
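The detect-then-prefer logic can be sketched in Python (a minimal sketch; `run_audit` is a hypothetical helper name, and the path matches the install step above):

```python
import subprocess
from pathlib import Path

# Location used by the one-time install step above.
SEO_CLI = Path.home() / "startups/shared/seo-cli/cli.py"

def run_audit(url: str) -> str:
    """Run a seo-cli audit when installed; otherwise signal the WebFetch fallback."""
    if SEO_CLI.exists():
        result = subprocess.run(
            ["python", str(SEO_CLI), "audit", url],
            capture_output=True, text=True,
        )
        return result.stdout
    return "FALLBACK: WebFetch-based audit"
```

The same pattern applies to the other subcommands: probe once, then route every call through the CLI or the fallback.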
- `web_search(query, engines, include_raw_content)` — SERP position check, competitor analysis
- `project_info(name)` — get project URL if auditing by project name

If MCP tools are not available, use Claude WebSearch/WebFetch as fallback.
Parse target from $ARGUMENTS.
If the target starts with `http`: use it directly. Otherwise treat it as a project name and resolve the URL from project docs (e.g. `docs/prd.md`). Fetch the page via WebFetch. Extract:
- `<title>` tag (length check: 50-60 chars ideal)
- `<meta name="description">` (length check: 150-160 chars ideal)
- `og:title`, `og:description`, `og:image`, `og:url`, `og:type`
- `twitter:card`, `twitter:title`, `twitter:image`
- JSON-LD structured data (`<script type="application/ld+json">`)
- `<link rel="canonical">` — canonical URL
- `<html lang="...">` — language tag
- `<link rel="alternate" hreflang="...">` — i18n tags

Check infrastructure files:
- `{origin}/sitemap.xml` — exists? Valid XML? Page count?
- `{origin}/robots.txt` — exists? Disallow rules? Sitemap reference?
- `{origin}/favicon.ico` — exists?

Forced reasoning — assess before scoring: write out your findings for each check before proceeding.
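The tag-extraction step can be sketched with the standard-library HTML parser (a minimal sketch of the title/description/OG checks; the page itself is normally fetched via WebFetch, and `MetaExtractor`/`check_lengths` are hypothetical helper names):

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <title>, meta description, and og:*/twitter:* tags from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}          # name/property -> content
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            key = a.get("name") or a.get("property")
            if key:
                self.meta[key] = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def check_lengths(title: str, description: str) -> dict:
    """Apply the ideal-length bands from the checklist above."""
    return {
        "title_ok": 50 <= len(title) <= 60,
        "description_ok": 150 <= len(description) <= 160,
    }
```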
SERP position check — for 3-5 keywords, use `web_search(query="{keyword}")` or WebSearch.

Score calculation (0-100):
| Check | Max Points | Criteria |
|---|---|---|
| Title tag | 10 | Exists, 50-60 chars, contains primary keyword |
| Meta description | 10 | Exists, 150-160 chars, compelling |
| OG tags | 10 | og:title, og:description, og:image all present |
| JSON-LD | 10 | Valid structured data present |
| Canonical | 5 | Present and correct |
| Sitemap | 10 | Exists, valid, referenced in robots.txt |
| Robots.txt | 5 | Exists, no overly broad Disallow |
| H1 structure | 5 | Exactly one H1, descriptive |
| HTTPS | 5 | Site uses HTTPS |
| Mobile meta | 5 | Viewport tag present |
| Language | 5 | lang attribute on <html> |
| Favicon | 5 | Exists |
| SERP presence | 15 | Found in top 10 for target keywords |
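The score calculation can be sketched as a weighted sum, assuming binary pass/fail per check (in practice a check can earn partial credit, e.g. a title that exists but falls outside the ideal length band):

```python
# Max points per check, mirroring the table above; weights sum to 100.
WEIGHTS = {
    "title": 10, "meta_description": 10, "og_tags": 10, "json_ld": 10,
    "canonical": 5, "sitemap": 10, "robots_txt": 5, "h1": 5,
    "https": 5, "mobile_meta": 5, "language": 5, "favicon": 5,
    "serp_presence": 15,
}

def score(results: dict) -> int:
    """Sum the weights of passed checks; failed or missing checks earn 0."""
    return sum(WEIGHTS[name] for name, passed in results.items() if passed)
```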
Write report to docs/seo-audit.md (in project context) or print to console:
# SEO Audit: {URL}
**Date:** {YYYY-MM-DD}
**Score:** {N}/100
## Summary
{2-3 sentence overview of SEO health}
## Checks
| Check | Status | Score | Details |
|-------|--------|-------|---------|
| Title | pass/fail | X/10 | "{actual title}" (N chars) |
| ... | ... | ... | ... |
## SERP Positions
| Keyword | Position | Top Competitors |
|---------|----------|----------------|
| {kw} | #N or N/A | competitor1, competitor2, competitor3 |
## Critical Issues
- {issue with fix recommendation}
## Recommendations (Top 3)
1. {highest impact fix}
2. {second priority}
3. {third priority}
Output summary — print score and top 3 recommendations.
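The `#N or N/A` SERP position in the report can be computed from an ordered list of result URLs (a sketch; `serp_position` is a hypothetical helper name):

```python
from urllib.parse import urlparse

def serp_position(results: list, domain: str):
    """1-based rank of the first result hosted on `domain` (or a subdomain);
    None means the site was not found — report it as N/A."""
    for rank, url in enumerate(results, start=1):
        host = urlparse(url).netloc.lower()
        if host == domain or host.endswith("." + domain):
            return rank
    return None
```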
Notes:
- For single-page apps, the fetched HTML may contain only an empty `<div id="root">`. Check view-source: to verify the HTML contains actual content.
- robots.txt should include a `Sitemap:` directive. Without it, indexing depends on internal link crawling, which is slow for new sites.
- Run `npx lighthouse {url} --output=json` if lighthouse is available.

Troubleshooting:
- Fetch returns empty or unusable content. Cause: URL is behind authentication, CORS, or returns non-HTML. Fix: Ensure the URL is publicly accessible. For SPAs, check if content is server-rendered.
- No SERP positions found. Cause: Site is new or not indexed by search engines. Fix: This is expected for new sites. Submit the sitemap to Google Search Console and re-audit in 2-4 weeks.
- Low overall score. Cause: Missing infrastructure files (sitemap.xml, robots.txt, JSON-LD). Fix: These are the highest-impact fixes. Generate a sitemap, add robots.txt with a sitemap reference, and add JSON-LD structured data.
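The robots.txt checks above (`Sitemap:` directive present, no overly broad Disallow) can be sketched as (`robots_report` is a hypothetical helper name):

```python
def robots_report(robots_txt: str) -> dict:
    """Flag a missing Sitemap: directive and a blanket `Disallow: /` rule."""
    lines = [line.strip() for line in robots_txt.splitlines()]
    has_sitemap = any(line.lower().startswith("sitemap:") for line in lines)
    broad_disallow = any(
        line.lower().replace(" ", "") == "disallow:/" for line in lines
    )
    return {"has_sitemap": has_sitemap, "broad_disallow": broad_disallow}
```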