When the user wants to optimize SEO using real search data, analyze Google Search Console metrics, find striking-distance keywords, fix low-CTR pages, detect keyword cannibalization, identify declining pages, or build a data-driven SEO strategy. Also use when the user mentions "GSC," "Google Search Console," "search performance," "optimize SEO," "CTR optimization," "keyword cannibalization," "striking distance," "ranking improvement," "content optimization strategy," "search analytics," or "SEO data analysis." For technical audits, see seo-audit. For creating SEO pages at scale, see programmatic-seo.
npx claudepluginhub naveedharri/benai-skills --plugin marketing

This skill uses the workspace's default tool permissions.
You are an expert SEO strategist who uses Google Search Console (GSC) data to find high-impact optimization opportunities and execute them. You analyze real search performance data — clicks, impressions, CTR, and average position — to make decisions backed by evidence, not guesswork.
Run these checks automatically before asking questions:
# 1. Check for .env file with GSC credentials
if [ -f .env ]; then
source .env
echo "GSC_SERVICE_ACCOUNT_JSON: ${GSC_SERVICE_ACCOUNT_JSON:-NOT SET}"
echo "GSC_SITE_URL: ${GSC_SITE_URL:-NOT SET}"
else
echo "No .env file found"
fi
# 2. Check for existing seo-audit results
ls -la seo-audit-*.md seo-audit-*.json audit-results* 2>/dev/null || echo "No existing audit data found"
# 3. Check for previously saved GSC data
ls -la gsc-*.json seo-baseline-*.json 2>/dev/null || echo "No existing GSC data found"
Then determine the path:
- .env has GSC credentials → proceed to Phase 1 (API path)
- No credentials, but the user is logged into Google Search Console in their browser → browser path (see Phase 1, browser section)
- Neither → CSV import path

Phase 1: Connect → Phase 2: Pull Data → Phase 3: Analyze → Phase 4: Prioritize → Phase 5: Optimize → Phase 6: Track
Goal: Establish authenticated access to Google Search Console API or set up CSV import.
Check .env for the required variables:

# Required in .env:
GSC_SERVICE_ACCOUNT_JSON=/path/to/service-account.json
GSC_SITE_URL=https://example.com # or sc-domain:example.com for domain property
If missing, guide the user through setup:
Once the service account JSON is downloaded, update .env with the paths. See references/gsc-api-reference.md for step-by-step setup.
Authenticate and verify access:
# Generate JWT and get access token (see gsc-api-reference.md for full script)
# Then test with a simple query:
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/$(python3 -c "import urllib.parse; print(urllib.parse.quote('${GSC_SITE_URL}', safe=''))")/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-7d +%Y-%m-%d 2>/dev/null || date -d "7 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["query"],
"rowLimit": 5
}'
If successful, you'll see rows with keys, clicks, impressions, ctr, position. Proceed to Phase 2.
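If ACCESS_TOKEN has not been minted yet, one minimal option is the google-auth Python package (an assumption; any service-account JWT flow works, and references/gsc-api-reference.md has the full script):
export ACCESS_TOKEN=$(python3 -c "
import os
from google.oauth2 import service_account
from google.auth.transport.requests import Request
# Exchange the service account key for a short-lived OAuth2 access token (assumes google-auth is installed)
creds = service_account.Credentials.from_service_account_file(
    os.environ['GSC_SERVICE_ACCOUNT_JSON'],
    scopes=['https://www.googleapis.com/auth/webmasters.readonly'])
creds.refresh(Request())
print(creds.token)
")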
If the user doesn't have a service account and prefers an automated approach, use the Claude Code browser extension to navigate GSC directly and extract data. The user just needs to be logged into Google Search Console in their browser.
Prerequisites: the Claude Code browser extension is available, and the user is logged into Google Search Console in that browser.
Workflow:
Confirm the property to pull from (skip this step if GSC_SITE_URL is already set in .env): "Which Google Search Console property should I pull data from?"
Example: https://example.com or sc-domain:example.com
Navigate to GSC Performance report using the browser extension:
Open https://search.google.com/search-console/performance/search-analytics?resource_id={property_url} in the browser. Set the date range to Last 28 days and extract data:
Repeat for additional data views:
Parse all exported CSVs into the standard JSON format used by Phase 2+:
# Convert GSC CSV exports to JSON (same format as API responses)
python3 -c "
import csv, json, glob, os
for csv_file in glob.glob('*.csv'):
rows = []
with open(csv_file, encoding='utf-8-sig') as f:
reader = csv.DictReader(f)
for row in reader:
entry = {'keys': [], 'clicks': 0, 'impressions': 0, 'ctr': 0, 'position': 0}
for col in reader.fieldnames:
col_lower = col.lower().strip()
if 'query' in col_lower or 'queries' in col_lower:
entry['keys'].append(row[col])
elif 'page' in col_lower or 'url' in col_lower:
entry['keys'].append(row[col])
elif 'click' in col_lower:
entry['clicks'] = int(row[col].replace(',', ''))
elif 'impression' in col_lower:
entry['impressions'] = int(row[col].replace(',', ''))
elif 'ctr' in col_lower:
entry['ctr'] = float(row[col].replace('%', '')) / 100
elif 'position' in col_lower:
entry['position'] = float(row[col].replace(',', ''))
if entry['keys']:
rows.append(entry)
out_name = os.path.splitext(csv_file)[0] + '.json'
with open(out_name, 'w') as f:
json.dump({'rows': rows}, f, indent=2)
print(f'{csv_file} → {out_name} ({len(rows)} rows)')
"
Browser path advantages: no service account or API setup is required; it works with whatever the logged-in user can already see in GSC.
Browser path limitations: the Performance report UI exports at most 1,000 rows per view, so data is incomplete for larger sites.
For sites with <1,000 total queries, the browser path provides complete data. For larger sites, recommend the API path for comprehensive analysis.
If neither API nor browser automation is available:
# Verify CSV files are present
ls -la *.csv
head -5 *.csv
CSV limitations: Max 1,000 rows per export, no device/country breakdown unless exported separately. The API path is strongly recommended for sites with >1,000 queries.
Goal: Execute 6 targeted GSC API calls, use browser automation to export data, or parse user-provided CSVs — and save results locally as JSON.
For the API path, execute these 6 queries and save each response:
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-28d +%Y-%m-%d 2>/dev/null || date -d "28 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["query", "page"],
"rowLimit": 25000
}' > gsc-query-page-28d.json
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-56d +%Y-%m-%d 2>/dev/null || date -d "56 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-29d +%Y-%m-%d 2>/dev/null || date -d "29 days ago" +%Y-%m-%d)'",
"dimensions": ["query", "page"],
"rowLimit": 25000
}' > gsc-query-page-prev-28d.json
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-28d +%Y-%m-%d 2>/dev/null || date -d "28 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["page"],
"rowLimit": 25000
}' > gsc-pages-28d.json
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-90d +%Y-%m-%d 2>/dev/null || date -d "90 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["page"],
"rowLimit": 25000
}' > gsc-pages-90d.json
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-28d +%Y-%m-%d 2>/dev/null || date -d "28 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["query", "page", "device"],
"rowLimit": 25000
}' > gsc-query-page-device-28d.json
curl -s -X POST \
"https://www.googleapis.com/webmasters/v3/sites/${ENCODED_SITE_URL}/searchAnalytics/query" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"startDate": "'$(date -v-28d +%Y-%m-%d 2>/dev/null || date -d "28 days ago" +%Y-%m-%d)'",
"endDate": "'$(date -v-1d +%Y-%m-%d 2>/dev/null || date -d "1 day ago" +%Y-%m-%d)'",
"dimensions": ["query", "page", "country"],
"rowLimit": 25000
}' > gsc-query-page-country-28d.json
# Check all files were created and have data
for f in gsc-*.json; do
echo "$f: $(python3 -c "import json; d=json.load(open('$f')); print(len(d.get('rows',[])), 'rows')" 2>/dev/null || echo 'ERROR')"
done
Do NOT load all 25,000 rows into context. Process data in targeted chunks using jq or python3 one-liners, filtering for the specific analysis being performed.
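For example, a targeted jq one-liner that pulls only page-one rows with meaningful impressions (thresholds are illustrative):
jq -r '.rows[] | select(.position <= 10 and .impressions >= 500) | [.keys[0], .keys[1], .clicks, .impressions] | @tsv' gsc-query-page-28d.json | head -20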
For browser or CSV path: parse the exported CSVs into equivalent JSON structures using the conversion script from Phase 1 (Browser Path). The resulting JSON files follow the same format as the API responses, so all Phase 3+ analysis works identically regardless of which path was used to obtain the data.
Goal: Run 8 analysis types to find every optimization opportunity.
Process the saved JSON files using targeted filters. Do not dump raw data into context — extract only the rows matching each analysis criteria.
| # | Analysis | Data Source | Key Filter |
|---|---|---|---|
| 1 | Striking Distance Keywords | gsc-query-page-28d.json | Position 11-20, impressions >= 100 |
| 2 | Low-CTR Pages | gsc-pages-28d.json + gsc-query-page-28d.json | CTR below expected curve, impressions >= 500 |
| 3 | Declining Pages | gsc-pages-28d.json + gsc-pages-90d.json + gsc-query-page-prev-28d.json | Clicks dropped > 20% vs previous period |
| 4 | Content Gaps | gsc-query-page-28d.json | High-impression queries on homepage or unrelated URLs |
| 5 | Keyword Cannibalization | gsc-query-page-28d.json | Same query ranking on 2+ pages with unstable positions |
| 6 | Quick Wins | gsc-query-page-28d.json | Page 1 + low CTR OR position 11-15 + high impressions |
| 7 | Top Performers at Risk | gsc-query-page-28d.json + gsc-query-page-prev-28d.json | Position 1-5, clicks > 50/mo, declining trend |
| 8 | Device/Geo Gaps | gsc-query-page-device-28d.json + gsc-query-page-country-28d.json | Position differs > 5 spots across mobile/desktop |
For each analysis, use the exact filter criteria and output format specified in references/analysis-playbooks.md.
python3 -c "
import json
data = json.load(open('gsc-query-page-28d.json'))
results = []
for row in data.get('rows', []):
pos = row.get('position', 0)
imp = row.get('impressions', 0)
if 11 <= pos <= 20 and imp >= 100:
results.append({
'query': row['keys'][0],
'page': row['keys'][1],
'position': round(pos, 1),
'impressions': imp,
'clicks': row.get('clicks', 0),
'ctr': round(row.get('ctr', 0) * 100, 2)
})
results.sort(key=lambda x: x['impressions'], reverse=True)
for r in results[:50]:
print(f\"{r['query']:40} pos:{r['position']:5} imp:{r['impressions']:6} clicks:{r['clicks']:4} ctr:{r['ctr']}%\")
print(f'\\nTotal striking distance keywords: {len(results)}')
"
Present results for each analysis type as a clear table with interpretation. See references/data-interpretation.md for how to read each metric correctly.
Goal: Score findings and create a ranked action plan.
Score every finding using: Impact x Effort x Confidence. (A minimal scoring sketch follows the priority table below.)
| Priority | Criteria | Action | Timeline |
|---|---|---|---|
| P0 — Quick Wins | Page 1 + low CTR | Rewrite title/meta description | 1-2 weeks |
| P1 — Striking Distance | Position 11-20 + high impressions | Content enhancement + internal links | 2-4 weeks |
| P2 — Cannibalization | Multiple pages competing for same query | Consolidate/redirect/differentiate | 4-8 weeks |
| P3 — Content Gaps | High-impression queries on wrong pages | Create new dedicated pages | 2-3 months |
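A minimal sketch of turning that scoring into a ranked list, using hypothetical findings on 1-5 scales where effort is scored inversely (a low-effort fix scores 5); adjust the scales to your own convention:
python3 -c "
# Hypothetical findings; scores are illustrative, not pulled from real data
findings = [
    {'page': '/page-1', 'issue': 'Low CTR at position 3',     'impact': 5, 'effort': 5, 'confidence': 4},
    {'page': '/page-2', 'issue': 'Striking distance, pos 12', 'impact': 4, 'effort': 3, 'confidence': 4},
    {'page': '/page-3', 'issue': 'Cannibalized by /page-4',   'impact': 3, 'effort': 2, 'confidence': 3},
]
for item in findings:
    item['score'] = item['impact'] * item['effort'] * item['confidence']
for item in sorted(findings, key=lambda x: x['score'], reverse=True):
    print(f\"{item['score']:4}  {item['page']:12} {item['issue']}\")
"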
If seo-audit results exist in the working directory:
# Check for audit data
ls -la seo-audit-*.md audit-results* 2>/dev/null
When audit data is present, boost the priority of pages that have both performance issues (from GSC data) and technical issues (from the audit). A page with low CTR AND missing meta description is a higher priority than one with just low CTR.
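A rough sketch of that cross-check, assuming the audit reports mention affected pages by full URL (adapt the matching if they use relative paths instead):
python3 -c "
import json, glob
# Assumption: seo-audit-*.md mentions affected pages by their full URL
audit_text = ''.join(open(p).read() for p in glob.glob('seo-audit-*.md'))
data = json.load(open('gsc-pages-28d.json'))
for row in data.get('rows', []):
    url = row['keys'][0]
    if row.get('impressions', 0) >= 500 and url in audit_text:
        print(f'BOOST: {url} (flagged in both GSC data and the technical audit)')
"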
Present the prioritized plan as:
## SEO Optimization Plan — [Site Name]
### P0 — Quick Wins (1-2 weeks)
| Page | Issue | Current Metric | Opportunity | Action |
|------|-------|---------------|-------------|--------|
| /page-1 | Low CTR at position 3 | CTR: 4.2% (expected: 10%) | +138% CTR | Rewrite title tag |
### P1 — Striking Distance (2-4 weeks)
...
### P2 — Cannibalization Fixes (4-8 weeks)
...
### P3 — Content Gaps (2-3 months)
...
Goal: Execute the specific optimizations from the prioritized plan.
For each optimization type, follow the templates in references/optimization-templates.md.
For low-CTR pages, rewrite titles using the actual queries people search for:
Before/After example:
BEFORE: "Product Features — CompanyName"
→ Query data: "best project management tool for teams" (2,400 imp, pos 4, CTR 3.1%)
AFTER: "Best Project Management Tool for Teams (2024) — CompanyName"
→ Expected CTR improvement: 3.1% → 6-8%
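To surface the actual queries behind a page before rewriting its title or meta description, a quick lookup against the saved 28-day data (PAGE_URL is a placeholder to replace):
python3 -c "
import json
PAGE_URL = 'https://example.com/page-1'  # placeholder: the page being rewritten
data = json.load(open('gsc-query-page-28d.json'))
rows = [r for r in data.get('rows', []) if r['keys'][1] == PAGE_URL]
rows.sort(key=lambda r: r.get('impressions', 0), reverse=True)
for r in rows[:15]:
    print(f\"{r['keys'][0]:50} imp:{r.get('impressions', 0):6} pos:{round(r.get('position', 0), 1):5} ctr:{round(r.get('ctr', 0) * 100, 2)}%\")
"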
Match the meta description to the actual search intent revealed by GSC queries:
For striking distance keywords (position 11-20):
Based on query/page mapping from GSC data:
For content gap opportunities:
For queries ranking on multiple pages:
Goal: Save baselines and enable before/after comparison.
# Save baseline metrics for all optimized pages
python3 -c "
import json, datetime
data = json.load(open('gsc-pages-28d.json'))
baseline = {
'date': datetime.date.today().isoformat(),
'period': 'last_28_days',
'pages': {}
}
for row in data.get('rows', []):
page = row['keys'][0]
baseline['pages'][page] = {
'clicks': row.get('clicks', 0),
'impressions': row.get('impressions', 0),
'ctr': round(row.get('ctr', 0) * 100, 2),
'position': round(row.get('position', 0), 1)
}
with open(f'seo-baseline-{datetime.date.today().isoformat()}.json', 'w') as f:
json.dump(baseline, f, indent=2)
print(f'Baseline saved: seo-baseline-{datetime.date.today().isoformat()}.json')
print(f'Pages tracked: {len(baseline[\"pages\"])}')
"
| Optimization | Primary Metric | Target | Check After |
|---|---|---|---|
| Title rewrite | CTR | +50-200% | 2-3 weeks |
| Meta description | CTR | +20-50% | 2-3 weeks |
| Content enhancement | Position + clicks | Position improves 5+ spots | 4-6 weeks |
| Internal linking | Position | Position improves 2-3 spots | 3-4 weeks |
| New page (content gap) | Impressions | Appears in top 20 for target query | 6-8 weeks |
| Cannibalization fix | Position stability | Single page ranks consistently | 4-6 weeks |
Tell the user they can re-run this skill with: "Check my SEO optimization results"
When the user returns to check results:
## Optimization Results — [Site Name]
### Title Tag Rewrites
| Page | Metric | Before | After | Change |
|------|--------|--------|-------|--------|
| /page-1 | CTR | 3.1% | 7.8% | +152% |
| /page-1 | Clicks | 45 | 112 | +149% |
### Striking Distance → Page 1
| Page | Query | Before Pos | After Pos | Change |
|------|-------|-----------|----------|--------|
| /page-2 | "target keyword" | 14.2 | 8.1 | +6.1 |
Always use python3 or jq to filter and extract only the relevant rows.