Scores and diagnoses Google Ads landing pages on five weighted dimensions — message match, page speed, mobile experience, trust signals, and form/CTA — covering relevance, speed, messaging, and conversion potential. Use for audits, Quality Score LPX improvement, or high-CTR low-CVR ad groups.
```
npx claudepluginhub nowork-studio/toprank --plugin toprank
```

This skill uses the workspace's default tool permissions.
Read and follow `../shared/preamble.md` — it handles MCP detection, token, and account selection. If config is already cached, this is instant.
Google Ads campaigns fail on the landing page more often than in the auction. A great RSA that sends traffic to a slow, unfocused, or mismatched page burns budget twice — once on the click, once on the lost conversion. This skill scores landing pages on 5 weighted dimensions and emits concrete fixes.
| Trigger | Source |
|---|---|
| User explicitly asks for a landing page audit | Direct invocation |
| /ads-audit Layer 4 finds ad groups with CTR > account avg AND CVR < 50% of account avg | Auto-handoff |
| Quality Score diagnosis flags "Landing Page Experience: Below Average" on high-spend keywords | /ads routes here |
| /ads-copy is about to write copy for a page the user hasn't validated | Preflight check |
Only score pages that actually run ad traffic. Don't score random marketing pages.
- `references/scoring-rubric.md` — The 5-dimension weighted rubric, thresholds, and evidence fields (read this before running any score)
- `../ads-audit/references/business-context.md` — Uses `{data_dir}/business-context.json` for brand voice and differentiators to check message match
- `../ads/references/quality-score-framework.md` — Read only if the user's goal is QS improvement specifically

Figure out which URLs to score. In priority order:
1. `listAds` for that ad group and extract unique final URLs. Normalize (strip tracking params, preserve path + query that affects routing).
2. `/ads-audit` — the handoff passes the specific ad groups flagged in Layer 4. Pull their final URLs.
3. `listAds` for the whole account, rank final URLs by spend (last 30 days), and propose the top 3 to score. Ask the user to confirm.

De-duplicate aggressively. Many ads point to the same final URL — score each unique URL once, then map back to all the ad groups that use it.
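The normalize-and-dedupe step above can be sketched as follows. This is a minimal sketch: the tracking-parameter list and the trailing-slash handling are illustrative assumptions, not part of the skill.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common click-tracking params (assumed list -- extend per account).
TRACKING_PARAMS = {"gclid", "gbraid", "wbraid", "fbclid", "msclkid",
                   "utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content"}

def normalize_final_url(url: str) -> str:
    """Strip tracking params; keep the path and any query keys that affect routing."""
    scheme, netloc, path, query, _frag = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc.lower(), path.rstrip("/") or "/",
                       urlencode(kept), ""))

def unique_urls(final_urls: list[str]) -> dict[str, list[str]]:
    """Map each normalized URL back to every raw final URL that uses it."""
    seen: dict[str, list[str]] = {}
    for raw in final_urls:
        seen.setdefault(normalize_final_url(raw), []).append(raw)
    return seen
```

Scoring then runs once per key of `unique_urls(...)`, and the raw-URL lists carry the mapping back to ad groups.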
Run all of these in a single tool-use turn:
- PageSpeed Insights: `https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={url}&strategy=mobile&category=performance&category=accessibility&category=best-practices&category=seo` via WebFetch. No API key needed for single-URL queries. Extract LCP, CLS, INP, TTI, performance score, and the top 3 opportunities (`lighthouseResult.audits`).
- `listAds` — this is the message-match baseline.
- `{data_dir}/business-context.json` — for brand voice, differentiators, offers, target audience. If it's missing, point the user to /ads-audit first. Don't guess the business.
- `listAds` + search term report for the ad groups pointing here. Used to calculate CVR and to ground dollar-impact estimates.

If any single call fails, continue — note the gap in the report rather than blocking. PageSpeed Insights can rate-limit; if it does, fall back to manual timing annotation ("PSI unavailable — could not score Page Speed") and deflate the final score's confidence rather than skipping the dimension.
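Assuming the PSI v5 response shape (`lighthouseResult.audits` keyed by audit id, `categories.performance.score` on a 0-1 scale), building the request URL and pulling the rubric's metrics might look like the sketch below — field names should be verified against a live response:

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(url: str) -> str:
    """Build the PSI request; `category` repeats once per requested category."""
    params = [("url", url), ("strategy", "mobile")]
    params += [("category", c) for c in
               ("performance", "accessibility", "best-practices", "seo")]
    return f"{PSI_ENDPOINT}?{urlencode(params)}"

def extract_metrics(psi_response: dict) -> dict:
    """Pull the fields the rubric needs from a PSI v5 response dict."""
    lhr = psi_response["lighthouseResult"]
    audits = lhr["audits"]
    return {
        "lcp_s": audits["largest-contentful-paint"]["numericValue"] / 1000,
        "cls": audits["cumulative-layout-shift"]["numericValue"],
        # INP may be absent on older Lighthouse versions -- degrade to None.
        "inp_ms": audits.get("interaction-to-next-paint", {}).get("numericValue"),
        "perf_score": round(lhr["categories"]["performance"]["score"] * 100),
    }
```

In-session, WebFetch plays the role of the HTTP call; `extract_metrics` then runs on the parsed JSON.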
Read `references/scoring-rubric.md` and score each dimension 0-100 with evidence. The dimension scores are real measurements (PageSpeed Insights numbers, word-for-word copy comparison, form field counts, etc.) — not artificial ratings, but observations.
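For Message Match specifically, the word-for-word comparison can be as simple as a token-overlap check. A minimal sketch — the 50% and zero-overlap thresholds here are illustrative assumptions, not rubric values:

```python
def message_match_verdict(ad_headline: str, page_h1: str) -> str:
    """Naive word-overlap verdict: Match / Drift / Broken."""
    # Drop very short words as a crude stopword filter (assumption).
    norm = lambda s: {w for w in s.lower().split() if len(w) > 3}
    ad, h1 = norm(ad_headline), norm(page_h1)
    if not ad:
        return "Broken"
    overlap = len(ad & h1) / len(ad)
    if overlap >= 0.5:
        return "Match"
    return "Drift" if overlap > 0 else "Broken"
```

The real check should still cite the actual headline and H1 text as evidence, per the rubric.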
Compute the weighted composite only as an internal reference number for the dollar-lift formula below. Do not surface it as a letter grade. The user sees the dimension-level measurements and the estimated dollar lift — the composite is plumbing.
```
internal_composite = 0.25 * Message Match
                   + 0.25 * Page Speed
                   + 0.20 * Mobile Experience
                   + 0.15 * Trust Signals
                   + 0.15 * Form & CTA
```
Dollar lift is the headline. If `business-context.json` has `unit_economics` with `aov_usd` + `profit_margin`, compute the estimated monthly lift from raising the composite by 15 points (see `../shared/ppc-math.md`):
```
target_lift            = min(+15, 90 - internal_composite)  # cap at 90 internal
assumed_cvr_lift       = target_lift / 100 * 0.5            # cap at 50% relative lift
current_conversions    = ad group conversions from last 30d
additional_conversions = current_conversions * assumed_cvr_lift
additional_revenue     = additional_conversions * AOV
additional_profit      = additional_conversions * AOV * profit_margin
```
Present the lift as fixing this page is worth ~$X/mo in profit — never as a guarantee. The 50% cap on CVR lift and the 15-point cap on score improvement keep estimates out of fantasy territory. If unit_economics isn't available, skip the dollar line entirely rather than making up a number — the dimension measurements still stand on their own.
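Taken together, the composite and the dollar-lift formula can be sketched as below (`dims` holds the five 0-100 dimension scores; names are illustrative):

```python
# Weights from the rubric; dimension scores are 0-100.
WEIGHTS = {"message_match": 0.25, "page_speed": 0.25,
           "mobile": 0.20, "trust": 0.15, "form_cta": 0.15}

def internal_composite(dims: dict) -> float:
    """Weighted composite -- internal plumbing only, never shown as a grade."""
    return sum(WEIGHTS[k] * dims[k] for k in WEIGHTS)

def estimated_monthly_lift(dims: dict, current_conversions: float,
                           aov_usd: float, profit_margin: float) -> float:
    """Monthly profit lift from raising the composite, with both caps applied."""
    target_lift = min(15, 90 - internal_composite(dims))  # cap at 90 internal
    if target_lift <= 0:
        return 0.0  # page already scores >= 90: no dollar claim
    assumed_cvr_lift = target_lift / 100 * 0.5            # 50% score-to-CVR factor
    additional_conversions = current_conversions * assumed_cvr_lift
    return additional_conversions * aov_usd * profit_margin
```

For example, dimension scores of 72/45/80/70/65 give a composite of 65.5, so the full 15-point target applies and 40 conversions/mo at $450 AOV and 35% margin yield roughly $470/mo of estimated profit lift.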
Max 60 lines. Lead with the dollar lift (when available) and the single biggest fix. No letter grade.
# Landing Page — [URL]
Ads sending traffic here: [N ad groups] · [X clicks/mo] · [$Y spent/mo] · CVR [Z%]
[If unit_economics available] **Estimated lift from top 3 fixes: ~$X/mo in profit**
[If unit_economics is missing] _(Dollar lift unavailable — no verified AOV/margin. Confirm unit economics in business-context.json for sharper estimates.)_
**Biggest leak:** [one sentence naming the dimension and the specific observation, e.g. "LCP is 5.8s on mobile — 2.8s slower than the 3s threshold that kills conversion rate."]
## Measurements
| Dimension | Measurement | Top Finding |
|-----------|-------------|-------------|
| Message Match | [word-for-word verdict: Match / Drift / Broken] | [one line citing ad H1 vs page H1] |
| Page Speed | LCP Xs · INP Xms · CLS X · PSI perf score X | [top blocking audit from Lighthouse] |
| Mobile Experience | PSI accessibility X · [mobile-specific issue count] | [one line: e.g. "No click-to-call, form below fold"] |
| Trust Signals | [review count, years in business, cert count] | [one line: e.g. "Zero named testimonials, copyright 2023"] |
| Form & CTA | [field count] fields · CTA text: "[button]" · [above/below fold] | [one line: e.g. "11 fields for a free quote"] |
## Fix First (top 3, ranked by estimated $ lift)
1. **[Action]** — est. +$X/mo · `<time_to_fix>`
Evidence: [the actual text/number from the page or PSI audit]
2. **[Action]** — est. +$X/mo · `<time_to_fix>`
Evidence: [...]
3. **[Action]** — est. +$X/mo · `<time_to_fix>`
Evidence: [...]
## Message Match Detail
Ad headline: "[actual headline from top-spending ad]"
Page H1: "[actual H1 from landing page]"
Observation: [Match / Drift / Broken] — [one-line rationale citing the specific words that match or don't]
## Handoff
[Pick one:]
- Page speed dominates the problem → "Share these fixes with your developer: [list]"
- Message mismatch dominates → "Run /ads-copy to rewrite ads to match the page, or update the page to match the ads"
- Form friction dominates → "Reduce form to [specific fields]. Every removed field is ~10% more conversions"
Append the score to {data_dir}/landing-page-history.json so re-audits can show deltas:
```json
{
  "pages": {
    "https://example.com/services/roofing": {
      "history": [
        {
          "date": "2026-04-14",
          "internal_composite": 67,
          "dimensions": {
            "message_match": 72,
            "page_speed": 45,
            "mobile": 80,
            "trust": 70,
            "form_cta": 65
          },
          "psi_mobile_lcp_s": 4.2,
          "psi_mobile_cls": 0.15,
          "psi_mobile_inp_ms": 320,
          "estimated_lift_usd_per_month": 380,
          "ad_groups": ["Tukwila Search - Roofing"],
          "monthly_spend": 1240.50,
          "monthly_cvr": 2.1,
          "biggest_leak": "Page Speed — LCP 4.2s on mobile"
        }
      ]
    }
  }
}
```
internal_composite is stored for trend tracking only — it's the internal reference number used by the dollar-lift formula, never shown to the user as a letter grade. On subsequent runs against the same URL, diff the raw dimension measurements and the dollar lift: LCP 4.2s → 2.1s · Page Speed 45 → 78 · estimated lift $380/mo → $120/mo remaining. Three measurements moved, no artificial grade flip.
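The append-and-diff flow might look like the sketch below (`append_history` and its signature are illustrative, not an existing helper):

```python
import json
from pathlib import Path

def append_history(data_dir: str, url: str, entry: dict):
    """Append one run to landing-page-history.json; return the previous run for deltas."""
    path = Path(data_dir) / "landing-page-history.json"
    store = json.loads(path.read_text()) if path.exists() else {"pages": {}}
    history = store["pages"].setdefault(url, {"history": []})["history"]
    previous = history[-1] if history else None  # None on a first audit
    history.append(entry)
    path.write_text(json.dumps(store, indent=2))
    return previous
```

On a re-audit, diff `previous` against the new entry to produce the measurement deltas shown to the user.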
- Suggest /ads-copy for new headlines or /ads for bid/negative/budget moves.
- If `unit_economics.source == "inferred_from_template"`, append _(using industry defaults — confirm your AOV/margin for sharper estimates)_ to the lift line.
- Always write `landing-page-history.json`, even if the user doesn't ask — future audits depend on the baseline.