Audits Google Ads accounts for health issues, wasted spend, impression share, and other dimensions while setting up reusable business context JSON for other skills.
```
npx claudepluginhub nowork-studio/toprank --plugin toprank
```

This skill uses the workspace's default tool permissions.
Read and follow `../shared/preamble.md` — it handles MCP detection, token, and account selection. If config is already cached, this is instant.
This is the starting point for any Google Ads account. It does two things:

1. Audits the account — health issues, wasted spend, impression share, and the other dimensions below.
2. Writes `{data_dir}/business-context.json` so every other ads skill (copy, landing pages, competitive analysis) can use it without re-asking.

Run this before anything else. If another ads skill finds business-context.json missing, it should point the user here.
The user may pass arguments that narrow the audit to specific campaigns, services, or focus areas. Parse the arguments before starting data collection.
| User says | Scope | Behavior |
|---|---|---|
| No arguments / "audit my ads" | Full account | Audit all campaigns, all dimensions |
| "focus on grooming" / "grooming campaigns" | Service-scoped | Filter to campaigns matching the service keyword. Still pull account-level data for context (conversion tracking, account settings), but deep-dive analysis, scoring, and recommendations focus on the matched campaigns only |
| "campaign X" / specific campaign name | Campaign-scoped | Same as service-scoped but matched to exact campaign(s) |
| "just check wasted spend" / "impression share" | Dimension-scoped | Full data pull but report only the requested dimension(s) in depth. Scorecard still shows all 7 dimensions for context, but detailed findings and actions focus on the requested area |
Scoping mechanics:

- The audit call returns all campaigns — you need the full picture anyway to calculate account-wide metrics like total spend (the denominator for waste percentages).
- Filter `campaigns[]` and the per-item `findings.*` arrays by the scope-matched campaign names/IDs before scoring. No extra API calls required.
- `business-context.json` captures the whole business, not just the scoped segment. Don't narrow business context to the scope.
- Match campaign names, ad group names, and keyword themes using case-insensitive substring matching. For example, "grooming" matches campaigns named "Tukwila Grooming Search", "Grooming Test", etc. If no campaigns match, tell the user what campaigns exist and ask them to clarify.
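The case-insensitive substring matching can be sketched as below; the function name and campaign data are illustrative, not part of the skill:

```python
def match_scope(campaigns: list[dict], scope_keyword: str) -> list[dict]:
    """Return campaigns whose name contains the scope keyword, case-insensitively."""
    needle = scope_keyword.lower()
    return [c for c in campaigns if needle in c["name"].lower()]

campaigns = [
    {"id": "1", "name": "Tukwila Grooming Search"},
    {"id": "2", "name": "Grooming Test"},
    {"id": "3", "name": "Boarding - Brand"},
]
matched = match_scope(campaigns, "grooming")
# If matched is empty, list the existing campaign names and ask the user to clarify.
```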
Read these reference documents during analysis for expert-level context:
- `references/account-health-scoring.md` — Detailed scoring rubrics for each dimension (0-5 scale with specific criteria)
- `../ads/references/industry-benchmarks.md` — Compare account metrics to industry averages
- `../ads/references/quality-score-framework.md` — QS diagnostics and component-level analysis
- `../ads/references/search-term-analysis-guide.md` — Search term relevance scoring methodology
- `../ads/references/campaign-structure-guide.md` — Account structure best practices

Read these before starting Phase 2 analysis. They contain the numeric thresholds that separate a generic audit from an expert one.
Before auditing, verify that the policy assumptions underpinning this audit are current.
Read `../shared/policy-registry.json` and check each entry: if `last_verified + stale_after_days` < today's date, the entry is stale. For each stale high-volatility entry, search the web for recent changes in its area (e.g., "Google Ads broad match behavior changes 2026"). Compare findings against the `assumption` field. If discrepancies are found:

- Warn the user: ⚠️ Policy drift detected: [area] — [brief description of what changed]. Recommendations in this area may need manual verification.
- Update `policy-registry.json` with corrected assumptions and today's date.

If no high-volatility entries are stale, proceed directly to Phase 1 with no output from this step.
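The staleness rule can be sketched as follows; `is_stale` and the sample entry are illustrative — real entries live in policy-registry.json:

```python
from datetime import date, timedelta

def is_stale(entry: dict, today: date) -> bool:
    """An entry is stale when last_verified + stale_after_days falls before today."""
    last_verified = date.fromisoformat(entry["last_verified"])
    return last_verified + timedelta(days=entry["stale_after_days"]) < today

entry = {"last_verified": "2025-01-01", "stale_after_days": 90,
         "area": "Google Ads broad match behavior"}
stale = is_stale(entry, today=date(2025, 6, 1))
```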
One call does it all. audit runs ~17 queries in parallel server-side and returns pre-computed findings.
Call audit(accountId, days=30) (max 90 days, capped by impression-share data limit). The response shape:
```
{
  account: { name, currency, timezone, autoTagging, trackingTemplate },
  dateRange: { start, end, days },
  summary: { totalSpend, totalConversions, totalConversionValue, totalClicks,
             totalImpressions, cpa, ctr, conversionRate, roas, activeCampaigns },
  pulse: { wasteRate, wasteUsd, demandCaptured, cpa },  // pre-computed metrics
  campaigns: [{  // per-campaign detail
    id, name, type, status, spend, conversions, cpa, ctr,
    impressionShare, budgetLostIS, rankLostIS, isMatrix,  // "healthy" | "capital_problem" | "relevance_problem" | "structural_problem"
    biddingStrategy, targetCpa, searchPartners, displayNetwork,
    weightedQS, lowQSSpendPct, negativeKeywordCount,
    adGroups: [...], topAds: [...], topKeywords: [...],
    deviceBreakdown: { MOBILE, DESKTOP, TABLET, ... }
  }],
  findings: {
    wastedKeywords: [...],      // top 10 by spend, 0 conversions, >10 clicks
    wastedSearchTerms: [...],   // top 10 by spend, 0 conversions
    brandLeakage: { detected, variants, totalSpend, terms },
    miningOpportunities: [...], // top 10 converting search terms (≥2 conv)
    budgetConstrainedWinners: [...], // good CPA + high budget-lost-IS
    negativeConflicts: [...],   // negatives blocking converters
    hasAudienceSegments, conversionActions, matchTypeDistribution,
    assetCoverage, landingPages
  },
  errors?: [...]  // individual sub-query failures, non-fatal
}
```
If the call errors out, surface the error to the user and stop. Don't retry with helper tools — the audit tool is the source of truth for this skill.
Apply scope filtering. If the user specified a scope, identify matching campaigns from campaigns[]. Log the matched names+IDs. If no campaigns match, stop and ask the user to clarify. "In-scope campaigns" = scope-matched subset (or all campaigns if no scope).
Scope + account-wide totals. summary.* and pulse.* are account-wide and are NOT re-aggregated for scoped audits. The scorecard header already notes "Scoped to: X" — reuse those totals and make the scope explicit, don't recompute. Per-campaign scores (keyword health, ad copy, IS, spend efficiency) DO filter to in-scope campaigns only.
Kick off two parallel background calls. Both run in parallel with the rest of Phase 2 scoring — don't block on them:
After the audit call:

- Take `campaigns[].topAds[].finalUrl` and start the crawl (Phase 3, Step 2).
- The audit returns `geoTargetType` (an enum) but not the actual location criteria list. Always run this via `runGaqlQuery` for in-scope campaigns; the Campaign Structure scoring rubric checks for multi-location geo structure and can't score without it:

```sql
SELECT campaign.id, campaign.name,
       campaign_criterion.type, campaign_criterion.negative,
       campaign_criterion.location.geo_target_constant,
       campaign_criterion.proximity.radius,
       campaign_criterion.proximity.radius_units
FROM campaign_criterion
WHERE campaign.id IN (<in-scope campaign IDs>)
  AND campaign_criterion.type IN ('LOCATION', 'PROXIMITY')
```
radius_units: 0 = meters, 1 = kilometers, 2 = miles.
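A minimal decoder for that mapping, assuming the integer enum values listed above; `RADIUS_UNITS` and `radius_in_miles` are hypothetical helper names for illustration:

```python
# Integer values returned for campaign_criterion.proximity.radius_units,
# per the mapping stated in this skill (0 = meters, 1 = kilometers, 2 = miles).
RADIUS_UNITS = {0: "meters", 1: "kilometers", 2: "miles"}

def radius_in_miles(radius: float, units: int) -> float:
    """Normalize a proximity radius to miles for consistent reporting."""
    to_miles = {0: 0.000621371, 1: 0.621371, 2: 1.0}
    return radius * to_miles[units]
```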
Minimum data for a meaningful audit: summary.activeCampaigns > 0 and summary.totalSpend > 0. Otherwise skip to Phase 3 (business context) — there's nothing to score.
Work through each dimension. For each one, assign a numeric score (0-5) and a status label.
Scope-aware scoring: When the audit is scoped, score campaign-level dimensions (structure, keyword health, search terms, ad copy, impression share, spend efficiency) using only in-scope data. Account-level dimensions (conversion tracking) are scored account-wide but with notes about how issues affect the scoped campaigns. The overall health score reflects scoped performance — this gives the user a focused view of the area they care about.
Most evidence for scoring is pre-computed in the audit response. Use this map before re-deriving anything by hand:
| Dimension | Primary audit fields |
|---|---|
| Conversion tracking | account.autoTagging, findings.conversionActions, summary.totalConversions |
| Campaign structure | campaigns[].name (brand/non-brand split), campaigns[].adGroups[], findings.matchTypeDistribution, campaigns[].negativeKeywordCount |
| Keyword health | campaigns[].weightedQS, campaigns[].lowQSSpendPct, findings.wastedKeywords, pulse.wasteRate |
| Search term quality | findings.wastedSearchTerms, findings.miningOpportunities, findings.brandLeakage, findings.negativeConflicts |
| Ad copy | campaigns[].topAds[] (headlineCount, descriptionCount, adStrength), findings.assetCoverage |
| Impression share | campaigns[].impressionShare, campaigns[].budgetLostIS, campaigns[].rankLostIS, campaigns[].isMatrix (already classified) |
| Spend efficiency | pulse.wasteRate, pulse.wasteUsd, summary.cpa, campaigns[].cpa, findings.budgetConstrainedWinners |
The isMatrix enum on each campaign already classifies the IS root cause — "healthy" | "relevance_problem" | "capital_problem" | "structural_problem". Use it directly instead of re-deriving the Impression Share Interpretation Matrix cell.
The pulse.wasteRate and pulse.wasteUsd values use the same formula the report references (keyword waste + search term waste). Don't recompute — report them directly.
Read references/account-health-scoring.md for the detailed rubric per dimension. Use this summary for quick reference:
Score definitions:
| Score | Label | Meaning |
|---|---|---|
| 0 | Critical | Broken or missing entirely — actively losing money |
| 1 | Poor | Major problems — significant waste or missed opportunity |
| 2 | Needs Work | Below acceptable — several clear issues to fix |
| 3 | Acceptable | Functional but room for meaningful improvement |
| 4 | Good | Well-managed with minor optimization opportunities |
| 5 | Excellent | Best-practice level — maintain and scale |
Overall Health Score: Sum all 7 dimension scores, multiply by (100/35), round to nearest integer. This gives a 0-100 score.
| Overall Score | Label | Summary |
|---|---|---|
| 0-25 | Critical | Account has fundamental problems. Stop spending until fixed |
| 26-50 | Needs Work | Significant waste. Focus on top 3 issues before scaling |
| 51-75 | OK | Functional but leaving money on the table |
| 76-90 | Strong | Well-managed. Focus on scaling and marginal gains |
| 91-100 | Excellent | Top-tier account. Maintain and test incrementally |
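The scoring arithmetic above, as a sketch; `overall_health_score` is a hypothetical helper and the seven input scores are made up:

```python
def overall_health_score(dimension_scores: list[int]) -> int:
    """Sum the seven 0-5 dimension scores and scale to a 0-100 health score."""
    assert len(dimension_scores) == 7
    return round(sum(dimension_scores) * 100 / 35)

# Example: 18/35 raw points scales to 51/100 — the "OK" band.
score = overall_health_score([3, 2, 4, 3, 2, 1, 3])
```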
1. Conversion tracking (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | No conversion actions set up. Spending blind |
| 1 | Conversion actions exist but aren't firing (0 conversions recorded despite clicks) |
| 2 | Conversions tracked but auto-tagging disabled, or using only micro-conversions (page views, not leads/sales) |
| 3 | Primary conversion action firing, auto-tagging on, but multiple conversion actions counting duplicates or no value assigned |
| 4 | Clean conversion setup: primary action firing, auto-tagging on, values assigned, no duplicate counting |
| 5 | Full setup: primary + secondary actions, proper attribution window, enhanced conversions or offline conversion import |
2. Campaign structure (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | Single campaign with one ad group containing 50+ unrelated keywords |
| 1 | Some structure but ad groups have 30+ keywords with mixed intent (e.g., "plumber" and "plumbing school" in same group) |
| 2 | Campaigns exist per service/product but ad groups are too broad (15-30 keywords of mixed theme) |
| 3 | Campaigns per service, ad groups by theme (5-20 keywords), but missing brand campaign separation or geo structure |
| 4 | Clean structure: brand separated, services split, tight ad groups, appropriate geo targeting |
| 5 | Optimal: brand/non-brand split, service campaigns, geo-specific where relevant, ad groups of 5-15 tightly themed keywords, negative keyword lists at appropriate levels |
Read `../ads/references/campaign-structure-guide.md` for the ideal structure patterns.

3. Keyword health (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | No keywords with conversions. Average QS < 3. >50% of keywords are zombies (0 impressions 30+ days) |
| 1 | Average QS 3-4. >30% of spend on non-converting keywords. Heavy use of broad match without negatives |
| 2 | Average QS 4-5. 20-30% of spend on non-converting keywords. Some match type issues |
| 3 | Average QS 5-6. 10-20% wasted spend. Reasonable match type mix but gaps in negative coverage |
| 4 | Average QS 6-7. <10% wasted spend. Good match type strategy. Solid negative keyword lists |
| 5 | Average QS 7+. <5% wasted spend. Tight match types. Comprehensive negatives. Regular search term mining |
4. Search term quality (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | >40% of search terms are irrelevant. No negative keywords in place |
| 1 | 30-40% irrelevant terms. Minimal negative keyword coverage |
| 2 | 20-30% irrelevant terms. Some negatives but obvious gaps |
| 3 | 10-20% irrelevant terms. Decent negative coverage. Some converting terms not yet added as keywords |
| 4 | <10% irrelevant terms. Good negative lists. Most high-converting terms already added as keywords |
| 5 | <5% irrelevant terms. Comprehensive negative lists at account and campaign level. Active search term mining program |
Read `../ads/references/search-term-analysis-guide.md` for the relevance scoring methodology.

5. Ad copy (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | No active ads, or only legacy expanded text ads (no RSAs) |
| 1 | RSAs exist but only 1 per ad group. Headline/description variety is poor (repetitive messaging) |
| 2 | 1-2 RSAs per ad group. Some variety but headlines don't include keywords or location |
| 3 | 2+ RSAs per major ad group. Headlines include keywords. Pinning used on H1. Some CTR variation suggests testing is happening |
| 4 | 2-3 RSAs per ad group with distinct messaging angles. Good headline variety (service, value prop, trust, CTA). CTR above industry average |
| 5 | Active A/B testing program. Multiple RSAs with measurably different angles. Regular losers paused, winners iterated. CTR consistently above benchmark |
6. Impression share (Score 0-5) — Data limit: the audit tool caps lookback at 90 days because Google's impression share metrics only support 90 days (not 365).
| Score | Criteria |
|---|---|
| 0 | Search IS < 20%. Missing >80% of potential traffic |
| 1 | Search IS 20-35%. Budget-lost IS > 40% OR rank-lost IS > 60% |
| 2 | Search IS 35-50%. Significant losses from both budget and rank |
| 3 | Search IS 50-65%. Moderate losses — budget-lost IS < 25% and rank-lost IS < 40% |
| 4 | Search IS 65-80%. Losses primarily from rank (fixable with QS improvements) |
| 5 | Search IS > 80%. Brand campaign IS > 95%. Losses are marginal and strategic (intentionally not competing on some queries) |
Use the Impression Share Interpretation Matrix to diagnose the root cause:
| | Rank-Lost IS < 30% | Rank-Lost IS 30-50% | Rank-Lost IS > 50% |
|---|---|---|---|
| Budget-Lost IS < 20% | Healthy — optimize at margins | QS/Bid Problem — improve ads, landing pages, or raise bids on high-QS keywords | Quality Crisis — QS is the bottleneck. Fix ad relevance and landing page experience before spending more |
| Budget-Lost IS 20-40% | Budget Problem — increase budget or narrow targeting. Check if the campaign is profitable enough to justify more spend | Mixed Problem — fix quality first (cheaper than adding budget), then reassess | Structural Problem — bidding on too-competitive keywords. Shift to long-tail and exact match |
| Budget-Lost IS > 40% | Severe Budget Gap — if CPA is good, this is the highest-ROI fix in the account. Double budget or cut keyword count by 50% | Priority: fix rank issues to get more from existing budget, then add budget | Fundamental Misalignment — pause, restructure, then restart. Current approach is burning money |
7. Spend efficiency (Score 0-5)
| Score | Criteria |
|---|---|
| 0 | No conversion data available. Flying blind on efficiency |
| 1 | CPA > 200% of industry average. >40% of spend on non-converting entities |
| 2 | CPA 150-200% of industry avg. 25-40% wasted spend. Major budget misallocation between campaigns |
| 3 | CPA 100-150% of industry avg. 15-25% wasted spend. Some misallocation |
| 4 | CPA within industry norms. <15% wasted spend. Budget roughly proportional to conversion share per campaign |
| 5 | CPA below industry avg. <5% wasted spend. Budget allocation optimized — each campaign's budget share matches its conversion share |
Waste is pre-computed in `pulse.wasteUsd` and `pulse.wasteRate`. The formula matches /ads:

- Keywords with conversions = 0 AND clicks > 10 (top 10 in `findings.wastedKeywords`)
- Search terms with conversions = 0 AND clicks ≥ 10 (top 10 in `findings.wastedSearchTerms`)
- Display network spend (`campaigns[].displayNetwork === true`) with poor performance. The audit flags displayNetwork per campaign; cross-check against low conversion rate to call this out.

Report `pulse.wasteUsd` in the verdict paragraph. Individual wasteful keywords/terms from `findings.wastedKeywords` and `findings.wastedSearchTerms` appear as evidence under the relevant dimension (keyword health or search term quality) — max 3 examples per category, not exhaustive lists.
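A sketch of surfacing the pre-computed figures without recomputing them; `waste_summary` and the sample numbers are illustrative:

```python
def waste_summary(pulse: dict, summary: dict) -> str:
    """Format pulse.wasteUsd / pulse.wasteRate for the verdict paragraph.
    The values come straight from the audit response — never recomputed."""
    return (f"${pulse['wasteUsd']:,.0f} of ${summary['totalSpend']:,.0f} "
            f"({pulse['wasteRate']:.0%}) went to non-converting keywords and search terms")

line = waste_summary({"wasteUsd": 842.0, "wasteRate": 0.17},
                     {"totalSpend": 4950.0})
```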
Discover 2-3 customer personas from the ad data. This runs in parallel with Phase 3 (business context questions) — it uses only the data already pulled in Phase 1.
All signals come from the audit response already in memory — no extra calls needed.
| Source | What it reveals | Audit field |
|---|---|---|
| Wasted + mining search terms | Customer language and intent — the terms they use and which convert | findings.wastedSearchTerms, findings.miningOpportunities |
| Per-campaign top keywords | What the business bids on and what actually gets clicks | campaigns[].topKeywords |
| Ad group names | How the business segments its services — each theme may serve a different persona | campaigns[].adGroups[].name |
| Landing page URLs | Where they land — different pages suggest different customer journeys | findings.landingPages, campaigns[].topAds[].finalUrl |
| Geographic signal | Where they are — metro vs rural | campaigns[].geoTargetType + the supplemental geo GAQL if you ran it |
| Device split | How they search — mobile-heavy suggests on-the-go/urgent need | campaigns[].deviceBreakdown |
Use this full template for the persisted JSON file. In the report output, personas appear as a compact 3-column table (name, example searches, value) — see Phase 4. The JSON file has the full detail for downstream skills like /ads-copy:
| Field | Description | Example |
|---|---|---|
| Name | Descriptive label capturing their defining trait | "The Emergency Caller" |
| Demographics | Role, context, location type | Homeowner, suburban, dual-income household |
| Primary goal | What they're trying to accomplish RIGHT NOW | Fix a burst pipe before it damages the floor |
| Pain points | What's driving them to search | Can't wait for regular business hours. Worried about cost. Doesn't know who to trust |
| Search language | Actual search terms from the data that this persona uses | "emergency plumber near me", "plumber open now", "burst pipe repair cost" |
| Decision trigger | What makes them click the ad and convert | Seeing "24/7" and "Same Day" in the headline. Phone number in the ad. Reviews mentioned |
| Value to business | Estimated revenue or conversion value | High urgency = willing to pay premium. Avg ticket $350-800 |
Save to {data_dir}/personas/{accountId}.json:
```json
{
  "account_id": "1234567890",
  "saved_at": "2024-01-15T10:30:00Z",
  "personas": [
    {
      "name": "The Emergency Caller",
      "demographics": "Homeowner, suburban, any age",
      "primary_goal": "Fix an urgent problem right now",
      "pain_points": ["Can't wait", "Worried about cost", "Doesn't know who's reliable"],
      "search_terms": ["emergency plumber near me", "plumber open now", "burst pipe repair"],
      "decision_trigger": "24/7 availability, phone number visible, reviews",
      "value": "High — willing to pay premium for urgency"
    }
  ]
}
```
These personas feed directly into /ads-copy for headline generation and /ads for keyword strategy.
Skip this phase for scoped audits if {data_dir}/business-context.json already exists and has a recent audit_date. A scoped audit (e.g., "focus on grooming") should deliver findings fast, not re-interview the user. Only run Phase 3 on the first full-account audit or if business-context.json is missing/stale (>90 days old).
Pull as much as possible from the data you already have — only ask the user for what you can't infer.
All fields below derive from the audit response already collected in Phase 1.
| Field | Audit field |
|---|---|
| business_name | account.name |
| services | campaigns[].name, campaigns[].adGroups[].name, campaigns[].topKeywords |
| locations | campaigns[].geoTargetType + supplemental geo GAQL (if you ran it) |
| brand_voice | campaigns[].topAds[] (headlines/descriptions) |
| keyword_landscape.high_intent_terms | Converting keywords in campaigns[].topKeywords with conversions > 0 |
| keyword_landscape.competitive_terms | Keywords in campaigns where isMatrix !== "healthy" and rankLostIS > 0.3 |
| keyword_landscape.long_tail_opportunities | findings.miningOpportunities |
| website | Apex domain from campaigns[].topAds[].finalUrl or findings.landingPages[].url |
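The apex-domain normalization for the website field might look like this sketch; it only strips a leading `www.` — a production version would use the public-suffix list to handle multi-part TLDs and other subdomains:

```python
from collections import Counter
from urllib.parse import urlparse

def apex_domain(url: str) -> str:
    """Approximate the apex domain: drop scheme, path, and a leading 'www.'.
    Other subdomain prefixes are kept in this simplified sketch."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def most_common_domain(final_urls: list[str]) -> str:
    """Frequency-count normalized domains across ads and pick the most common."""
    return Counter(apex_domain(u) for u in final_urls).most_common(1)[0][0]

site = most_common_domain([
    "https://www.example.com/grooming",
    "https://example.com/book",
    "https://other-tracker.net/lp",
])
```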
The crawl starts in the background immediately after audit returns (see Phase 1, "After the audit call"). By the time you reach Phase 3, results should be ready.
Step 1: Resolve the website URL
Find the website URL from the audit response, in priority order:
1. `campaigns[].topAds[].finalUrl` — extract the root domain (e.g., https://example.com). Normalize to the apex domain (strip www. and subdomain prefixes) before frequency-counting across all ads. Use the most common domain.
2. Fall back to `findings.landingPages[].url` (same normalization) if topAds are sparse.

Step 2: Crawl the website
Issue all WebFetch calls in a single tool-use turn so they run in parallel. If any individual fetch fails (404, timeout, blocked), skip that page and continue.
| Page | URL pattern | Why |
|---|---|---|
| Homepage | {root_url} | Services overview, hero messaging, trust signals, brand voice |
| About page | {root_url}/about | Differentiators, history, team, social proof |
| Services page | {root_url}/services | Full service list, service descriptions |
| Top ad landing pages | Up to 3 unique final URLs from ads, excluding any URL that matches the homepage, about, or services pages already being fetched | What the ads actually link to — offers, CTAs, messaging |
Fallback if /about or /services return 404 — try one fallback each:

- /about → /about-us (most common variant)
- /services → /our-services (most common variant)

If the fallback also 404s, move on — don't spider the site.
Detecting unusable pages: If a fetched page has fewer than 50 words of visible text (excluding HTML tags, scripts, and navigation), or if the primary content is a login/auth form (email/password fields, "Sign In" as the main heading), treat it as a failed fetch and skip it for extraction.
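The unusable-page heuristic can be sketched as follows; it assumes `page_text` has already had HTML tags, scripts, and navigation stripped, and the helper name is illustrative:

```python
import re

def is_unusable(page_text: str, main_heading: str = "") -> bool:
    """Apply the rule above: under 50 visible words, or a login form as the
    primary content (password field plus 'Sign In' as the main heading)."""
    words = re.findall(r"\b\w+\b", page_text)
    if len(words) < 50:
        return True
    return ("password" in page_text.lower()
            and "sign in" in main_heading.lower())
```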
Step 3: Extract business context from crawled pages
Scan the fetched page content for these signals. Merge with what you already inferred from account data — website data fills gaps, account data confirms what's active.
| Field | What to look for on the website |
|---|---|
services | Service names from navigation, headings, service cards. Merge with services inferred from campaigns — the website may list services not yet advertised |
differentiators | "Why choose us" sections, hero subheadings, unique value claims (e.g., "Family-owned since 1998", "Same-day service guaranteed") |
social_proof | Review counts, star ratings, award badges, "As seen in" logos, certifications, years in business, number of customers served |
offers_or_promotions | Banner offers, hero CTAs with discounts, seasonal promotions, "Free estimate" or "X% off" |
brand_voice | Tone of headlines and body copy — professional vs casual, technical vs approachable. Capture 3-5 literal phrases from the site that exemplify the tone |
target_audience | Who the site speaks to — homeowners vs businesses, specific industries, demographic cues |
locations | Footer addresses, "Areas we serve" pages, location-specific content |
landing_pages | Map each ad final URL to a summary of what's on that page (headline, primary CTA, offer if any) |
industry | What the business clearly does — confirm or refine what campaign names suggest |
competitors | Look for comparison tables or "vs" pages linked from the nav |
Important: Only extract from pages you actually retrieved with usable content. If the homepage is all you got, that's fine — it usually has the most signal. Extract in the site's original language — downstream skills handle translation when generating English ad copy.
If all pages failed or returned no usable content, skip website extraction entirely and proceed to the full question set below (do not skip any questions).
Present what you inferred from both account data and the website crawl, then ask for what's still missing.
Always ask (these are rarely on websites):
- differentiators (ask even if the website had a "why us" section — the owner's answer is often sharper than marketing copy)
- competitors
- seasonality

Ask only if not found in account data or website crawl:
Write the complete business context to {data_dir}/business-context.json:
```json
{
  "business_name": "",
  "industry": "",
  "website": "",
  "services": [],
  "locations": [],
  "target_audience": "",
  "brand_voice": {
    "tone": "",
    "words_to_avoid": [],
    "words_to_use": []
  },
  "differentiators": [],
  "competitors": [],
  "seasonality": {
    "peak_months": [],
    "slow_months": [],
    "seasonal_hooks": []
  },
  "keyword_landscape": {
    "high_intent_terms": [],
    "competitive_terms": [],
    "long_tail_opportunities": []
  },
  "social_proof": [],
  "offers_or_promotions": [],
  "landing_pages": {},
  "notes": "",
  "audit_date": "",
  "account_id": ""
}
```
Include audit_date (today's date) and account_id so future skills know when this was last refreshed.
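The freshness check described in Phase 3 (skip re-interviewing when audit_date is under 90 days old) might be sketched like this; `context_is_fresh` is a hypothetical helper, and it assumes audit_date is stored as an ISO date string:

```python
import json
import tempfile
from datetime import date
from pathlib import Path

def context_is_fresh(path: Path, today: date, max_age_days: int = 90) -> bool:
    """True if business-context.json exists and its audit_date is within max_age_days."""
    if not path.exists():
        return False
    audit_date = date.fromisoformat(json.loads(path.read_text())["audit_date"])
    return (today - audit_date).days <= max_age_days

# Illustrative check against a temp file — the path and dates are made up.
ctx_path = Path(tempfile.mkdtemp()) / "business-context.json"
ctx_path.write_text(json.dumps({"audit_date": "2025-05-01", "account_id": "1234567890"}))
fresh = context_is_fresh(ctx_path, today=date(2025, 6, 1))
```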
The report follows an onion structure — lead with the verdict, then actions, then evidence. The reader should get the full picture in the first 10 lines, and only needs to keep reading if they want the supporting data.
The #1 rule: no duplication. Each finding appears in exactly one place. The scorecard summarizes, the actions tell you what to do, the evidence shows why. If something is in the scorecard's "Key Finding" column, don't repeat it in the evidence section.
# [Business Name] — Ads Audit
**[Score]/100 · $X,XXX spent (30d) · XX conversions at $XX CPA**
[If scoped] Scoped to: [description]
[2-3 sentence verdict. What's working, what's broken, and the single biggest
opportunity in dollar terms. This paragraph should be enough for someone who
won't read further.]
## What to Fix (in order)
1. **[Specific action]** — [1-line why + expected dollar/conversion impact]
2. **[Specific action]** — [1-line why + expected impact]
3. **[Specific action]** — [1-line why + expected impact]
Run `/ads` to execute any of these.
## Scorecard
| Dimension | Score | Key Finding |
|-----------|-------|-------------|
| Conversion tracking | X/5 | [one line] |
| Campaign structure | X/5 | [one line] |
| Keyword health | X/5 | [one line] |
| Search term quality | X/5 | [one line] |
| Ad copy | X/5 | [one line] |
| Impression share | X/5 | [one line] |
| Spend efficiency | X/5 | [one line] |
## Evidence
[Only include dimensions scoring 0-2. Each dimension gets ONE compact block.
Do NOT repeat what's already in the scorecard or actions — add the supporting
data that explains the score.]
### [Dimension] (X/5)
[2-4 lines of data: the specific keywords, search terms, or metrics that
drove the score. Top 3 examples max — not exhaustive lists. End with the
fix if it wasn't already an action item above.]
## Personas
| Persona | Example searches | Value |
|---------|-----------------|-------|
| [name] | [2-3 terms] | [why they matter] |
## Questions for You
[Only if business context has gaps that matter for the recommendations.
Max 2-3 questions. Don't ask what you can infer from the data.]
These rules prevent the bloated, repetitive reports that make audits hard to read: every finding lives in exactly one section, and detailed fix execution is handed off to the /ads skill.

After the report, add ONE handoff based on the biggest issue found:
| Condition | Handoff |
|---|---|
| Ad copy scored 0-2 | Suggest /ads-copy for RSA variants |
| Impression share scored 0-2 | Suggest /ads for bid optimization |
| 3+ converting search terms not yet keywords | Offer to add them via /ads |
| Wasted spend > 15% | Offer to pause/negative via /ads |
| High CTR but low conversion rate | Suggest landing page audit |
Don't list all possible handoffs — pick the one that matches the #1 action item.
Core principles:

- Run `audit` (one tool call) before asking the user anything. Show up informed.
- Always write business-context.json — this is the handoff to every other ads skill.
- Implementation belongs to /ads. Offer to switch to /ads for implementation.