From toprank
Audits Meta Ads (Facebook + Instagram) account health, gathers business context like personas and funnel events, and persists JSON artifacts for reuse by other Meta ads skills.
`npx claudepluginhub nowork-studio/toprank --plugin toprank`

This skill uses the workspace's default tool permissions.
Diagnose Meta (Facebook + Instagram) account health and persist business context for downstream skills (`/meta-ads`). **Read-only** — never mutates the account. The user runs `/meta-ads` to execute fixes you recommend.
Follow ../shared/preamble.md — MCP detection, OAuth, ad account selection.
| Artifact | Path | When |
|---|---|---|
| Business context | {data_dir}/meta/business-context.json | First full audit, or refresh when audit_date is >90 days old. Skip on scoped audits if file is fresh. |
| Personas | {data_dir}/meta/personas/{accountId}.json | Every full audit. |
These are the handoff to /meta-ads — write them even if the report itself is short. Otherwise downstream skills operate without business context and produce generic output.
If a {data_dir}/business-context.json exists from /google-ads-audit (no meta/ subdir), read it as a starting point — most fields (services, brand voice, differentiators, locations, seasonality) are platform-agnostic. Then write the Meta-specific version to {data_dir}/meta/business-context.json with any Meta-specific overrides (different creative angles, different audiences, different funnel events).
business-context.json schema (shared with Google Ads where fields apply):
business_name, industry, website, services[], locations[], target_audience, brand_voice{tone, words_to_use[], words_to_avoid[]}, differentiators[], competitors[], seasonality{peak_months[], slow_months[], seasonal_hooks[]}, social_proof[], offers_or_promotions[], landing_pages{}, unit_economics{aov_usd, profit_margin, ltv_usd, source}, notes, audit_date, account_id.
Meta-specific extensions:
meta_funnel_events{top_of_funnel, mid_of_funnel, conversion}, creative_inventory{concepts[], formats[], aspect_ratios[]}, custom_audiences{purchasers, abandoners, engagers, list_uploads[]}, pixel_health{pixel_id, capi_enabled, emq_score, last_event_at}.
personas JSON schema: {account_id, saved_at, personas: [{name, demographics, primary_goal, pain_points[], decision_trigger, value, meta_creative_angles[], visual_cues[]}]}. The Meta version adds meta_creative_angles (e.g. "before/after demonstration", "founder-led explainer", "UGC review") and visual_cues (objects, settings, emotions that resonate with this persona). See references/persona-discovery.md.
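A hypothetical personas artifact matching that schema (every value here is illustrative, not from a real account):

```json
{
  "account_id": "act_1234567890",
  "saved_at": "2025-01-15",
  "personas": [
    {
      "name": "Busy homeowner",
      "demographics": "35-54, suburban, dual income",
      "primary_goal": "Fix the roof before winter without taking time off work",
      "pain_points": ["distrust of contractors", "opaque pricing"],
      "decision_trigger": "Visible leak after a storm",
      "value": "high AOV, strong referral rate",
      "meta_creative_angles": ["before/after demonstration", "founder-led explainer"],
      "visual_cues": ["ladder against house", "relieved homeowner", "rain"]
    }
  ]
}
```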
Read ../shared/policy-registry.json. For each entry where last_verified + stale_after_days < today, search the entry's area for recent Meta Ads changes and compare to the recorded assumption. If there is drift, banner the report and suggest a registry update. The Meta platform changes faster than Google Ads (Advantage+, attribution, learning behaviors) — check high-volatility entries every audit.
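The staleness filter itself is simple; a sketch, assuming each registry entry carries `last_verified` (ISO date) and `stale_after_days` as described above:

```javascript
// Return registry entries whose assumptions have passed their freshness window:
// stale when last_verified + stale_after_days falls before today.
function staleEntries(registry, today = new Date()) {
  return registry.filter((entry) => {
    const expires = new Date(entry.last_verified);
    expires.setDate(expires.getDate() + entry.stale_after_days);
    return expires < today;
  });
}
```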
Use a single runScript call with ads.graphParallel to fan out the queries an audit needs. Build the fan-out from this rubric.
A complete audit needs at minimum:
- Account settings (`/{accountId}`) — currency, timezone, business id, spend cap, account status, balance.
- Pixels + custom conversions (`/{accountId}/customconversions` + `/{accountId}/adspixels`) — pixel id, last activity, CAPI status, Event Match Quality (EMQ) score.
- Campaigns (`/{accountId}/campaigns`) — id, name, objective, status, daily/lifetime budget, special_ad_categories, buying_type, bid_strategy, created_time. Last 90 days.
- Ad sets (`/{accountId}/adsets`) — id, name, status, campaign_id, optimization_goal, billing_event, bid_strategy, daily_budget, lifetime_budget, attribution_spec, targeting (summary), promoted_object, learning_stage_info.
- Ads (`/{accountId}/ads`) — id, name, status, ad set, creative summary (image/video, primary text, headline, description, CTA), effective_status.
- Insights (`ads.insights({level:"campaign", date_preset:"last_30d"})`) — spend, impressions, reach, frequency, cpm, link CTR, link clicks, purchases (or other primary action), purchase value, ROAS, CPA.
- Breakdowns (`publisher_platform`, `platform_position`), age/gender, device — use these to spot placement losers and audience composition.
- Recent changes — `/{adsetId}` last_modified or `/{adsetId}` change history.

Compute aggregates in the script, return summarized JSON. Don't return all rows — rank, slice, summarize. The agent narrates the result; the script does the math.
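The rank-slice-summarize pattern inside the script might look like this (the row shape — `campaign_name`, `spend`, `purchase_value` — is illustrative; real rows come from the insights fan-out):

```javascript
// Summarize campaign insight rows in-script so the agent never sees raw rows:
// compute totals, rank by spend, and return only the top slice.
function summarizeInsights(rows, topN = 5) {
  const totalSpend = rows.reduce((sum, r) => sum + r.spend, 0);
  const ranked = [...rows].sort((a, b) => b.spend - a.spend);
  return {
    totalSpend,
    activeCampaigns: rows.length,
    topBySpend: ranked.slice(0, topN).map((r) => ({
      name: r.campaign_name,
      spend: r.spend,
      roas: r.spend > 0 ? r.purchase_value / r.spend : 0,
    })),
  };
}
```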
suggestImprovement is a useful cross-check for the server's heuristic surface — call it as a separate tool after the runScript pass if you want to compare your findings.
If a critical query errors out (auth, schema, API version), surface the error and stop — don't fall back to a degraded audit.
Skip scoring entirely if totalSpend == 0 or activeCampaigns == 0. Go straight to business context.
If the user narrows the audit ("focus on one campaign", "campaign X", "just check creative fatigue"), limit queries and scoring to that scope, and skip the business-context refresh if business-context.json is fresh.

Score each of the 7 dimensions 0–5 using references/account-health-scoring.md. Overall = round(sum × 100 / 35).
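The overall-score arithmetic as a quick sketch:

```javascript
// Overall health score: seven 0-5 dimension scores, scaled to 0-100.
// round(sum * 100 / 35), per the scoring rubric.
function overallScore(dimensionScores) {
  if (dimensionScores.length !== 7) throw new Error("expected 7 dimensions");
  const sum = dimensionScores.reduce((a, b) => a + b, 0);
  return Math.round((sum * 100) / 35);
}
```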
| Score | Label | Meaning |
|---|---|---|
| 0 | Critical | Broken or missing — actively losing money |
| 1 | Poor | Major waste or missed opportunity |
| 2 | Needs Work | Several clear issues |
| 3 | Acceptable | Functional, room to improve |
| 4 | Good | Well-managed, minor opportunities |
| 5 | Excellent | Best-practice |
Scope-aware: campaign-level dimensions reflect in-scope data; account-level dimensions (Pixel + CAPI, attribution setup) score account-wide with a note on scope impact.
| CAPI state | EMQ < 5 | EMQ 5–6.9 | EMQ 7.0+ |
|---|---|---|---|
| CAPI off | Critical — flying blind | Critical — most events lost | High — leaving 15–25% of events on the table |
| CAPI on, dedup off | Critical — duplicated and weak signal | High — duplicate counting risk | Medium — match quality improves with dedup |
| CAPI on, dedup on | High — match quality is the bottleneck | Medium — improve event_id coverage | Healthy |
Derive what you can from the data already pulled:
| Field | Source |
|---|---|
| business_name | Ad account name (`/{accountId}` name field) |
| services | Top campaigns by spend, ad set names, top-converting ad creatives |
| locations | Targeting geo summary (countries / regions in active ad sets) |
| brand_voice | Top-performing ad copy (primary text + headline) |
| creative_inventory.formats | Mix of image / video / carousel observed in active ads |
| creative_inventory.aspect_ratios | Aspect ratios across active ads (1:1, 4:5, 9:16) |
| meta_funnel_events.conversion | Most common optimization event on top-spending ad sets |
| custom_audiences | Custom audiences referenced in active ad set targeting |
| pixel_health | From the Pixel detail call |
| website | Apex domain from active ad final URLs |
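For example, the website field can fall out of the URLs already pulled — a naive sketch that takes the most common domain across ad final URLs (only the common `www.` prefix is stripped; multi-part TLDs like `.co.uk` are not handled):

```javascript
// Derive the website field: most frequent domain across active-ad final URLs,
// with a leading "www." stripped.
function deriveWebsite(finalUrls) {
  const counts = new Map();
  for (const u of finalUrls) {
    const host = new URL(u).hostname.replace(/^www\./, "");
    counts.set(host, (counts.get(host) || 0) + 1);
  }
  const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
  return ranked.length ? ranked[0][0] : null;
}
```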
Then crawl the website (homepage + about + 1–2 top landing pages, parallel WebFetch) and merge into the schema. See references/business-context.md for the full crawl procedure.
Always ask the user: differentiators, competitors, seasonality, AOV + profit margin (essential for ROAS-aware scoring). Ask for everything else only if data + crawl can't answer it.
Discover 2–3 personas from creative performance (which angles convert), top-spending audiences, and landing-page content — all from the dataset already in memory. Persist to {data_dir}/meta/personas/{accountId}.json. Each persona must be grounded in observable evidence (a converting ad set, a converting creative angle, a landing-page section) — no inventing. See references/persona-discovery.md.
Lead with the verdict, then the top 3 actions (with dollar impact when possible), then the scorecard, then evidence for dimensions scoring 0–2 only. Cite specific campaigns, ad sets, ads, and dollar amounts. Cap at ~80 lines.
End with a single closing line after the handoff to /meta-ads:
Your audit history is saved to your NotFair account — view it at https://notfair.co.
End the report with one handoff to /meta-ads tied to the #1 action. Write meta/business-context.json and meta/personas/{accountId}.json even if the report itself is short — downstream skills depend on them.