Grades running ad campaigns as Red (stop)/Yellow (hold)/Green (scale) using CPA, CTR, conversions, spend data; detects creative fatigue; analyzes LTV:CAC; provides scaling recommendations.
npx claudepluginhub superamped/ai-marketing-skills --plugin ai-marketing-skills

This skill uses the workspace's default tool permissions.
Use when reviewing a running campaign to decide what to kill, keep, or scale. Works for daily 15-minute ad reviews, weekly creative refresh planning, and monthly performance trend reviews.
Assesses ad creative fatigue risk across channels like Meta, Google, TikTok; scores creatives, predicts decline, generates refresh recommendations and A/B test plans. Use when ads underperform or need refresh timing.
Reviews ad budget allocation, bidding strategies, and scaling readiness across platforms like Google, Meta, LinkedIn. Recommends campaigns to kill or scale using 70/20/10 rule, 3x Kill Rule.
Audits marketing performance for ads (Meta, TikTok, Google) and organic channels, diagnoses root causes with decision trees, generates 48h action plan and weekly checklist.
Ask the user for: per-ad performance data (impressions, clicks, conversions, spend, days running), the target CPA, and the average order value (AOV).
Normalize the input into a consistent table structure:
| Ad / Ad Set | Impressions | Clicks | CTR | Conversions | Spend | CPA | Days Running |
|---|---|---|---|---|---|---|---|
Calculate any missing derived metrics: CTR = Clicks ÷ Impressions, CPA = Spend ÷ Conversions.
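The derived-metric step can be sketched as follows (the field names are illustrative, not a required input schema):

```python
# Minimal sketch of filling in derived metrics. Keys are assumptions
# for illustration, not the skill's required input format.
def fill_derived_metrics(ad: dict) -> dict:
    """Compute CTR and CPA when the raw counts are present."""
    out = dict(ad)
    if "ctr" not in out and out.get("impressions"):
        out["ctr"] = out["clicks"] / out["impressions"]    # clicks per impression
    if "cpa" not in out and out.get("conversions"):
        out["cpa"] = out["spend"] / out["conversions"]     # cost per acquisition
    return out

ad = fill_derived_metrics(
    {"impressions": 10_000, "clicks": 250, "conversions": 10, "spend": 300.0}
)
print(f"CTR {ad['ctr']:.1%}, CPA ${ad['cpa']:.2f}")  # CTR 2.5%, CPA $30.00
```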
### 🔴 Red (Stop)

Kill this ad. It's burning money.

Criteria (any one triggers Red):

Action: Turn off immediately. Redirect budget to greens.

### 🟡 Yellow (Wait)

Don't touch it. It needs more data or is borderline.

Criteria:

Action: Do nothing. Check again tomorrow. Resist the urge to tweak.

### 🟢 Green (Scale)

This ad is working. Give it more budget.

Criteria:

Action: Scale using the 20% Rule — increase daily budget by 20% every 48 hours.
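The traffic-light grade can be sketched as a small function. The exact criteria are the skill's own; the thresholds below (the 3x Kill Rule for Red, a minimum-conversion gate for Green) are illustrative assumptions:

```python
# Hedged sketch of the Red/Yellow/Green grade. Thresholds are assumptions:
# the skill defines the real criteria lists.
def grade(cpa, target_cpa, conversions, spend):
    if conversions == 0 and spend >= 3 * target_cpa:
        return "red"     # 3x Kill Rule: spent 3x target CPA with nothing to show
    if cpa is not None and cpa <= target_cpa and conversions >= 5:
        return "green"   # beating target with enough conversions to trust
    return "yellow"      # borderline or not enough data: wait

assert grade(cpa=None, target_cpa=30, conversions=0, spend=120) == "red"
assert grade(cpa=25, target_cpa=30, conversions=8, spend=200) == "green"
assert grade(cpa=35, target_cpa=30, conversions=3, spend=105) == "yellow"
```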
Compare each ad's metrics against industry benchmarks:
CTR Benchmarks (by targeting type):
| Targeting | Expected CTR |
|---|---|
| Broad / run-of-network | 1–3% |
| Interest-based targeting | 2–4% |
| Lookalike / community-targeted | 3–5% |
CPA Targets by Offer Price:
| Offer Price Range | Expected CPA Range |
|---|---|
| $7–27 (low ticket) | $20–40 |
| $37–97 (mid ticket) | $40–120 |
| $97+ (high ticket) | Varies — must model LTV |
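The benchmark comparison can be sketched against the CTR table above; the status labels mirror the report template (on track / below / above):

```python
# Benchmark lookup sketch using the CTR ranges from the table above.
CTR_BENCHMARKS = {               # targeting type -> (low, high) expected CTR
    "broad": (0.01, 0.03),
    "interest": (0.02, 0.04),
    "lookalike": (0.03, 0.05),
}

def ctr_status(ctr: float, targeting: str) -> str:
    low, high = CTR_BENCHMARKS[targeting]
    if ctr < low:
        return "below"
    if ctr > high:
        return "above"
    return "on track"

print(ctr_status(0.025, "interest"))   # on track
print(ctr_status(0.012, "lookalike"))  # below
```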
Before changing creative, check whether the ad is actually the problem. Work from the surface inward:
Work from #1 → #5 in order. Most founders jump to #3 or #4 when the actual problem is #1 or #2.
Data thresholds — don't debug on noise:
Check for fatigue signals across the data:
If fatigue is detected, flag which ads are affected and recommend:
The North Star: Is AOV > CPA?
For each green ad, calculate: profit per conversion (AOV minus CPA), ROAS (AOV ÷ CPA), and margin.
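The per-ad profitability math behind the report's Profitability table follows directly from AOV and CPA:

```python
# Sketch of per-ad profitability: profit per conversion, ROAS, and margin.
# ROAS reduces to AOV/CPA because revenue = AOV * conversions and
# spend = CPA * conversions.
def profitability(aov: float, cpa: float) -> dict:
    return {
        "profit_per_conv": aov - cpa,     # dollars kept per conversion
        "roas": aov / cpa,                # revenue per ad dollar
        "margin": (aov - cpa) / aov,      # fraction of AOV kept after ad cost
    }

p = profitability(aov=97.0, cpa=38.8)
print(f"${p['profit_per_conv']:.2f} profit, {p['roas']:.1f}x ROAS, {p['margin']:.0%} margin")
```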
If LTV data is available, assess the pricing-level health of the campaign:
LTV:CAC Ratio Benchmarks:
| Pricing Function | Average LTV:CAC |
|---|---|
| No pricing function | 1.68 |
| Yearly pricing review | 3.23 |
| Continuous optimization | 11.09 |
Interpret the ratio:
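As a sketch, the ratio can be mapped against the benchmark tiers above. These cutoffs are assumptions drawn from the table's averages, not thresholds the skill defines:

```python
# Illustrative LTV:CAC interpretation. Cutoffs (1.68, 3.23) come from the
# benchmark table's averages and are assumptions, not skill-defined rules.
def interpret_ltv_cac(ratio: float) -> str:
    if ratio < 1.68:
        return "below the no-pricing-function average: fix monetization first"
    if ratio < 3.23:
        return "typical of no pricing function: review pricing before scaling"
    return "at or above the yearly-review average: economics support scaling"

print(interpret_ltv_cac(2.1))
```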
Monetization impact reminder: A 1% improvement in monetization yields a 12.7% increase in bottom-line revenue — roughly 4x the impact of acquisition and 2x the impact of retention. If LTV:CAC is weak, the fix may be pricing, not ads.
When analyzing multi-channel campaigns, note attribution limitations:
For each green ad, provide specific scaling numbers:
The 20% Rule:
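The 20% Rule schedule (stated earlier as a 20% daily-budget increase every 48 hours) can be sketched as follows; the number of steps and the whole-dollar rounding are assumptions:

```python
# Sketch of a 20% Rule scaling schedule: +20% daily budget every 48 hours.
from datetime import date, timedelta

def scaling_plan(budget: float, start: date, steps: int = 3):
    plan, day = [], start
    for _ in range(steps):
        budget = round(budget * 1.20)    # increase daily budget by 20%
        plan.append((day.isoformat(), budget))
        day += timedelta(days=2)         # wait 48 hours between increases
    return plan

print(scaling_plan(50.0, date(2024, 6, 1)))
# [('2024-06-01', 60), ('2024-06-03', 72), ('2024-06-05', 86)]
```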
For the overall campaign:
# Campaign Analysis
**Date:** [current date]
**Campaign:** [campaign name or description]
**Period:** [date range of data]
**Target CPA:** $[amount]
**AOV:** $[amount]
---
## Traffic Light Summary
| Grade | Count | % of Spend |
|-------|-------|-----------|
| 🔴 Red (Stop) | X | X% |
| 🟡 Yellow (Wait) | X | X% |
| 🟢 Green (Scale) | X | X% |
**Campaign Health:** [Healthy / Needs Attention / Critical] — [one sentence summary]
---
## Ad-Level Grades
| Ad / Ad Set | Grade | Spend | CPA | Target CPA | CTR | Conv. | Action |
|-------------|-------|-------|-----|-----------|-----|-------|--------|
| [name] | 🔴 | $X | $X | $X | X% | X | Stop — [reason] |
| [name] | 🟡 | $X | $X | $X | X% | X | Wait — [reason] |
| [name] | 🟢 | $X | $X | $X | X% | X | Scale to $X/day |
---
## Benchmark Comparison
| Metric | Your Average | Benchmark | Status |
|--------|-------------|-----------|--------|
| CTR | X% | X–X% | ✅ On track / ⚠️ Below / 🔥 Above |
| Conversion Rate | X% | X–X% | ✅ / ⚠️ / 🔥 |
| CPA | $X | $X–X | ✅ / ⚠️ / 🔥 |
---
## Creative Fatigue Alerts
[List any ads showing fatigue signals, or "No fatigue signals detected."]
---
## Profitability
| Ad / Ad Set | CPA | AOV | Profit/Conv. | ROAS | Margin |
|-------------|-----|-----|-------------|------|--------|
| [name] | $X | $X | $X | X.Xx | X% |
**Overall ROAS:** X.Xx
**Overall Profit/Conversion:** $X
---
## Action Items
### Immediate (Today)
- [ ] Stop: [list red ads]
- [ ] Scale: [list green ads with specific new budgets]
### This Week
- [ ] [Creative refresh, new tests, etc.]
### Review Cadence
- Next daily check: [tomorrow]
- Next weekly review: [date]
- Next monthly review: [date]
---
## Scaling Plan
| Ad | Current Budget | New Budget | Apply On | Next Increase |
|----|---------------|-----------|----------|---------------|
| [name] | $X/day | $X/day | [date] | $X/day on [date] |
**Total daily spend:** $X → $X (recommended)