From the apple-dev plugin.
Interpret app metrics and make data-driven decisions. Covers DAU/MAU, retention, LTV, ARPU, App Store Connect analytics, AARRR funnel analysis, cohort analysis, and diagnostic decision trees. Use when user wants to understand their metrics, diagnose problems, or build a data-driven growth plan.
Install:

```
npx claudepluginhub autisticaf/autisticaf-claude-code-marketplace --plugin apple-dev
```

This skill uses the workspace's default tool permissions.
> **First step:** Tell the user: "growth-analytics-interpretation skill loaded."
Interpret your app's metrics, diagnose problems, and make data-driven decisions. Works with App Store Connect data, third-party analytics, or raw numbers the user provides.
Use this skill when the user:

- Wants to understand what their metrics mean (DAU/MAU, retention, LTV, ARPU)
- Needs to diagnose a problem ("my [metric] is bad, what do I do?")
- Wants to build a data-driven growth plan
Ask the user via AskUserQuestion:

- Monetization model: free with ads, freemium, subscription, or paid?
- Stage: pre-launch, early, growing, or established?
- Data source: App Store Connect, third-party analytics, or raw numbers?
Different monetization models have different north star metrics.
### Free (ad-supported)

| Metric | Why It Matters |
|---|---|
| DAU/MAU | More daily users = more ad impressions |
| Session length | Longer sessions = more ad views |
| Sessions per day | More sessions = more revenue opportunities |
| Ad impressions/revenue | Direct revenue driver |
| D1/D7/D30 retention | Users must come back for ads to work |
### Freemium

| Metric | Why It Matters |
|---|---|
| Conversion rate (free → paid) | Primary revenue driver |
| Time to conversion | How long before users see enough value |
| Feature adoption | Which features drive upgrades |
| Revenue per download | Overall monetization efficiency |
| D7 retention (free users) | Must retain long enough to convert |
### Subscription

| Metric | Why It Matters |
|---|---|
| Trial start rate | Top of subscription funnel |
| Trial → paid conversion | Critical conversion point |
| Monthly churn rate | Determines LTV |
| LTV (lifetime value) | Revenue per subscriber over their lifetime |
| Payback period | Months to recoup acquisition cost |
| MRR / ARR | Business health snapshot |
| Subscriber retention (Month 1-12) | Long-term revenue curve |
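As a sketch of how these subscription metrics relate arithmetically (function names and all figures below are illustrative, not pulled from any real dashboard): LTV is roughly ARPU divided by monthly churn, and the payback period is acquisition cost divided by monthly revenue per subscriber.

```python
# Back-of-envelope subscription math. All numbers are hypothetical.

def ltv(arpu_monthly: float, monthly_churn: float) -> float:
    """Approximate lifetime value: monthly revenue per user / monthly churn rate."""
    return arpu_monthly / monthly_churn

def payback_months(cpa: float, arpu_monthly: float) -> float:
    """Months of subscription revenue needed to recoup the acquisition cost."""
    return cpa / arpu_monthly

# Example: $6/month subscriber, 5% monthly churn, $24 cost per acquisition.
print(f"LTV ≈ ${ltv(6.0, 0.05):.0f}")                  # ≈ $120 lifetime revenue
print(f"payback ≈ {payback_months(24, 6.0):.0f} mo")   # 4 months to break even
```

This is the arithmetic behind the "CPA < 1/3 of LTV" sustainability check later in this skill: $24 CPA against a $120 LTV passes comfortably.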
### Paid

| Metric | Why It Matters |
|---|---|
| Downloads per day/week | Direct revenue driver |
| Revenue per download | Should equal price minus Apple's cut |
| Refund rate | Product quality signal (keep < 5%) |
| Ratings and reviews | Social proof drives more downloads |
| Organic vs. paid ratio | Sustainability indicator |
```
Impressions (your app appeared in search/browse)
    ↓  Tap-through rate = Product Page Views / Impressions
Product Page Views (user tapped to see your page)
    ↓  Conversion rate = Downloads / Product Page Views
Downloads (user installed your app)
    ↓  D1 retention
Day 1 Active Users
    ↓  D7 retention
Day 7 Active Users
    ↓  D30 retention
Day 30 Active Users
    ↓  Monetization
Paying Users
```
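The stage-to-stage rates in this funnel are all simple ratios of consecutive counts. A minimal sketch, with hypothetical counts chosen to sit near the benchmark ranges below:

```python
# Compute each stage-to-stage conversion rate from raw funnel counts.
# The counts below are hypothetical.

def stage_rates(funnel):
    """Conversion rate between each consecutive pair of funnel stages."""
    return {
        f"{a} -> {b}": nb / na
        for (a, na), (b, nb) in zip(funnel, funnel[1:])
    }

funnel = [
    ("Impressions", 50_000),
    ("Product Page Views", 3_000),   # tap-through rate = 6%
    ("Downloads", 900),              # page conversion rate = 30%
    ("Day 1 Active", 300),           # D1 retention ≈ 33%
    ("Day 7 Active", 130),
    ("Day 30 Active", 60),
]

for stage, r in stage_rates(funnel).items():
    print(f"{stage}: {r:.1%}")
```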
### Impressions → Product Page Views (Tap-Through Rate)
| Rating | TTR | Interpretation |
|---|---|---|
| Good | > 8% | Icon and title are compelling |
| Average | 4-8% | Room to improve first impression |
| Poor | < 4% | Icon, title, or subtitle need work |
What to fix if low:

- Icon: redesign and test variants
- Title: rewrite as [Brand] - [Value Keyword]
- Subtitle: lead with a specific benefit
### Product Page Views → Downloads (Conversion Rate)
| Rating | CVR | Interpretation |
|---|---|---|
| Good | > 40% | Screenshots and description are effective |
| Average | 25-40% | Some friction on the product page |
| Poor | < 25% | Major product page issues |
What to fix if low:

- Screenshots: show the core value first
- Description: open with the main benefit
- Average rating: low ratings suppress conversion
### Downloads → Day 1 Retention
| Rating | D1 | Interpretation |
|---|---|---|
| Good | > 35% | Onboarding delivers on promise |
| Average | 20-35% | Some users confused or disappointed |
| Poor | < 20% | App not delivering expected value |
What to fix if low:

- Onboarding: completion rate, length, skip option
- First-run experience: get users to the "aha moment" in session one
- Stability: crashes and slow loads (check crash reports)
- Expectations: make sure the app matches what the screenshots promised
### Day 1 → Day 7 Retention
| Rating | D7 | Interpretation |
|---|---|---|
| Good | > 20% | Users forming habit |
| Average | 10-20% | Some users finding value |
| Poor | < 10% | Most users abandoning after trying |
What to fix if low:

- Give users a reason to return (reminders, fresh content, saved progress)
- Reinforce the core habit loop that retained users follow
### Day 7 → Day 30 Retention
| Rating | D30 | Interpretation |
|---|---|---|
| Good | > 10% | Strong product-market fit signal |
| Average | 5-10% | Decent but room to grow |
| Poor | < 5% | Retention cliff — users churning |
What to fix if low:

- Find the retention cliff in your cohort data (see cohort analysis below)
- Identify what retained users do differently and guide all users toward it
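The good/average/poor thresholds from the five stage tables above fold neatly into one small rating helper. The thresholds come straight from those tables; the function and metric names are just for illustration:

```python
# Rate a funnel metric against the benchmark tables above.
# Thresholds are taken directly from those tables; names are illustrative.

BENCHMARKS = {
    # metric: (good at or above, average at or above); below average is poor
    "tap_through_rate": (0.08, 0.04),
    "page_conversion":  (0.40, 0.25),
    "d1_retention":     (0.35, 0.20),
    "d7_retention":     (0.20, 0.10),
    "d30_retention":    (0.10, 0.05),
}

def rate(metric: str, value: float) -> str:
    good, average = BENCHMARKS[metric]
    if value >= good:
        return "good"
    if value >= average:
        return "average"
    return "poor"

print(rate("tap_through_rate", 0.06))  # "average"
print(rate("d1_retention", 0.18))      # "poor"
print(rate("d30_retention", 0.12))     # "good"
```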
The pirate metrics framework — diagnose where your funnel leaks.
### Acquisition

| Metric | Benchmark | Diagnostic |
|---|---|---|
| Organic search impressions | Growing month-over-month | Are your keywords working? |
| Browse impressions | Category-dependent | Are you getting featured/editorial? |
| Referral traffic | > 10% of total | Do users share your app? |
| Paid acquisition CPA | < 1/3 of LTV | Is paid acquisition sustainable? |
Questions to ask:
### Activation

| Metric | Benchmark | Diagnostic |
|---|---|---|
| Onboarding completion | > 70% | Is onboarding too long? |
| "Aha moment" reached | > 50% in first session | Do users discover core value? |
| First key action taken | > 40% of installs | Are users doing the main thing? |
Questions to ask:
### Retention

| Metric | Benchmark | Diagnostic |
|---|---|---|
| D1 retention | 25-40% | First impression quality |
| D7 retention | 15-25% | Habit formation |
| D30 retention | 8-15% | Product-market fit |
| DAU/MAU ratio | 15-30% | Daily engagement strength |
Questions to ask:
### Revenue

| Metric | Benchmark | Diagnostic |
|---|---|---|
| Free → trial rate | 10-30% | Is the paywall compelling? |
| Trial → paid rate | 40-60% | Does the trial demonstrate value? |
| ARPU (all users) | Category-dependent | Overall monetization efficiency |
| ARPPU (paying users) | 5-20x ARPU | Are payers happy with value? |
Questions to ask:
### Referral

| Metric | Benchmark | Diagnostic |
|---|---|---|
| Organic multiplier | > 1.0 | Each user brings > 1 new user |
| Share rate | > 5% of MAU | Users actively sharing |
| Rating/review rate | > 1% of MAU | Users willing to vouch publicly |
| Average rating | > 4.5 | High satisfaction |
Questions to ask:
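One way to sanity-check the organic multiplier above is a classic k-factor estimate: new users generated per existing user per period. A sketch with entirely hypothetical inputs (both parameter names are invented for illustration):

```python
# Illustrative k-factor (organic multiplier) estimate.
# invites_per_user and invite_conversion are hypothetical inputs.

def k_factor(invites_per_user: float, invite_conversion: float) -> float:
    """New users per existing user: invites sent x invite conversion rate."""
    return invites_per_user * invite_conversion

k = k_factor(invites_per_user=2.5, invite_conversion=0.3)
print(f"k = {k:.2f}")
print("self-sustaining growth" if k > 1.0 else "paid/ASO acquisition still required")
```

A k above 1.0 corresponds to the "each user brings > 1 new user" benchmark; most apps sit well below it, so referral usually supplements rather than replaces acquisition.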
| Cohort | Month 0 | Month 1 | Month 2 | Month 3 | Month 4 | Month 5 |
|---|---|---|---|---|---|---|
| Jan | 100% | 62% | 55% | 50% | 48% | 46% |
| Feb | 100% | 58% | 51% | 46% | 44% | — |
| Mar | 100% | 65% | 59% | 54% | — | — |
| Apr | 100% | 70% | 63% | — | — | — |
| May | 100% | 68% | — | — | — | — |
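A triangle like this can be derived from raw signup/activity events. A minimal sketch with synthetic data, approximating months as 30-day buckets:

```python
from collections import defaultdict
from datetime import date

# Build a cohort-retention triangle from (user, signup_date, active_date)
# events. Data is synthetic; months are simplified to 30-day buckets.

events = [
    ("u1", date(2024, 1, 5),  date(2024, 1, 5)),
    ("u1", date(2024, 1, 5),  date(2024, 2, 10)),
    ("u2", date(2024, 1, 20), date(2024, 1, 20)),
    ("u3", date(2024, 2, 2),  date(2024, 2, 2)),
    ("u3", date(2024, 2, 2),  date(2024, 3, 15)),
]

def retention_triangle(events):
    cohort_users = defaultdict(set)   # signup month -> all users in cohort
    active = defaultdict(set)         # (signup month, month offset) -> users
    for user, signup, seen in events:
        cohort = (signup.year, signup.month)
        offset = (seen - signup).days // 30   # months since signup (approx.)
        cohort_users[cohort].add(user)
        active[(cohort, offset)].add(user)
    return {
        cohort: {
            offset: len(active[(c, offset)]) / len(users)
            for (c, offset) in sorted(active) if c == cohort
        }
        for cohort, users in cohort_users.items()
    }

for cohort, row in sorted(retention_triangle(events).items()):
    print(cohort, {m: f"{r:.0%}" for m, r in row.items()})
```

The January cohort here retains 2/2 users in Month 0 and 1/2 in Month 1, which is exactly the kind of row the table above summarizes at scale.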
What to look for:
- **Month 0 → Month 1 drop:** The biggest drop. Industry average is 30-50% churn; if yours is above 50%, the trial experience needs work.
- **Flattening curve:** Retention should flatten over time. If Month 3 → Month 4 → Month 5 are similar, you've found your "natural retention floor."
- **Improving cohorts:** Compare Jan vs. Apr cohorts at the same month. If Apr Month 1 (70%) > Jan Month 1 (62%), your product improvements are working.
- **Retention cliff:** A sudden drop at a specific month often has a cause tied to that point in the lifecycle; investigate what happens to users at exactly that month (for subscriptions, renewal billing is a common culprit).
When you ship a change, compare cohorts before and after:
- Before change (Jan-Mar avg): Month 1 retention = 58%
- After change (Apr-May avg): Month 1 retention = 69%
- Improvement: +11 percentage points → significant positive impact
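Before declaring an 11-point jump a win, check that the cohort sizes support it. A two-proportion z-test is a reasonable sanity check; this sketch uses only the standard library, and the cohort sizes are hypothetical:

```python
from math import erf, sqrt

# Two-proportion z-test: is a retention change real or sampling noise?
# Cohort sizes below are hypothetical.

def retention_change_pvalue(n_before, retained_before, n_after, retained_after):
    p1 = retained_before / n_before
    p2 = retained_after / n_after
    pooled = (retained_before + retained_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 58% of 400 users retained before, 69% of 300 after (both hypothetical).
p = retention_change_pvalue(400, 232, 300, 207)
print(f"p = {p:.4f}")  # well below 0.05, so unlikely to be noise
```

With cohorts of only a few dozen users, the same 11-point swing would not clear the 0.05 bar, which is why the rules of thumb below assume reasonably sized cohorts.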
Rules of thumb:
- 10 percentage point change: major win, double down on this direction
Use these when the user says "my [metric] is bad, what do I do?"
```
Low impressions
├── Are you ranking for any keywords?
│   ├── NO → ASO problem: optimize title, subtitle, keywords
│   │        (see the keyword-optimizer skill)
│   └── YES → Are those keywords high-volume?
│       ├── NO → Target higher-volume keywords
│       └── YES → Are you ranking in the top 10?
│           ├── NO → Improve rankings (more ratings, better conversion)
│           └── YES → Expand to more keywords or new markets
```
```
Low tap-through rate
├── Is your icon professional and distinctive?
│   ├── NO → Redesign icon (test 3 variants)
│   └── YES → Is your title clear and keyword-rich?
│       ├── NO → Rewrite title: [Brand] - [Value Keyword]
│       └── YES → Is your subtitle compelling?
│           ├── NO → Rewrite subtitle with a specific benefit
│           └── YES → Check competitor positioning: are you differentiated?
```
```
Poor day-1 retention
├── Is onboarding completion rate > 70%?
│   ├── NO → Simplify onboarding (fewer steps, skip option)
│   └── YES → Do users reach the "aha moment" in the first session?
│       ├── NO → Restructure first-run experience to show core value immediately
│       └── YES → Are there performance issues (crashes, slow load)?
│           ├── YES → Fix stability first (check crash reports)
│           └── NO → Does the app match what the screenshots promised?
│               ├── NO → Align marketing with the actual product
│               └── YES → Core value may not be strong enough; user research needed
```
```
Low monetization
├── Do users see the paywall?
│   ├── NO → Add natural paywall touchpoints (feature gates, usage limits)
│   └── YES → Is the paywall compelling?
│       ├── NO → Redesign paywall (show value, social proof, feature comparison)
│       └── YES → Is the price right?
│           ├── TOO HIGH → Test a lower price point or add a cheaper tier
│           ├── TOO LOW → Users may not perceive enough value; test a higher price
│           └── SEEMS RIGHT → Is the trial showcasing premium features?
│               ├── NO → Onboard users to premium features during the trial
│               └── YES → Test different trial lengths or offer types
```
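Trees like these can also be walked mechanically against a metrics snapshot. A sketch of the day-1 retention tree above; every field name in the snapshot dict is invented for illustration:

```python
# Mechanical walk of the "poor day-1 retention" tree above.
# All field names in the snapshot dict are illustrative.

def diagnose_d1(snapshot: dict) -> str:
    if snapshot["onboarding_completion"] <= 0.70:
        return "Simplify onboarding (fewer steps, skip option)"
    if not snapshot["aha_in_first_session"]:
        return "Restructure first-run experience to show core value immediately"
    if snapshot["crash_free_rate"] < 0.99:
        return "Fix stability first (check crash reports)"
    if not snapshot["matches_screenshots"]:
        return "Align marketing with the actual product"
    return "Core value may not be strong enough; user research needed"

print(diagnose_d1({
    "onboarding_completion": 0.82,
    "aha_in_first_session": False,
    "crash_free_rate": 0.995,
    "matches_screenshots": True,
}))  # → "Restructure first-run experience to show core value immediately"
```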
Based on the overall picture, recommend one of four paths:
### Invest

Signals:
Action: Increase development speed, consider marketing spend, expand to new platforms.
### Iterate

Signals:
Action: Focus on the retention cliff. Find what retained users do differently and make all users do that. A/B test paywall and onboarding.
### Pivot

Signals:
Action: Double down on the unexpected use case. Rebuild around what users actually do, not what you planned.
### Sunset

Signals:
Action: Put app in maintenance mode. Stop active development. Consider open-sourcing or selling. Redirect energy to next project.
Important caveat: Sunsetting is not failure. Most successful indie developers shipped several apps before finding the one that worked.
See references/metrics-reference.md for supporting detail.
Present analysis as an Analytics Health Report:
```markdown
# Analytics Health Report: [App Name]

## Overview

**App type:** [Free/Freemium/Subscription/Paid]
**Stage:** [Pre-launch/Early/Growing/Established]
**Data period:** [Date range analyzed]

## Funnel Health

| Stage | Metric | Value | Rating | Action |
|-------|--------|-------|--------|--------|
| Acquisition | Impressions/day | X,XXX | 🟢/🟡/🔴 | ... |
| Acquisition | Tap-through rate | X.X% | 🟢/🟡/🔴 | ... |
| Activation | Conversion rate | X.X% | 🟢/🟡/🔴 | ... |
| Retention | D1 retention | XX% | 🟢/🟡/🔴 | ... |
| Retention | D7 retention | XX% | 🟢/🟡/🔴 | ... |
| Retention | D30 retention | XX% | 🟢/🟡/🔴 | ... |
| Revenue | Conversion rate | X.X% | 🟢/🟡/🔴 | ... |
| Revenue | LTV | $XX.XX | 🟢/🟡/🔴 | ... |

## Primary Bottleneck

**[Stage name]** — [One sentence explanation of the biggest problem]

## Recommended Actions (Priority Order)

1. 🔴 [Critical fix] — Expected impact: [X]
2. 🟠 [High priority] — Expected impact: [X]
3. 🟡 [Medium priority] — Expected impact: [X]

## Overall Assessment

**Recommendation:** [Invest / Iterate / Pivot / Sunset]
**Rationale:** [2-3 sentences]
```