pm-data
Builds tailored metrics frameworks for products or businesses, from North Star metric and metric tree to counter-metrics and dashboards. Use for KPI trees, AARRR, HEART, or OKR requests.
npx claudepluginhub mohitagw15856/pm-claude-skills --plugin pm-data

This skill uses the workspace's default tool permissions.
This skill builds a complete metrics framework tailored to a product or business. It connects the North Star metric to actionable leading indicators, making it clear which metrics to track, which to optimise, and how they relate to each other.
Ask the user for the following if not provided: a description of the product or business, its stage, the business model, and any framework preference.
If no framework preference is given, recommend the best fit based on stage and business model.
Explain in 2–3 sentences why you're recommending this framework for their context.
[Metric Name]: [Definition — exactly what is measured and how]
Why this is the right North Star for this business: [2–3 sentences. It should reflect customer value delivered, not just revenue or activity. Explain what behaviour it captures and why maximising it correlates with long-term business health.]
How to measure it: [Formula or data source]
Current baseline: [Leave as [ADD BASELINE] for the user to fill]
Target: [Leave as [ADD TARGET] for the user to fill]
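As an illustration of the "how to measure it" formula, here is a minimal sketch, assuming a hypothetical North Star of "weekly users who complete a core value action" and a hypothetical event name `core_action`, computed from raw event rows:

```python
from datetime import date, timedelta

def weekly_north_star(events, week_start):
    """Count distinct users who performed the core value action
    ("core_action", a hypothetical event name) in the given week."""
    week_end = week_start + timedelta(days=7)
    return len({
        e["user_id"]
        for e in events
        if e["event"] == "core_action" and week_start <= e["ts"] < week_end
    })

events = [
    {"user_id": 1, "event": "core_action", "ts": date(2024, 1, 1)},
    {"user_id": 1, "event": "core_action", "ts": date(2024, 1, 2)},  # same user, counted once
    {"user_id": 2, "event": "signup", "ts": date(2024, 1, 3)},       # not the core action
    {"user_id": 3, "event": "core_action", "ts": date(2024, 1, 9)},  # outside this week
]
print(weekly_north_star(events, date(2024, 1, 1)))  # 1
```

Counting distinct users (a set of `user_id`s) rather than raw events keeps the metric closer to "customers receiving value" than to "activity volume".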
Show how supporting metrics roll up to the North Star. Format as a hierarchy:
[North Star Metric]
├── [Driver 1: e.g. Acquisition]
│   ├── [L2 metric: e.g. Organic signups / week]
│   └── [L2 metric: e.g. Paid CAC by channel]
├── [Driver 2: e.g. Activation]
│   ├── [L2 metric: e.g. % users completing onboarding within 7 days]
│   └── [L2 metric: e.g. Time to first value action]
└── [Driver 3: e.g. Retention]
    ├── [L2 metric: e.g. Day 30 retention rate]
    └── [L2 metric: e.g. Feature adoption depth]
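The hierarchy above can also be held as data rather than prose, which makes it easy to keep the tree and the dashboard in sync. A minimal sketch (the North Star, driver, and metric names are illustrative, not prescribed by this skill):

```python
metric_tree = {
    "North Star: Weekly Active Learners": {
        "Acquisition": ["Organic signups / week", "Paid CAC by channel"],
        "Activation": ["% users completing onboarding within 7 days",
                       "Time to first value action"],
        "Retention": ["Day 30 retention rate", "Feature adoption depth"],
    }
}

def render(tree):
    """Pretty-print the metric tree in the same box-drawing style as above."""
    lines = []
    for north_star, drivers in tree.items():
        lines.append(north_star)
        driver_items = list(drivers.items())
        for i, (driver, metrics) in enumerate(driver_items):
            last_driver = i == len(driver_items) - 1
            lines.append(("└── " if last_driver else "├── ") + driver)
            stem = "    " if last_driver else "│   "
            for j, metric in enumerate(metrics):
                branch = "└── " if j == len(metrics) - 1 else "├── "
                lines.append(stem + branch + metric)
    return "\n".join(lines)

print(render(metric_tree))
```

Storing the tree as a dict of drivers to L2 metrics means adding a driver or metric is a one-line change, and the rendered hierarchy always reflects what is actually tracked.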
For each L2 metric, provide:
[2–3 metrics to watch that prevent optimising the North Star in ways that damage the business. E.g. "If we optimise for signups, we need to watch spam account rate. If we optimise for engagement, we need to watch support ticket volume."]
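One way to operationalise counter-metrics is a guardrail check that flags a North Star "win" whenever a paired counter-metric crosses its acceptable limit. A minimal sketch, with hypothetical metric names and thresholds:

```python
def breached_guardrails(snapshot, guardrails):
    """Return the counter-metrics that exceed their limits.
    `snapshot` maps metric name -> current value; `guardrails` maps a
    counter-metric name -> its maximum acceptable value."""
    return [name for name, limit in guardrails.items()
            if snapshot.get(name, 0) > limit]

# Signups are up, but the spam-account guardrail (hypothetical 5% limit) is breached.
snapshot = {"signups_per_week": 1200, "spam_account_rate": 0.08}
print(breached_guardrails(snapshot, {"spam_account_rate": 0.05}))  # ['spam_account_rate']
```

A breached guardrail does not mean the North Star number is wrong, only that the gain should not be celebrated until the counter-metric is investigated.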
Suggest a 3-tier dashboard structure:
[5 questions the team should ask in their weekly metrics review to turn numbers into insights. E.g. "Is our activation rate improving while retention stays flat? That points to an onboarding-quality issue, not a product-market-fit problem."]
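Review questions of this shape can be partially automated. A minimal sketch that turns week-over-week deltas into a review prompt for the activation-versus-retention pattern; the metric names and the 1% flatness threshold are hypothetical:

```python
def review_flags(wow_deltas, threshold=0.01):
    """Turn week-over-week metric deltas into review prompts.
    Flags the pattern where activation moves while retention stays flat.
    Metric names ("activation_rate", "d30_retention") are hypothetical."""
    flags = []
    activation = wow_deltas.get("activation_rate", 0)
    retention = wow_deltas.get("d30_retention", 0)
    if activation > threshold and abs(retention) <= threshold:
        flags.append("activation improving while retention is flat: discuss in weekly review")
    return flags

print(review_flags({"activation_rate": 0.04, "d30_retention": 0.002}))
```

The automation only surfaces the pattern; interpreting it (onboarding quality versus product-market fit) still belongs to the humans in the review.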