Guides PLG strategies for developer tools: PLG vs sales-led decisions with test data, activation optimization, freemium conversion, growth equations, and complexity checks.
Build self-serve acquisition and expansion motions. But first, figure out if PLG is even the right motion for your product.
What I Learned Running Both Motions in Parallel:
Classic startup debate. PLG camp: "Developers want self-serve." Sales camp: "Enterprises need hand-holding." Instead of arguing, we tested both for 6 months. Same product, two GTM motions, tracked everything.
The Results:
PLG: High volume, low ACV ($5K), fast time-to-revenue, higher churn. Sales-led: Lower volume, high ACV ($50K), slower time-to-revenue, lower churn. Sales won 10x on dollars despite 10x less volume.
Why: Product complexity + buyer seniority = sales-led wins. The product required integration with existing infrastructure, change management across teams, and multi-stakeholder alignment. Developers loved self-serve. But they weren't the economic buyer.
PLG works when: users reach value in under 10 minutes without docs, implementation is self-serve, and the buyer is the user.
Sales-led works when: the product requires integration with existing infrastructure, change management across teams, or multi-stakeholder alignment, and the economic buyer isn't the user.
Before building PLG, test your motion. Don't assume PLG is better because it's trendy. PLG is efficient at volume, but sales-led can be more profitable with complexity.
The Pattern:
Growth compounds when you systematize the relationship between activities and user acquisition. Not "do more marketing" — map specific inputs to measurable outputs.
How to Build Your Growth Equation:
For each channel, define: Activity (input) → Traffic (output) → Conversions.
Why This Matters:
Once you validate the equation, scaling becomes math. "I need 200 more users next month" → "I need 10 more blog posts" or "I need $5K more ad spend." Without the equation, you're guessing.
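The scaling math above can be sketched as a small function. The channel names and funnel rates below are illustrative placeholders, not measured values; the point is that once you know visitors-per-activity and signup rate, "how many more blog posts do I need?" is arithmetic.

```python
import math

# Illustrative funnel rates -- replace with your own measured values per channel.
CHANNELS = {
    # channel: (visitors per unit of activity, visitor -> signup rate)
    "blog_post": (400, 0.05),      # ~20 new users per post
    "ads_per_1k_usd": (2000, 0.02),  # ~40 new users per $1K spend
}

def units_needed(channel: str, target_users: int) -> int:
    """Units of activity (posts, $1K ad blocks, ...) needed to hit a
    new-user target, given the channel's measured funnel rates."""
    visitors, signup_rate = CHANNELS[channel]
    users_per_unit = visitors * signup_rate
    return math.ceil(target_users / users_per_unit)

# "I need 200 more users next month" -> "I need 10 more blog posts"
print(units_needed("blog_post", 200))  # 10
```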
Testing the Equation: change one input at a time, hold the others steady, and check whether the output moves as predicted. If 5 extra blog posts don't produce roughly 5x one post's users, the equation isn't validated yet.
Common Mistake:
Guessing at conversion rates without testing. Assuming all users from the same channel are equal quality. Scaling before validating the equation.
The Pattern:
Every channel has economics. Without tracking them, you over-invest in losers and under-invest in winners.
Track Per Channel: CAC, conversion rate, 30-day and 90-day retention, LTV, and payback period.
The Decision Framework:
Monthly channel review: Which channels are profitable? Which are drains? Quarterly reallocation: 3x budget to winners, kill losers.
Critical Insight: Channel Quality Varies
Cheap CAC doesn't mean good CAC. Organic search might deliver users at $0 CAC with 85% 30-day retention. Paid search might deliver users at $12 CAC with 45% 30-day retention. The "free" channel is 10x more valuable when you factor in retention and LTV.
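One way to make this concrete is to compare channels on cost per *retained* user rather than raw CAC. A minimal sketch, using the organic-vs-paid numbers from the example above:

```python
def cost_per_retained_user(cac: float, retention_30d: float) -> float:
    """Effective cost of a user who is still active at day 30.
    A channel that churns everyone is infinitely expensive."""
    if retention_30d <= 0:
        return float("inf")
    return cac / retention_30d

# Organic search: $0 CAC, 85% 30-day retention
organic = cost_per_retained_user(cac=0.0, retention_30d=0.85)   # 0.0
# Paid search: $12 CAC, 45% 30-day retention
paid = cost_per_retained_user(cac=12.0, retention_30d=0.45)     # ~26.67
```

The same CAC number hides very different economics once retention is factored in, which is why the monthly channel review has to look at both columns.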
Systematic Testing:
Test 2 new channels monthly. Give each 4 weeks of data. Kill decisively if economics don't work. Document learnings regardless of outcome — what didn't work is as valuable as what did.
Common Mistake:
Tracking CAC without retention. A cheap channel that churns users costs more than an expensive channel that retains them.
The Pattern:
Users decide product value in the first 5-10 minutes. If they don't reach the aha moment fast, they abandon.
The Activation Audit: measure time-to-first-value (TTFV), the elapsed time from sign-up to the first aha moment, for a cohort of new users.
If TTFV > 10 minutes, you have an activation problem.
Before: Sign up → confirm email → fill profile → configure settings → read docs → first action
After: Sign up → pre-loaded sample data → first action (immediate aha moment)
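TTFV is just the gap between two event timestamps. A minimal sketch, assuming hypothetical `signed_up` and `first_value` events from your analytics pipeline:

```python
from datetime import datetime, timedelta

def ttfv_minutes(events: dict) -> float:
    """Minutes from sign-up to the first value event.
    `events` maps event name -> timestamp; names are hypothetical."""
    delta = events["first_value"] - events["signed_up"]
    return delta.total_seconds() / 60

signup = datetime(2024, 1, 1, 9, 0)
events = {"signed_up": signup,
          "first_value": signup + timedelta(minutes=23)}

minutes = ttfv_minutes(events)
if minutes > 10:
    print(f"Activation problem: TTFV is {minutes:.0f} min")
```

Run this over a cohort, not a single user; the median and the drop-off tail matter more than any one session.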
Specific Fixes:
Common Mistake:
Assuming users will read documentation. They won't. They'll click around for 5 minutes, and if nothing works, they leave.
The Pattern:
PLG works for $1K-$10K ARR. Between $20K and $50K, the motion breaks because organizational friction kicks in: procurement, legal, security, and multi-stakeholder buy-in.
The Hybrid Approach:
PLG ($0-$10K): Self-serve sign-up → free tier → paid tier → credit card checkout → automated onboarding
Sales-Assisted ($10K-$50K): Self-serve discovery → sales engages on usage signals → human-negotiated contract → dedicated onboarding
Enterprise ($50K+): Outbound or inbound lead → demo → POC → proposal → legal/security review → executive sponsor
PQL Signals (When to Trigger Sales): usage depth (heavy feature use across the account), expansion (multiple active users), and buying signals (SSO or compliance requests).
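A PQL trigger can be sketched as a simple rule over account attributes. The field names and thresholds below are illustrative assumptions; tune them against your own conversion data.

```python
def is_pql(account: dict) -> bool:
    """Product-qualified lead check. Thresholds are placeholders --
    calibrate against accounts that actually converted."""
    usage_depth = account.get("weekly_active_repos", 0) >= 10
    expansion = account.get("active_users", 0) >= 5
    buying_signal = (account.get("requested_sso", False)
                     or account.get("requested_compliance_docs", False))
    # Require real usage plus at least one other signal before sales
    # reaches out -- engaging earlier kills the PLG motion.
    return usage_depth and (expansion or buying_signal)

print(is_pql({"weekly_active_repos": 12, "active_users": 6}))  # True
print(is_pql({"weekly_active_repos": 2}))                      # False
```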
The Handoff:
Bad: "Hey, I saw you signed up." (Cold, generic, kills trust)
Good: "Your team is using [specific feature] across 12 repos. We can help you [specific value]. Want 15 minutes?" (Warm, specific, offers value)
Common Mistake:
Sales engaging too early on <$5K deals. Kills PLG motion, scares users. Let them self-serve until they need help.
The Pattern:
Forecasts are always wrong. Plans are still valuable because they force thinking and create accountability.
Model Three Scenarios:
- Baseline: current trajectory continues.
- Upside: all growth initiatives execute.
- Downside: key channels fail.
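The three scenarios can be modeled with simple monthly compounding. The starting MRR and growth rates below are placeholders for your own numbers:

```python
def forecast(start_mrr: float, monthly_growth: float, months: int) -> float:
    """Compound a starting MRR at a flat monthly growth rate."""
    return start_mrr * (1 + monthly_growth) ** months

# Illustrative rates -- replace with your own trajectory estimates.
scenarios = {"baseline": 0.05, "upside": 0.12, "downside": 0.01}

for name, growth in scenarios.items():
    mrr = forecast(100_000, growth, 12)
    print(f"{name}: ${mrr:,.0f} MRR in 12 months")
```

The spread between downside and upside is the useful output: it's the range you plan against, not a single number to hit.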
Use This For: growth-dependent decisions (budget, hiring, runway) and for the accountability the plan exists to create.
Monthly Update: Compare forecast to actual. Adjust model. Don't forecast-and-forget.
Common Mistake:
Overly optimistic forecasts that assume everything works. Not updating monthly. Treating forecast as target (it's a range, not a number).
The Pattern:
Knowledge dies with people. The goal isn't one-off wins — it's systematizing what works.
After every successful campaign or experiment, write a 1-page playbook:
PLAYBOOK: [Channel/Tactic Name]
Goal: [What outcome]
Steps: [Numbered, specific enough for someone unfamiliar]
Expected Output: [Specific metrics]
Metrics to Track: [How to measure]
Risks & Mitigations: [What could go wrong]
Owner: [Name]
Last Updated: [Date]
The Test: Could someone who wasn't involved execute this playbook? If not, it's too vague.
Review quarterly. Remove playbooks that no longer work. Update ones that have evolved. This becomes your growth operating system.
Common Mistake:
Running experiments without documenting learnings. Scaling before you understand the mechanism. Having growth knowledge trapped in one person's head.
Can users get value in <10 min without docs?
├─ No → Sales-led required
└─ Yes → Can they self-serve implementation?
   ├─ No → Sales-led required
   └─ Yes → Is buyer = user?
      ├─ No → Hybrid (PLG + sales-assist)
      └─ Yes → Pure PLG viable
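The readiness tree above translates directly into a function. Inputs are the yes/no answers from your own product audit:

```python
def gtm_motion(value_under_10_min: bool,
               self_serve_impl: bool,
               buyer_is_user: bool) -> str:
    """Pick a go-to-market motion from the three readiness questions."""
    if not value_under_10_min or not self_serve_impl:
        return "sales-led"
    return "pure PLG" if buyer_is_user else "hybrid (PLG + sales-assist)"

# Developers love the product but don't hold the budget:
print(gtm_motion(True, True, False))  # hybrid (PLG + sales-assist)
```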
CAC < (LTV × margin)?
├─ No → Kill within 4 weeks
└─ Yes → 90-day retention > 60%?
   ├─ No → Optimize (improve activation/onboarding)
   └─ Yes → Scale aggressively (3x budget)
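The channel tree is equally mechanical. A sketch, with `margin` as gross margin expressed as a fraction:

```python
def channel_decision(cac: float, ltv: float, margin: float,
                     retention_90d: float) -> str:
    """Apply the channel-economics decision tree to one channel."""
    if cac >= ltv * margin:
        return "kill within 4 weeks"
    if retention_90d <= 0.60:
        return "optimize activation/onboarding"
    return "scale aggressively (3x budget)"

# Illustrative numbers: $12 CAC, $300 LTV, 80% margin, 70% 90-day retention
print(channel_decision(cac=12, ltv=300, margin=0.8, retention_90d=0.7))
```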
1. Assuming PLG always works. Product complexity + buyer seniority = sales-led wins. Test before committing.
2. No channel economics. Every channel has CAC, retention, and LTV. Track them or you're flying blind.
3. Free tier too generous or too limited. Too generous: no conversion. Too limited: no activation. Allow 10-20 aha moments.
4. No growth equation. "Do more marketing" isn't a strategy. Map inputs → outputs → conversions per channel.
5. Scaling before validating. 4 weeks of data before scaling any channel. Kill decisively if economics don't work.
6. Growth knowledge in one person's head. Document every successful experiment as a playbook.
PLG readiness: Value in <10 min + self-serve implementation + buyer = user
Growth equation: Activity (input) → Traffic (output) → Conversions, per channel
Channel economics: CAC, conversion, 30/90-day retention, LTV, payback — per channel, monthly review
Kill criteria: CAC > (LTV × margin) → 4 weeks to improve, then kill
PQL signals: Usage depth + expansion (multi-user) + buying (SSO/compliance requests)
Sales handoff: <$10K: PLG → $10K-$50K: Sales-assist → >$50K: Full sales
Forecast: Baseline + Upside + Downside, updated monthly
Based on experience across multiple platform companies — leading a growth team building PLG and sales-led motions from scratch, and operating inside successful PLG + sales-led machines at hypergrowth companies. The combination taught both sides: what it takes to establish these motions early (when resources are thin and every bet matters) and what the mature version looks like at scale (growth equations, channel economics systems, freemium pricing gates, and systematic A/B testing that documents every win and loss into executable playbooks). Not theory — lessons from building the machine and operating inside ones that worked.