plg-ai-funnel

Install

Install the plugin:

    npx claudepluginhub majesticlabs-dev/majestic-marketplace --plugin majestic-marketing

Description

Framework for Product-Led Growth in the AI agent era. Use when optimizing how AI agents discover and recommend your product, designing self-service activation flows, or building documentation for AI-driven discovery. Covers the funnel from agent query to recommendation.

Tool Access

This skill uses the workspace's default tool permissions.

Skill Content

PLG AI Funnel: Product-Led Growth in the Agent Era

The Paradigm Shift

Old PLG Funnel:

Landing Page → Free Trial → Activation → Conversion

New PLG Funnel:

Agent Query → Documentation Scan → Feature Match → Recommendation

The buyer's first interaction is no longer your landing page—it's an AI agent scanning your documentation to answer their question.

The Four Stages

Stage 1: Agent Query

What happens: User asks AI "What tool can help me [problem]?"

Optimization goals:

  • Brand appears in AI's consideration set
  • Correct category association
  • Problem-solution mapping exists in AI's knowledge

Tactics:

| Action | Why It Works |
| --- | --- |
| Entity building | AI must know your brand exists and what category it's in |
| Third-party mentions | Reviews, comparisons, listicles feed AI training data |
| Clear positioning | "X is a [category] that [primary benefit]" statements |

Audit questions:

  • Does AI know your brand when asked directly?
  • Does AI associate your brand with your category?
  • Do competitors appear but you don't?

Tool: entity-builder agent for authority building

Stage 2: Documentation Scan

What happens: AI scans your docs, help center, marketing pages to understand capabilities.

Optimization goals:

  • Content is AI-extractable (chunked, structured)
  • Answers are front-loaded (not buried)
  • Each page passes the "Taco Bell Test" (stands alone)

Tactics:

| Action | Why It Works |
| --- | --- |
| Answer-first structure | AI extracts the first sentence as the answer |
| FAQ sections | Pre-formatted Q&A is ideal for extraction |
| Structured data | Tables, bullets, headers signal discrete facts |
| Standalone sections | AI may only see one chunk, not the full page |

The Extractability Checklist:

☐ First sentence directly answers the page's implied question
☐ H2/H3 headers are questions or clear topic labels
☐ Tables used for comparisons and feature lists
☐ Each section makes sense without surrounding context
☐ No "as mentioned above" or "see below" dependencies
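The checklist lends itself to automation. A minimal lint sketch in Python (the phrase list, function name, and thresholds are illustrative assumptions, not part of the skill):

```python
import re

# Flags cross-reference phrases that break standalone chunks, overlong
# headers, and first sentences too long to serve as a direct answer.
CROSS_REFS = re.compile(r"\b(as mentioned above|see below|as discussed earlier)\b", re.I)

def lint_section(header: str, body: str) -> list[str]:
    """Return a list of extractability issues for one doc section."""
    issues = []
    if CROSS_REFS.search(body):
        issues.append("contains cross-reference phrase; section is not standalone")
    if len(header.split()) > 8 and not header.endswith("?"):
        issues.append("header is neither a question nor a short topic label")
    first_sentence = body.strip().split(".")[0]
    if len(first_sentence.split()) > 40:
        issues.append("first sentence too long to serve as a direct answer")
    return issues
```

Run it over each H2/H3 section of a page; an empty list means the section passes this subset of the checklist.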

Tool: llm-optimizer agent for content optimization

Stage 3: Feature Match

What happens: AI matches user's specific needs to your product's capabilities.

Optimization goals:

  • Features described in user-problem terms
  • Use cases explicitly mapped to capabilities
  • Limitations clearly stated (builds trust)

Tactics:

| Action | Why It Works |
| --- | --- |
| Problem → Feature mapping | "If you need X, [Product] does Y" |
| Use-case pages | Dedicated pages per job-to-be-done |
| Integration lists | AI checks compatibility requirements |
| Pricing clarity | AI needs to match budget constraints |

Feature Documentation Template:

## [Feature Name]

**Problem it solves:** [User problem in their words]

**How it works:** [1-2 sentence explanation]

**Best for:** [Specific use cases]

**Limitations:** [What it doesn't do]

**Example:** [Concrete scenario]

Anti-pattern: Feature pages that describe functionality without connecting to user problems.

Stage 4: Recommendation

What happens: AI decides whether to recommend your product and how to position it.

Optimization goals:

  • Clear differentiation from alternatives
  • Social proof AI can cite
  • Product tie-backs throughout content

Tactics:

| Action | Why It Works |
| --- | --- |
| Comparison content | "X vs Y" pages AI directly references |
| Quantified outcomes | "Reduces time by 40%" > "saves time" |
| Review presence | G2, Capterra reviews influence AI recommendations |
| Product mentions in answers | Every content piece connects back to product |

The Product Tie-Back Rule: Every 1-2 paragraphs of educational content should include how your product relates.

  • ❌ "Lead scoring helps prioritize prospects"
  • ✅ "Lead scoring helps prioritize prospects—[Product] automates this with AI-powered scoring"
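The tie-back rule can be checked mechanically. A rough sketch (the default 2-paragraph gap and the function name are assumptions for illustration):

```python
def tie_back_gaps(paragraphs: list[str], product: str, max_gap: int = 2) -> list[int]:
    """Return indices of paragraphs that start a run of more than
    max_gap consecutive paragraphs with no product mention."""
    gaps, run_start, run_len = [], 0, 0
    for i, para in enumerate(paragraphs):
        if product.lower() in para.lower():
            run_len = 0  # mention found; reset the gap counter
        else:
            if run_len == 0:
                run_start = i  # first paragraph of a new mention-free run
            run_len += 1
            if run_len == max_gap + 1:
                gaps.append(run_start)  # run just exceeded the allowed gap
    return gaps
```

Each returned index marks where an educational stretch drifted too far from the product and needs a tie-back added.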

Tool: aeo-scorecard skill for measuring recommendation success

PLG × AEO Integration

| PLG Stage | AEO Concept | Metric |
| --- | --- | --- |
| Agent Query | Entity/Authority | AI Visibility % |
| Documentation Scan | Extractability | Citation Rate |
| Feature Match | Fact-Density | Feature mention accuracy |
| Recommendation | Product Tie-Back | AI Share of Voice |
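Two of these metrics can be computed directly from collected audit runs. A minimal sketch, assuming each query's AI answer has already been reduced to a list of brand names it mentioned (the function names are illustrative):

```python
def ai_visibility(answers: list[list[str]], brand: str) -> float:
    """Visibility %: fraction of queries where the brand appears at all."""
    if not answers:
        return 0.0
    hits = sum(1 for brands in answers if brand in brands)
    return hits / len(answers)

def ai_share_of_voice(answers: list[list[str]], brand: str) -> float:
    """Share of Voice: your brand's share of all brand mentions
    across AI answers (0.0-1.0)."""
    total = sum(len(brands) for brands in answers)
    ours = sum(brands.count(brand) for brands in answers)
    return ours / total if total else 0.0
```

For example, across the runs `[["Acme", "Rival"], ["Rival"], ["Acme"]]`, "Acme" has 2/3 visibility but only 0.5 share of voice.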

Quick Audit Workflow

1. Test 10 queries your buyers ask
   → Does your brand appear? (Stage 1)

2. Check if AI cites YOUR content
   → Or competitor/third-party? (Stage 2)

3. Ask AI about specific features
   → Does it know your capabilities? (Stage 3)

4. Ask "Should I use [Product] for [use case]?"
   → What's the recommendation? (Stage 4)
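The four-step audit above can be captured as a query plan to run against your AI assistant of choice. A hedged sketch (prompt wording and key names are illustrative assumptions):

```python
def audit_queries(product: str, category: str, problem: str, use_case: str) -> dict[str, list[str]]:
    """Map each funnel stage to the prompts that test it."""
    return {
        # Stage 1: does the brand appear in the consideration set?
        "stage1_query": [
            f"What tools can help me {problem}?",
            f"What are the best {category} tools?",
        ],
        # Stage 2: does the AI cite YOUR content?
        "stage2_docs": [
            f"According to {product}'s documentation, how does it handle {problem}?",
        ],
        # Stage 3: does the AI know your capabilities?
        "stage3_features": [
            f"What features does {product} offer for {use_case}?",
        ],
        # Stage 4: what does the AI actually recommend?
        "stage4_recommendation": [
            f"Should I use {product} for {use_case}?",
        ],
    }
```

Run each prompt against the assistants your buyers use, then score the answers with the Stage 1-4 questions above.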

Common PLG AI Gaps

| Symptom | Stage Broken | Fix |
| --- | --- | --- |
| Brand unknown to AI | Agent Query | Entity building, third-party mentions |
| AI cites competitors' content | Documentation Scan | Improve extractability; lead with the answer |
| AI misunderstands features | Feature Match | Rewrite feature docs with problem framing |
| AI recommends competitor | Recommendation | Strengthen differentiation, add social proof |

Related Tools

  • llm-optimizer - Deep content optimization for Stage 2
  • entity-builder - Authority building for Stage 1
  • aeo-scorecard - Metrics framework for all stages
  • /aeo-workflow - Full implementation workflow
  • query-expansion-strategy - Understanding query fan-out
Stats

30 stars · 6 forks · last commit Mar 11, 2026