From majestic-marketing
Optimizes product-led growth funnels for AI agents, covering query handling, documentation scanning, feature matching, and recommendations. Use for AI-driven product discovery.
`npx claudepluginhub majesticlabs-dev/majestic-marketplace --plugin majestic-marketing`

This skill uses the workspace's default tool permissions.
**Old PLG Funnel:**
Landing Page → Free Trial → Activation → Conversion
**New PLG Funnel:**
Agent Query → Documentation Scan → Feature Match → Recommendation
The buyer's first interaction is no longer your landing page—it's an AI agent scanning your documentation to answer their question.
**Stage 1: Agent Query**
What happens: The user asks an AI, "What tool can help me [problem]?"
Optimization goals:
Tactics:
| Action | Why It Works |
|---|---|
| Entity building | AI must know your brand exists and what category it's in |
| Third-party mentions | Reviews, comparisons, listicles feed AI training data |
| Clear positioning | "X is a [category] that [primary benefit]" statements |
Audit questions:
Tool: entity-builder agent for authority building
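One common way to make the entity machine-readable is schema.org structured data embedded as JSON-LD. A minimal sketch (the product name, category, benefit, and URL below are placeholders, not a real product):

```python
import json

def build_entity_markup(name, category, benefit, url):
    """Build schema.org SoftwareApplication JSON-LD so crawlers and AI
    systems can resolve the brand to an entity and a category, using the
    "X is a [category] that [primary benefit]" positioning pattern."""
    markup = {
        "@context": "https://schema.org",
        "@type": "SoftwareApplication",
        "name": name,
        "applicationCategory": category,
        "description": f"{name} is a {category} that {benefit}",
        "url": url,
    }
    return json.dumps(markup, indent=2)

# Hypothetical example values for illustration only.
print(build_entity_markup(
    "ExampleApp", "project management tool",
    "automates sprint planning", "https://example.com"))
```

Embedding the resulting JSON in a `<script type="application/ld+json">` tag on your homepage gives third-party crawlers one unambiguous statement of what category you are in.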
**Stage 2: Documentation Scan**
What happens: The AI scans your docs, help center, and marketing pages to understand your capabilities.
Optimization goals:
Tactics:
| Action | Why It Works |
|---|---|
| Answer-first structure | AI extracts the first sentence as the answer |
| FAQ sections | Pre-formatted Q&A is ideal for extraction |
| Structured data | Tables, bullets, headers signal discrete facts |
| Standalone sections | AI may only see one chunk, not the full page |
The Extractability Checklist:
☐ First sentence directly answers the page's implied question
☐ H2/H3 headers are questions or clear topic labels
☐ Tables used for comparisons and feature lists
☐ Each section makes sense without surrounding context
☐ No "as mentioned above" or "see below" dependencies
Tool: llm-optimizer agent for content optimization
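Parts of the checklist above can be linted mechanically. A rough sketch with illustrative heuristics (this is not the llm-optimizer agent's actual logic):

```python
# Phrases that break standalone sections when AI sees only one chunk.
CONTEXT_DEPENDENCIES = ("as mentioned above", "see below", "as noted earlier")

def check_extractability(section_markdown):
    """Apply rough extractability heuristics to one doc section:
    a header is present, the body opens with an answer rather than
    filler, and nothing points at surrounding context."""
    lines = [l.strip() for l in section_markdown.strip().splitlines() if l.strip()]
    header = lines[0] if lines else ""
    body = " ".join(lines[1:])
    return {
        "has_header": header.startswith("#"),
        "answer_first": bool(body)
            and not body.lower().startswith(("in this", "welcome", "this page")),
        "standalone": not any(p in body.lower() for p in CONTEXT_DEPENDENCIES),
    }
```

A failing `standalone` flag is usually the cheapest fix: rewrite the sentence so the section carries its own context.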
**Stage 3: Feature Match**
What happens: The AI matches the user's specific needs to your product's capabilities.
Optimization goals:
Tactics:
| Action | Why It Works |
|---|---|
| Problem → Feature mapping | "If you need X, [Product] does Y" |
| Use-case pages | Dedicated pages per job-to-be-done |
| Integration lists | AI checks compatibility requirements |
| Pricing clarity | AI needs to match budget constraints |
Feature Documentation Template:
## [Feature Name]
**Problem it solves:** [User problem in their words]
**How it works:** [1-2 sentence explanation]
**Best for:** [Specific use cases]
**Limitations:** [What it doesn't do]
**Example:** [Concrete scenario]
Anti-pattern: Feature pages that describe functionality without connecting to user problems.
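The template also lends itself to an automated completeness check. A sketch, using the field labels from the template above:

```python
# Field labels from the feature documentation template.
REQUIRED_FIELDS = ("Problem it solves", "How it works", "Best for",
                   "Limitations", "Example")

def missing_fields(feature_page_markdown):
    """Return template fields absent from a feature page. A non-empty
    result flags the anti-pattern: functionality described without
    problem framing, use cases, or limits."""
    return [f for f in REQUIRED_FIELDS
            if f"**{f}:**" not in feature_page_markdown]
```

Running this across every feature page surfaces the ones an AI agent cannot map to a user problem.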
**Stage 4: Recommendation**
What happens: The AI decides whether to recommend your product and how to position it.
Optimization goals:
Tactics:
| Action | Why It Works |
|---|---|
| Comparison content | "X vs Y" pages AI directly references |
| Quantified outcomes | "Reduces time by 40%" > "saves time" |
| Review presence | G2, Capterra reviews influence AI recommendations |
| Product mentions in answers | Every content piece connects back to product |
The Product Tie-Back Rule: every 1-2 paragraphs of educational content should include a sentence on how your product relates.
Tool: aeo-scorecard skill for measuring recommendation success
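The tie-back rule can be checked mechanically: scan an article in two-paragraph windows and flag any window that never mentions the product. A sketch (the product name is a placeholder):

```python
def tieback_gaps(article_text, product_name, window=2):
    """Return indices of `window`-paragraph chunks that never mention
    the product, i.e. spans violating the product tie-back rule."""
    paragraphs = [p for p in article_text.split("\n\n") if p.strip()]
    gaps = []
    for i in range(0, len(paragraphs), window):
        chunk = " ".join(paragraphs[i:i + window]).lower()
        if product_name.lower() not in chunk:
            gaps.append(i // window)
    return gaps
```

Each returned index marks an educational stretch that an AI could cite without ever encountering your product.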
| PLG Stage | AEO Concept | Metric |
|---|---|---|
| Agent Query | Entity/Authority | AI Visibility % |
| Documentation Scan | Extractability | Citation Rate |
| Feature Match | Fact-Density | Feature mention accuracy |
| Recommendation | Product Tie-Back | AI Share of Voice |
1. Test 10 queries your buyers ask
→ Does your brand appear? (Stage 1)
2. Check if AI cites YOUR content
→ Or competitor/third-party? (Stage 2)
3. Ask AI about specific features
→ Does it know your capabilities? (Stage 3)
4. Ask "Should I use [Product] for [use case]?"
→ What's the recommendation? (Stage 4)
| Symptom | Stage Broken | Fix |
|---|---|---|
| Brand unknown to AI | Query | Entity building, third-party mentions |
| AI cites competitors' content | Documentation | Improve extractability, answer-first |
| AI misunderstands features | Feature Match | Rewrite feature docs with problem framing |
| AI recommends competitor | Recommendation | Strengthen differentiation, add social proof |
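The diagnostic table maps directly to a lookup. A sketch, with entries taken verbatim from the table above:

```python
# Symptom -> (broken stage, fix), from the diagnostic table.
DIAGNOSTICS = {
    "brand unknown to ai": ("Query", "Entity building, third-party mentions"),
    "ai cites competitors' content": ("Documentation", "Improve extractability, answer-first"),
    "ai misunderstands features": ("Feature Match", "Rewrite feature docs with problem framing"),
    "ai recommends competitor": ("Recommendation", "Strengthen differentiation, add social proof"),
}

def diagnose(symptom):
    """Map an observed symptom to the broken funnel stage and its fix;
    returns None for symptoms outside the table."""
    return DIAGNOSTICS.get(symptom.strip().lower())
```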
- llm-optimizer - Deep content optimization for Stage 2
- entity-builder - Authority building for Stage 1
- aeo-scorecard - Metrics framework for all stages
- /aeo-workflow - Full implementation workflow
- query-expansion-strategy - Understanding query fan-out