Delivers GTM strategies for AI products: positioning, accountability objections, variable-cost pricing, copilot/agent framing, and enterprise sales of autonomous tools.
From awesome-copilot. Install with `npx claudepluginhub ctr26/dotfiles --plugin awesome-copilot`. This skill uses the workspace's default tool permissions.
Go-to-market strategy for AI products. These aren't generic AI principles — they're patterns from selling autonomous AI agents into enterprises where "autonomous" scared buyers and "teammate" converted them.
Triggers:
Context:
What I Learned Selling Autonomous AI Agents:
Three months in, enterprise security reviews were passing fast. Good sign, right? Then the pattern emerged: security approved, but operations rejected us.
The objection wasn't "will the AI break production?" — they assumed it would break production eventually. The real question was:
"Who's responsible when the agent does something wrong?"
Not "do we trust the agent?" — "do we trust our team to handle this?"
Why This Matters:
Autonomous agents create a new operational burden. You're not selling AI capability, you're selling organizational readiness. When your agent halts production at 2am, who gets paged? Who fixes it? Who explains it to the VP?
Framework: The Accountability Cascade
Before deploying AI agents, enterprises need clear answers to three questions:
1. Who gets alerted when the agent takes a wrong action?
2. Who investigates what happened?
3. Who decides whether to roll back or fix forward?
If you can't answer all three, they won't buy. Doesn't matter how good your AI is.
How This Changes Your Sales Process:
Old approach:
New approach:
The Qualification Question:
"Walk me through what happens when the agent takes an action that breaks a workflow. Who gets alerted? Who investigates? Who decides whether to roll back or fix forward?"
If they can't answer, they're not ready. Pause the deal and help them build the process first.
Common Mistake:
Treating this as a product objection ("we'll make the AI more accurate"). It's an organizational objection. More accuracy doesn't solve "who owns this at 2am?"
Pattern I've Seen Work:
Companies that succeed with AI agents already have: incident response processes for tool failures, on-call rotations for production systems, and runbooks they can extend to cover the agent.
Companies that struggle: no clear ownership when tools fail, and an expectation that the vendor will guarantee the AI never fails.
Decision Criteria:
Before demoing autonomous AI to enterprises, ask yourself: "If this breaks their production, who on their team owns the fix?" If you can't answer, they can't buy.
The Positioning Trap:
In early enterprise conversations, we positioned the product as an "autonomous AI agent." Buyers flinched. One wording change ("autonomous agent" → "AI teammate") and deal progression improved measurably.
Why? Word choice shapes buyer psychology.
The Three Framings:
1. Copilot (Safest, Lowest Value)
2. Agent (Scariest, Highest Value)
3. Teammate (Sweet Spot)
The Positioning Shift:
Before: "Autonomous AI agent that handles complex workflows end-to-end"
After: "AI teammate that pairs with your engineers on complex tasks"
Specific Language Choices That Mattered:
❌ Don't say:
✅ Do say:
How to Choose Your Framing:
Does your AI make decisions without human approval?
├─ Yes → Are you selling to developers or enterprises?
│ ├─ Developers → "Agent" framing (they want autonomous)
│ └─ Enterprises → "Teammate" framing (they want control)
└─ No → "Copilot" framing (augmentation, not automation)
The Hard Truth:
You can build an agent but position it as a copilot. You can't build a copilot and position it as an agent. Product capabilities set the ceiling; positioning chooses where you land below it.
Common Mistake:
Using "autonomous" because it sounds impressive. Impressive ≠ trusted. If buyers flinch at your positioning, you've lost them.
The Pattern:
Every AI company I've worked with faces this: Customer A uses 1,000 API calls/month. Customer B uses 10,000. Do you charge Customer B 10x more? If yes, they churn. If no, your margins collapse.
The Three Models:
1. Seat-Based ($X per user/month)
2. Usage-Based ($X per API call / prediction / hour)
3. Outcome-Based ($X per outcome achieved)
What Actually Works (Hybrid):
Base fee (covers fixed costs) + variable fee (scales with value).
Example structure:
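A minimal sketch of one possible hybrid structure, with placeholder numbers (the $500 base fee, 1,000 included calls, and $0.10 per-call rate are illustrative assumptions, not pricing from these deals):

```python
# Hypothetical hybrid pricing: base fee plus usage above an included volume.
# All numbers are illustrative placeholders, not actual pricing.

BASE_FEE = 500.00        # flat monthly fee, covers fixed costs
INCLUDED_CALLS = 1_000   # usage bundled into the base fee
PER_CALL = 0.10          # variable fee per call beyond the included volume

def monthly_bill(calls: int) -> float:
    """Base fee plus a variable fee that scales with usage."""
    overage = max(0, calls - INCLUDED_CALLS)
    return BASE_FEE + overage * PER_CALL

# Customer A (1,000 calls/month) and Customer B (10,000 calls/month) from above:
print(monthly_bill(1_000))   # 500.0
print(monthly_bill(10_000))  # 1400.0 -> ~2.8x Customer A's bill, not 10x
```

Customer B pays about 2.8x Customer A instead of 10x: revenue scales with usage without punishing the heaviest (and most valuable) users.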
The Pricing Conversation I Wish I'd Had Earlier:
When pricing usage-based AI:
Ask the customer: "How much would it cost you to do this manually?"
If it's $0.10 per API call but saves them $2 in labor, you're underpriced. If it costs $0.50 per call but saves them $0.40, they won't use it enough to matter.
Pricing Rule:
Your variable cost should be 20-30% of customer's alternative cost. High enough to capture value, low enough that they'll use it liberally.
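A small sketch of that rule, reusing the dollar figures from the example above; the helper functions and the way the 20-30% band is applied here are illustrative, not part of any tool:

```python
# Sketch of the 20-30% pricing rule: price the variable component as a share of
# what the task would cost the customer to do manually. Function names and
# dollar figures are illustrative assumptions.

def price_band(manual_cost_per_task: float,
               low: float = 0.20, high: float = 0.30) -> tuple[float, float]:
    """Recommended per-call price range: 20-30% of the manual alternative."""
    return (manual_cost_per_task * low, manual_cost_per_task * high)

def value_check(price_per_call: float, manual_cost_per_task: float) -> str:
    """Flag prices that capture too little value or kill the customer's ROI."""
    ratio = price_per_call / manual_cost_per_task
    if ratio < 0.20:
        return "underpriced: capturing too little of the value created"
    if ratio > 0.30:
        return "overpriced: customers won't use it liberally"
    return "in range"

# The $2.00-of-labor-saved example from above:
print(price_band(2.00))          # (0.4, 0.6) -> charge $0.40-$0.60 per call
print(value_check(0.10, 2.00))   # underpriced (the $0.10 case)
print(value_check(0.50, 0.40))   # overpriced (costs more than it saves)
```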
Common Mistake:
Copying OpenAI's pricing ($0.01 per 1K tokens) because "that's what everyone does." Your cost structure isn't OpenAI's cost structure. Your value isn't OpenAI's value. Price for your business.
The Pattern:
You can't sell AI by saying "trust us, it works." You build trust in stages.
First: Transparency (Before First Demo)
Send these three docs before they ask:
Why this works: Buyers expect to do diligence. If you send docs before they ask, you look confident and credible.
Second: Control (In the Demo)
Show them the safety mechanisms:
Why this works: Fear of "runaway AI" is real. Showing control mechanisms proves you thought about failure modes.
Third: Performance (Weeks 4-8)
Prove it works:
Why this works: Proof beats promises. One customer saying "we saved X hours/week" is worth 100 marketing claims.
Fourth: Scale (When They're Serious)
Show enterprise readiness:
Why this works: Enterprises don't deploy MVPs. They need proof you won't fall over at 1000 users.
The Mistake I Made:
Trying to prove performance before explaining how the AI worked. Buyers didn't trust the benchmarks because they didn't understand the system. Order matters.
Decision Criteria:
If buyers ask "how does this work?" before you've demoed, you skipped transparency. Back up and send the docs.
What Doesn't Work:
Canned demo where AI magically solves everything. Buyers think "this won't work on our messy data."
What Works:
Show the AI making a mistake and recovering. Seriously.
Demo Structure That Works:
1. The Problem (30 seconds) "Your engineers spend hours on [specific task]. Here's what that looks like."
2. The AI Attempt (60 seconds) "Here's the AI handling the same task."
3. The Human Review (30 seconds) "Here's where the engineer reviews and approves."
4. The Outcome (30 seconds) "[X hours] → [Y minutes]. Engineer still owns the outcome, AI accelerates execution."
Why This Works:
The Pattern I've Seen:
Demos with perfect AI → Buyers skeptical
Demos with imperfect AI that recovers → Buyers engaged
Common Mistake:
Cherry-picking examples where AI is 100% accurate. Buyers know real-world data is messy. If you don't show messiness, they assume you're hiding it.
The Objection:
"This looks great, but what happens when the AI does something wrong?"
Bad Answer: "Our AI is 95% accurate, and we're improving it every week." (Translation: "It will break production 5% of the time, good luck with that")
Good Answer: "Great question. Let's walk through a failure scenario together."
Then Ask:
What This Does:
The Follow-Up:
"Here's what we recommend: Start with low-risk environments. Let the AI handle non-critical workflows for 2-4 weeks. See how your team handles its mistakes. Then expand scope when you're confident in the process."
Why This Works:
You're not selling perfection. You're selling a tool that requires operational maturity. Filtering for mature buyers is better than convincing immature ones.
The Pattern:
Mature buyers say: "We already have runbooks for tool failures, we'll add AI to them." Immature buyers say: "Can you make it never fail?"
Decision Criteria:
If a buyer demands 100% accuracy, walk away. They're not ready. Come back when they have incident response processes.
The Pattern:
You're competing in the AI agent space. Every competitor's homepage says the same thing: "Automate [workflow] with AI." Your differentiation requires explaining complex technical benchmarks that buyers don't understand.
This is the positioning trap: competing on features against better-funded companies on their battlefield.
How to Diagnose It:
Structural advantages that work for AI positioning:
Feature advantages that don't last:
The Test:
For every positioning claim, ask: Can a competitor copy this with a single product sprint? If yes, it's not defensible. Don't build your GTM on it.
Common Mistake:
Claiming you're "better" at what everyone does. In AI, benchmarks change monthly. Position on what's structurally different about your approach, not what's temporarily better about your model.
The Pattern:
The highest-intent enterprise buyers for AI agents are people who've already adopted a comparable tool and hit its limits. They've invested in learning, they understand the problem space, and they have a clear business case for the upgrade.
How to Identify Ceiling Moments:
The prospect has:
How to Target Them:
Why This Converts Better:
Ceiling-moment conversations convert 3-5x vs cold outreach because: the prospect has already invested in learning the category, understands the problem space, and arrives with a clear business case for the upgrade.
The Qualification Question:
"What's the most complex task you've tried to automate with your current tool, and where did it break down?"
If they have a specific answer with specific pain, they're a ceiling-moment buyer. If they say "it works fine," they're not ready.
Common Mistake:
Trying to convince tool-naive prospects to adopt AI agents. Bad conversion rates, long education cycles, and they'll compare you to "doing nothing" instead of "doing it better." Target buyers who already believe in the category.
Does your AI act autonomously (no approval per action)?
├─ Yes → Who are you selling to?
│ ├─ Developers → "Agent" framing
│ └─ Enterprises → "Teammate" framing
└─ No → "Copilot" framing
Can you measure customer outcomes reliably?
├─ Yes → Outcome-based (or hybrid with outcome component)
└─ No → Continue...
    │
    Does usage vary 5x+ by customer?
    ├─ Yes → Hybrid (base + usage)
    └─ No → Seat-based
Do they have incident response processes for tool failures?
├─ Yes → Continue...
│ │
│ Do they have on-call rotations for production systems?
│ ├─ Yes → Qualified buyer
│ └─ No → Help them build it first
└─ No → Not ready (come back in 6 months)
1. Using "autonomous" because it sounds impressive
2. Hiding AI failure modes
3. Treating "will it break production?" as the objection
4. Pricing usage-based AI like OpenAI
5. Skipping transparency docs before demo
6. Demoing perfect AI
7. Selling to buyers who demand 100% accuracy
Enterprise objection checklist: who gets alerted, who investigates, who decides to roll back or fix forward (the Accountability Cascade).
Positioning word choices: "AI teammate that pairs with your engineers," not "autonomous agent that handles workflows end-to-end."
Demo structure: problem (30s) → AI attempt (60s) → human review (30s) → outcome (30s).
Trust ladder: transparency → control → performance → scale.
Pricing hybrid formula: base fee (covers fixed costs) + variable fee (scales with value), with the variable price at 20-30% of the customer's alternative cost.
Based on enterprise AI agent GTM across developer tools and infrastructure. Patterns drawn from working enterprise deal cycles for autonomous AI products, some carried directly and others supported alongside sales leadership. They include the positioning-trap diagnosis that shifted us from feature competition to structural differentiation, the ceiling-moment qualification that significantly improved outbound conversion, and frameworks tested across security, operations, and engineering buyer personas. Not theory: lessons from deals where "autonomous" killed conversations and "teammate" converted.