Guides design of AI escalation strategies: triggers (low confidence, high stakes), types (human handoff, self-escalation), user experience, anti-patterns, and artifacts.
npx claudepluginhub owl-listener/ai-design-skills --plugin ai-alignment-reasoning

This skill uses the workspace's default tool permissions.
Escalation is what happens when the AI reaches the boundary of what it should handle alone. Designing escalation well means the user gets help instead of a dead end — and the AI knows its limits.
Assesses need to escalate AI models (Haiku to Sonnet/Opus) for tasks requiring deeper reasoning, after systematic investigation. Use in agent workflows.
Designs human-in-the-loop intervention points for agent workflows: approval gates, reviews, overrides, triggers, and strategies to minimize bottlenecks.
Plans v1→v2→v3 agency ladder for AI features: maps autonomy increases over time, defines promotion criteria, generates stakeholder artifacts. Uses CC/CD framework.
The AI should escalate when:
- its confidence in the answer or action is low
- the stakes of getting it wrong are high
The user's experience of escalation matters: a good handoff preserves context and feels like help arriving, not like the AI giving up.
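The trigger logic described above can be sketched as a small routing function. This is a minimal illustration, not part of the plugin itself: the `Request` shape, the `CONFIDENCE_FLOOR` threshold, and the route names are all assumptions chosen to show how low confidence and high stakes combine to pick between handling, self-escalation to a stronger model, and human handoff.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    HANDLE = "handle"           # AI answers directly
    SELF_ESCALATE = "self"      # retry with a stronger model
    HUMAN_HANDOFF = "human"     # route to a person

@dataclass
class Request:
    confidence: float   # model's self-assessed confidence, 0..1
    high_stakes: bool   # e.g. irreversible action, money, safety

# Illustrative threshold, not a fixed recommendation.
CONFIDENCE_FLOOR = 0.7

def route(req: Request) -> Route:
    """Decide whether to answer, self-escalate, or hand off."""
    if req.high_stakes and req.confidence < CONFIDENCE_FLOOR:
        # Low confidence AND high stakes: a person should take over.
        return Route.HUMAN_HANDOFF
    if req.confidence < CONFIDENCE_FLOOR:
        # Low confidence, low stakes: try a stronger model first.
        return Route.SELF_ESCALATE
    if req.high_stakes:
        # Confident but high stakes: still require human approval.
        return Route.HUMAN_HANDOFF
    return Route.HANDLE
```

The key design choice the sketch encodes is that confidence and stakes are evaluated together, so a confident answer can still be gated by a human when the action is hard to undo.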