Systematically anticipates harms in AI products by categorizing failure modes, misuse scenarios, and unintended consequences, and by creating risk matrices.
npx claudepluginhub owl-listener/ai-design-skills --plugin ai-alignment-reasoning

This skill uses the workspace's default tool permissions.
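As a rough sketch of the risk-matrix idea from the description above (the scales, multiplicative scoring, and priority cutoffs are illustrative assumptions, not the skill's actual output format):

```typescript
// Hypothetical likelihood x severity risk matrix; the scales and
// priority cutoffs below are illustrative assumptions.
type Likelihood = 1 | 2 | 3 | 4 | 5; // 1 = rare, 5 = almost certain
type Severity = 1 | 2 | 3 | 4 | 5;   // 1 = negligible, 5 = catastrophic
type Priority = "low" | "medium" | "high" | "critical";

// Map one (likelihood, severity) cell of the matrix to a priority band.
function prioritize(likelihood: Likelihood, severity: Severity): Priority {
  const score = likelihood * severity; // simple multiplicative scoring
  if (score >= 20) return "critical";
  if (score >= 12) return "high";
  if (score >= 6) return "medium";
  return "low";
}

// Example: a misuse scenario judged likely (4) and catastrophic (5).
console.log(prioritize(4, 5)); // "critical"
```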
Harm anticipation is the practice of systematically thinking through how an AI product could cause harm — before it does. It's preventive design, not reactive crisis management.
Guides structured ethics, safety, and impact assessments: stakeholder mapping, harm/benefit analysis, fairness evaluation, mitigations, and monitoring for people-affecting decisions.
Conducts ethics reviews for AI and technology projects including ethical impact assessments, stakeholder analysis, and mitigation planning. Use for evaluating risks and harms.
Conducts pre-mortem analysis that imagines catastrophic failures of uncommitted plans or posed scenarios, examining the system through parallel lenses to yield a prioritized risk register with early-warning signals.
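A prioritized risk register of the kind this pre-mortem describes might look roughly like the sketch below; the field names and example entry are assumptions for illustration, not the skill's actual schema.

```typescript
// Hypothetical shape of one risk-register entry (field names assumed).
interface RiskEntry {
  risk: string;            // the imagined catastrophic failure
  lens: string;            // which parallel lens surfaced it
  priority: "low" | "medium" | "high" | "critical";
  earlyWarnings: string[]; // observable signals that the risk is materializing
  mitigations: string[];   // planned countermeasures
}

const register: RiskEntry[] = [
  {
    risk: "Model confidently fabricates medical dosage advice",
    lens: "user-safety",
    priority: "critical",
    earlyWarnings: ["spike in flagged health queries", "user harm reports"],
    mitigations: ["refuse dosage specifics", "route to vetted sources"],
  },
];

// Sort so the highest-priority risks come first.
const rank = { critical: 0, high: 1, medium: 2, low: 3 };
register.sort((a, b) => rank[a.priority] - rank[b.priority]);
```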
Work through each harm category systematically (a combined sketch of all three passes appears after these prompts):
Think like an adversary:
Think about second-order effects:
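A minimal sketch of how these three passes could be driven as one checklist; the harm categories and prompts below are common examples assumed for illustration, not the skill's built-in list.

```typescript
// Hypothetical checklist driver; categories and prompts are assumptions.
const harmCategories = [
  "allocational harm (unfair distribution of resources or opportunities)",
  "representational harm (stereotyping, erasure, demeaning outputs)",
  "quality-of-service harm (degraded performance for some groups)",
  "physical or psychological safety harm",
];

const adversaryPrompts = [
  "How could a motivated attacker abuse this feature at scale?",
  "Which inputs would push the system toward harmful output?",
];

const secondOrderPrompts = [
  "If this succeeds and is widely adopted, what changes around it?",
  "Who becomes dependent on it, and what happens when it fails?",
];

// One pass per lens: every category and prompt gets reviewed in turn.
for (const category of harmCategories) {
  console.log(`[harm category] ${category}`);
}
for (const prompt of [...adversaryPrompts, ...secondOrderPrompts]) {
  console.log(`[prompt] ${prompt}`);
}
```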