Designs review workflows, checklists, and processes to detect and mitigate bias in AI outputs, including types of bias, detection methods, and mitigation strategies.
npx claudepluginhub owl-listener/ai-design-skills --plugin ai-alignment-reasoning

This skill uses the workspace's default tool permissions.
AI systems inherit biases from training data, amplify them through pattern-matching, and embed them in outputs that appear authoritative. Bias detection design creates the workflows, processes, and interfaces that help teams find and fix bias before users encounter it.
Structures AI/ML product planning with a canvas covering user problems, model/task selection, data needs, evaluation metrics, and responsible AI checks. For LLM integrations and AI features.
Validates AI/ML models and datasets for bias and fairness using Fairlearn/AIF360 metrics, the four-fifths rule, severity classification, and ethics mapping.
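The four-fifths rule mentioned above is a common disparate-impact screen: a group is flagged if its selection rate falls below 80% of the most-favored group's rate. A minimal pure-Python sketch, with an illustrative function name and made-up rates rather than any specific library's API:

```python
def four_fifths_check(selection_rates):
    """Return, per group, whether its selection rate passes the
    four-fifths rule relative to the most-favored group."""
    reference = max(selection_rates.values())
    return {
        group: rate / reference >= 0.8
        for group, rate in selection_rates.items()
    }

# Hypothetical rates: group_b's 0.35 / 0.50 = 0.70 < 0.8, so it is flagged.
result = four_fifths_check({"group_a": 0.50, "group_b": 0.35})
# result == {"group_a": True, "group_b": False}
```

Libraries such as Fairlearn and AIF360 expose the same idea as a disparate-impact ratio computed from labels and sensitive features, alongside richer metrics like demographic parity and equalized odds.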
Guides AI governance planning for ML systems, including EU AI Act risk classification, NIST AI RMF implementation, ethics frameworks, and compliance documentation.
Bias detection is a team practice, not a one-time audit:
Finding bias is step one. Addressing it requires: