Provides patterns and guidelines for AI transparency including confidence indicators, source attribution, reasoning traces, and limitation disclosures. Useful for trustworthy AI UIs.
npx claudepluginhub owl-listener/ai-design-skills --plugin ai-alignment-reasoning

This skill uses the workspace's default tool permissions.
Transparency in AI products means making the system's knowledge, limitations, and confidence visible to users. It's how you build warranted trust — trust based on understanding, not blind faith.
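One way to make that confidence visible is a small mapping from a raw model score to a user-facing label. The sketch below is illustrative only: the threshold values, labels, and the assumption that the model exposes a probability in [0, 1] are all hypothetical, not part of the skill itself.

```python
def confidence_label(score: float) -> str:
    """Map a raw model confidence score (0.0 to 1.0) to a user-facing label.

    Thresholds are illustrative assumptions; real products should
    calibrate them against observed accuracy at each score range.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.9:
        return "High confidence"
    if score >= 0.6:
        return "Moderate confidence, worth verifying"
    return "Low confidence, treat as a starting point"

# Example: show the label alongside the AI's answer
print(confidence_label(0.95))  # High confidence
print(confidence_label(0.40))  # Low confidence, treat as a starting point
```

Pairing each answer with a label like this gives users the grounds for warranted trust rather than asking them to take the output on faith.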
Guides progressive disclosure of AI capabilities to align with user mental models, using strategies like on-demand hints, escalating examples, and layered revelation for AI product design.
Implements EU AI Act Arts. 13-14 and GDPR Arts. 13-14 transparency requirements for AI systems: user notifications, capability disclosures, limitations, and automated logic explanations. Useful for compliant AI apps.
Audits claims for epistemic honesty and humility: calibrates confidence to the evidence, discloses gaps and limitations, and resists overconfidence. Use before drawing conclusions, answering from partial knowledge, or making decisions.
Too much transparency overwhelms. Too little erodes trust. Calibrate by: