Fairness assessment skill using Fairlearn for bias detection, mitigation, and compliance reporting.
Analyzes machine learning models for bias and generates fairness reports with mitigation recommendations.
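To make the per-group breakdown concrete, here is a minimal hand-rolled sketch of the group metrics this skill reports. Fairlearn's MetricFrame produces the same kind of per-group view; the function name and toy data below are illustrative, not part of the skill.

```python
# Illustrative sketch (not the skill's implementation): per-group count,
# accuracy, and selection rate, matching the groupMetrics output shape.
from collections import defaultdict

def group_metrics(y_true, y_pred, sensitive):
    """Per-group count, accuracy, and selection rate (positive-prediction rate)."""
    buckets = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, sensitive):
        buckets[g].append((t, p))
    report = []
    for group, pairs in sorted(buckets.items()):
        n = len(pairs)
        acc = sum(1 for t, p in pairs if t == p) / n
        sel = sum(1 for _, p in pairs if p == 1) / n
        report.append({"group": group, "count": n,
                       "metrics": {"accuracy": acc, "selection_rate": sel}})
    return report

# Toy loan-approval predictions split by a hypothetical sensitive attribute
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
sensitive = ["A", "A", "A", "B", "B", "B"]
print(group_metrics(y_true, y_pred, sensitive))
```

In Fairlearn itself, `MetricFrame(metrics=..., y_true=..., y_pred=..., sensitive_features=...)` computes this breakdown via its `by_group` attribute.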
Input schema:

{
  "type": "object",
  "required": ["modelPath", "dataPath", "sensitiveFeatures"],
  "properties": {
    "modelPath": {
      "type": "string",
      "description": "Path to the trained model"
    },
    "dataPath": {
      "type": "string",
      "description": "Path to evaluation data"
    },
    "sensitiveFeatures": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Column names of sensitive attributes"
    },
    "labelColumn": {
      "type": "string",
      "description": "Name of the target/label column"
    },
    "assessmentConfig": {
      "type": "object",
      "properties": {
        "metrics": {
          "type": "array",
          "items": {
            "type": "string",
            "enum": ["demographic_parity", "equalized_odds", "true_positive_rate", "false_positive_rate", "accuracy"]
          }
        },
        "threshold": { "type": "number" }
      }
    },
    "mitigationConfig": {
      "type": "object",
      "properties": {
        "method": {
          "type": "string",
          "enum": ["threshold_optimizer", "exponentiated_gradient", "grid_search", "reductions"]
        },
        "constraint": { "type": "string" },
        "gridSize": { "type": "integer" }
      }
    }
  }
}
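The "threshold_optimizer" mitigation method can be sketched as choosing a separate decision threshold per sensitive group so that group selection rates line up. Fairlearn's ThresholdOptimizer solves this properly under the configured constraint; the toy scan below (all names hypothetical) only illustrates the idea.

```python
# Illustrative sketch of per-group threshold selection, NOT Fairlearn's
# ThresholdOptimizer: for each group, pick the candidate threshold whose
# selection rate is closest to a target rate.

def per_group_thresholds(scores, sensitive, target_rate, candidates=None):
    """Return {group: threshold} minimizing |selection_rate - target_rate| per group."""
    if candidates is None:
        candidates = [i / 100 for i in range(101)]  # 0.00, 0.01, ..., 1.00
    thresholds = {}
    for g in sorted(set(sensitive)):
        g_scores = [s for s, grp in zip(scores, sensitive) if grp == g]
        thresholds[g] = min(
            candidates,
            key=lambda th: abs(sum(s >= th for s in g_scores) / len(g_scores) - target_rate),
        )
    return thresholds

# Toy scores for two groups; aim for a 1/3 selection rate in each group
scores = [0.9, 0.6, 0.2, 0.8, 0.75, 0.1]
sensitive = ["A", "A", "A", "B", "B", "B"]
print(per_group_thresholds(scores, sensitive, 1 / 3))
```

The real ThresholdOptimizer additionally respects the chosen fairness constraint (e.g. demographic_parity) while maximizing an objective such as accuracy, rather than matching a fixed target rate.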
Output schema:

{
  "type": "object",
  "required": ["status", "assessment"],
  "properties": {
    "status": {
      "type": "string",
      "enum": ["success", "error"]
    },
    "assessment": {
      "type": "object",
      "properties": {
        "overallMetrics": { "type": "object" },
        "groupMetrics": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "group": { "type": "string" },
              "count": { "type": "integer" },
              "metrics": { "type": "object" }
            }
          }
        },
        "disparityMetrics": {
          "type": "object",
          "properties": {
            "demographicParityDiff": { "type": "number" },
            "equalizedOddsDiff": { "type": "number" }
          }
        },
        "fairnessScore": { "type": "number" }
      }
    },
    "mitigation": {
      "type": "object",
      "properties": {
        "method": { "type": "string" },
        "improvedModel": { "type": "string" },
        "beforeMetrics": { "type": "object" },
        "afterMetrics": { "type": "object" }
      }
    },
    "complianceReport": {
      "type": "string",
      "description": "Path to generated compliance report"
    }
  }
}
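The demographicParityDiff field can be computed by hand as the spread in group selection rates; Fairlearn's demographic_parity_difference has exactly this meaning. The fairnessScore formula below is an illustrative assumption, not the skill's documented scoring rule.

```python
# Hand-rolled demographic parity difference: the gap between the highest and
# lowest per-group selection rates. Assumes binary 0/1 predictions.

def selection_rates(y_pred, sensitive):
    """Fraction of positive predictions within each sensitive group."""
    groups = sorted(set(sensitive))
    return {
        g: sum(p for p, grp in zip(y_pred, sensitive) if grp == g)
           / sum(1 for grp in sensitive if grp == g)
        for g in groups
    }

def demographic_parity_diff(y_pred, sensitive):
    rates = selection_rates(y_pred, sensitive).values()
    return max(rates) - min(rates)

y_pred = [1, 0, 0, 1, 1, 0]
sensitive = ["A", "A", "A", "B", "B", "B"]
dp = demographic_parity_diff(y_pred, sensitive)
fairness_score = 1 - dp  # illustrative only: 1.0 would mean perfect parity
print(dp, fairness_score)
```

A difference of 0 means every group is selected at the same rate; values near 1 indicate that one group is almost always selected while another almost never is.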
Example invocation:

{
  kind: 'skill',
  title: 'Assess model fairness',
  skill: {
    name: 'fairlearn-bias-detector',
    context: {
      modelPath: 'models/loan_model.pkl',
      dataPath: 'data/test.csv',
      sensitiveFeatures: ['gender', 'race'],
      labelColumn: 'approved',
      assessmentConfig: {
        metrics: ['demographic_parity', 'equalized_odds'],
        threshold: 0.8
      },
      mitigationConfig: {
        method: 'threshold_optimizer',
        constraint: 'demographic_parity'
      }
    }
  }
}