AI governance and responsible AI planning, including EU AI Act classification, NIST AI RMF, and AI ethics frameworks
Provides EU AI Act classification, NIST AI RMF implementation, and responsible AI governance frameworks to apply before development begins. Use when building AI systems that require regulatory compliance, risk management, or ethical oversight.
/plugin marketplace add melodic-software/claude-code-plugins
/plugin install compliance-planning@melodic-software
Comprehensive guidance for AI governance, regulatory compliance, and responsible AI practices before development begins.
The EU AI Act categorizes AI systems by risk level:
┌─────────────────────────────────────────────────────┐
│ UNACCEPTABLE RISK (Prohibited) │
│ • Social scoring by governments │
│ • Subliminal manipulation │
│ • Exploitation of vulnerabilities │
│ • Real-time biometric ID in public (exceptions) │
├─────────────────────────────────────────────────────┤
│ HIGH RISK (Strict Requirements) │
│ • Biometric identification │
│ • Critical infrastructure management │
│ • Education/vocational training access │
│ • Employment, worker management, recruitment │
│ • Essential services access (credit, insurance) │
│ • Law enforcement │
│ • Migration, asylum, border control │
│ • Justice and democratic processes │
├─────────────────────────────────────────────────────┤
│ LIMITED RISK (Transparency Obligations) │
│ • Chatbots (must disclose AI interaction) │
│ • Emotion recognition systems │
│ • Biometric categorization │
│ • Deepfakes (must label as generated) │
├─────────────────────────────────────────────────────┤
│ MINIMAL RISK (No Specific Requirements) │
│ • AI-enabled video games │
│ • Spam filters │
│ • Inventory management │
└─────────────────────────────────────────────────────┘
High-risk systems must meet the following requirements before being placed on the market (EU AI Act, Articles 9-15):

| Requirement | Description | Implementation |
|---|---|---|
| Risk Management | Continuous risk management system | Risk assessment process |
| Data Governance | Training data quality and governance | Data documentation |
| Technical Documentation | Detailed system documentation | System spec + model cards |
| Record Keeping | Automatic logging of events | Audit trail |
| Transparency | Clear instructions for users | User documentation |
| Human Oversight | Ability for human intervention | HITL mechanisms |
| Accuracy, Robustness, Cybersecurity | Appropriate performance levels | Testing + monitoring |
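These obligations can be tracked as a simple checklist. A minimal sketch; the type and property names are illustrative, not prescribed by the Act:

```csharp
// Hypothetical compliance checklist mapping to Articles 9-15 above.
public record HighRiskComplianceChecklist
{
    public bool RiskManagementSystemInPlace { get; init; }      // Art. 9
    public bool DataGovernanceDocumented { get; init; }         // Art. 10
    public bool TechnicalDocumentationComplete { get; init; }   // Art. 11
    public bool EventLoggingEnabled { get; init; }              // Art. 12
    public bool UserInstructionsProvided { get; init; }         // Art. 13
    public bool HumanOversightMechanismsDefined { get; init; }  // Art. 14
    public bool AccuracyRobustnessSecurityTested { get; init; } // Art. 15

    public bool ReadyForConformityAssessment =>
        RiskManagementSystemInPlace && DataGovernanceDocumented &&
        TechnicalDocumentationComplete && EventLoggingEnabled &&
        UserInstructionsProvided && HumanOversightMechanismsDefined &&
        AccuracyRobustnessSecurityTested;
}
```

Classification itself can be encoded as a decision procedure: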
```csharp
using System.Linq;

// AiSystemDescription, UseCaseType, Context, and Domain are assumed to be
// defined by the host application; RequiresProductConformity and
// RequiresTransparency are omitted for brevity.
public class EuAiActClassifier
{
    public AiRiskCategory Classify(AiSystemDescription system)
    {
        // Check for prohibited practices first
        if (IsProhibited(system))
            return AiRiskCategory.Unacceptable;

        // Check Annex III high-risk categories
        if (IsHighRiskCategory(system))
            return AiRiskCategory.High;

        // Check Annex I product safety legislation
        if (RequiresProductConformity(system))
            return AiRiskCategory.High;

        // Check for transparency obligations
        if (RequiresTransparency(system))
            return AiRiskCategory.Limited;

        return AiRiskCategory.Minimal;
    }

    private bool IsProhibited(AiSystemDescription system)
    {
        return system.UseCases.Any(uc =>
            uc.Type == UseCaseType.SocialScoring ||
            uc.Type == UseCaseType.SubliminalManipulation ||
            uc.Type == UseCaseType.VulnerabilityExploitation ||
            (uc.Type == UseCaseType.RealTimeBiometricId &&
             uc.Context == Context.PublicSpace &&
             !uc.HasLawEnforcementException));
    }

    private bool IsHighRiskCategory(AiSystemDescription system)
    {
        var highRiskCategories = new[]
        {
            Domain.BiometricIdentification,
            Domain.CriticalInfrastructure,
            Domain.EducationAccess,
            Domain.Employment,
            Domain.EssentialServices,
            Domain.LawEnforcement,
            Domain.MigrationAsylum,
            Domain.JusticeProcesses
        };

        return highRiskCategories.Contains(system.Domain);
    }
}

public enum AiRiskCategory
{
    Unacceptable,
    High,
    Limited,
    Minimal
}
```
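Once a system description is assembled, classification is a single call. A hypothetical usage example; AiSystemDescription and UseCase are assumed shapes consistent with the sketch above:

```csharp
// Hypothetical usage; property shapes follow the classifier sketch above.
var cvScreening = new AiSystemDescription
{
    Domain = Domain.Employment,
    UseCases = new List<UseCase>() // no prohibited practices involved
};

var category = new EuAiActClassifier().Classify(cvScreening);
// Employment is an Annex III high-risk domain, so this yields AiRiskCategory.High.
```

The NIST AI RMF complements this classification, organizing AI risk management into four core functions: Govern, Map, Measure, and Manage.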
┌─────────────────────────────────────────────────────────────┐
│ GOVERN │
│ Culture, policies, accountability structures │
│ ↓ │
├─────────────────────────────────────────────────────────────┤
│ MAP MEASURE MANAGE │
│ Context & risks → Assess risks → Prioritize & │
│ identification & impacts mitigate │
└─────────────────────────────────────────────────────────────┘
**Govern**: Establish AI governance structures:
## AI Governance Structure
### Roles and Responsibilities
| Role | Responsibilities |
|------|------------------|
| AI Governance Board | Strategic oversight, policy approval |
| AI Ethics Officer | Ethics review, bias assessment |
| AI Risk Manager | Risk identification, mitigation tracking |
| Model Owner | Lifecycle management, performance |
| Technical Lead | Implementation, testing, monitoring |
### Policies Required
- [ ] AI Development Policy
- [ ] Model Risk Management Policy
- [ ] AI Ethics Guidelines
- [ ] Data Quality Standards
- [ ] Human Oversight Requirements
- [ ] Incident Response for AI Failures
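Policy status can also be tracked programmatically. A minimal sketch, assuming a simple in-memory registry; all names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical registry entry for the policy checklist above.
public record GovernancePolicy(
    string Name,
    string Owner,
    bool Approved,
    DateTimeOffset? NextReviewDue);

public static class PolicyRegistry
{
    // Flags policies that are unapproved or past their review date.
    public static IEnumerable<GovernancePolicy> NeedingAttention(
        IEnumerable<GovernancePolicy> policies, DateTimeOffset now) =>
        policies.Where(p => !p.Approved ||
                            (p.NextReviewDue is { } due && due < now));
}
```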
**Map**: Understand context and identify risks:
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class AiContextMapping
{
    public record AiSystemContext
    {
        public required string SystemName { get; init; }
        public required string Purpose { get; init; }
        public required List<string> Stakeholders { get; init; }
        public required List<string> ImpactedGroups { get; init; }
        public required string DecisionType { get; init; } // Augment vs. Automate
        public required bool InvolvesVulnerablePopulations { get; init; }
        public required List<string> PotentialHarms { get; init; }
        public required List<string> PotentialBenefits { get; init; }
        public required List<string> LegalConstraints { get; init; }
    }

    public ContextAssessment Assess(AiSystemContext context)
    {
        var risks = new List<IdentifiedRisk>();

        // Assess stakeholder impacts
        foreach (var group in context.ImpactedGroups)
        {
            risks.Add(new IdentifiedRisk
            {
                Category = "Stakeholder Impact",
                Description = $"Potential impact on {group}",
                Severity = context.InvolvesVulnerablePopulations
                    ? RiskSeverity.High
                    : RiskSeverity.Medium
            });
        }

        // Assess potential harms
        foreach (var harm in context.PotentialHarms)
        {
            risks.Add(new IdentifiedRisk
            {
                Category = "Potential Harm",
                Description = harm,
                Severity = DetermineHarmSeverity(harm)
            });
        }

        return new ContextAssessment
        {
            Context = context,
            IdentifiedRisks = risks,
            RecommendedMitigations = GenerateMitigations(risks)
        };
    }

    // Simplified placeholders; real assessments need structured severity criteria.
    private static RiskSeverity DetermineHarmSeverity(string harm) =>
        harm.Contains("physical", StringComparison.OrdinalIgnoreCase)
            ? RiskSeverity.High
            : RiskSeverity.Medium;

    private static List<string> GenerateMitigations(List<IdentifiedRisk> risks) =>
        risks.Select(r => $"Define controls for: {r.Description}").ToList();
}
```
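The supporting types referenced above are not defined in this skill; one plausible shape, consistent with how they are used here and in the Manage step below:

```csharp
using System.Collections.Generic;

public enum RiskSeverity { Low, Medium, High, Critical }

public record IdentifiedRisk
{
    public required string Category { get; init; }
    public required string Description { get; init; }
    public required RiskSeverity Severity { get; init; }
    public int Likelihood { get; init; } // 1-5 scale; used for prioritization below
}

public record ContextAssessment
{
    public required AiContextMapping.AiSystemContext Context { get; init; }
    public required List<IdentifiedRisk> IdentifiedRisks { get; init; }
    public required List<string> RecommendedMitigations { get; init; }
}
```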
**Measure**: Assess and analyze AI risks:
## Risk Assessment Framework
### Trustworthiness Characteristics
| Characteristic | Assessment Questions |
|----------------|---------------------|
| **Valid & Reliable** | Does the system perform as intended? Are results consistent? |
| **Safe** | Can the system cause harm? Are safety controls adequate? |
| **Secure & Resilient** | Is the system protected from attacks? Can it recover? |
| **Accountable & Transparent** | Can we explain decisions? Is there clear ownership? |
| **Explainable & Interpretable** | Can users understand outputs? Can we audit decisions? |
| **Privacy-Enhanced** | Is personal data protected? Is data minimization applied? |
| **Fair (Bias Managed)** | Are outcomes equitable? Is bias detected and mitigated? |
### Measurement Metrics
| Metric | Description | Target |
|--------|-------------|--------|
| Accuracy | Correct predictions/total | ≥95% |
| Fairness Gap | Max disparity across groups | ≤5% |
| Explainability | Decisions with explanations | 100% |
| Incident Rate | Failures per 1000 uses | <1 |
| Human Override Rate | Manual interventions | Track |
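The fairness-gap metric can be computed directly from disaggregated outcomes. A minimal sketch, assuming per-group positive-outcome rates are already measured:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FairnessMetrics
{
    // Fairness gap as the maximum disparity in positive-outcome rates across
    // groups (demographic-parity difference). Table target: <= 0.05.
    public static double FairnessGap(
        IReadOnlyDictionary<string, double> positiveRateByGroup)
    {
        if (positiveRateByGroup.Count < 2)
            throw new ArgumentException("Need at least two groups to compare.");

        return positiveRateByGroup.Values.Max() -
               positiveRateByGroup.Values.Min();
    }
}

// Example: a 0.68 approval rate for group A vs. 0.61 for group B gives a gap
// of 0.07, which exceeds the 5% target and should trigger investigation.
```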
**Manage**: Prioritize and implement mitigations:
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// TreatmentType, MitigationStatus, and RiskTreatmentPlan are sketched after
// this block; the Determine/Create helpers below are simplified placeholders.
public class AiRiskManagement
{
    public record RiskMitigation
    {
        public required string RiskId { get; init; }
        public required string Risk { get; init; }
        public required RiskSeverity Severity { get; init; }
        public required List<string> Controls { get; init; }
        public required string Owner { get; init; }
        public required DateTimeOffset DueDate { get; init; }
        public required MitigationStatus Status { get; init; }
    }

    public RiskTreatmentPlan CreateTreatmentPlan(
        IEnumerable<IdentifiedRisk> risks)
    {
        // Address the most severe, most likely risks first
        var prioritized = risks
            .OrderByDescending(r => r.Severity)
            .ThenByDescending(r => r.Likelihood);

        var plan = new RiskTreatmentPlan();

        foreach (var risk in prioritized)
        {
            switch (DetermineTreatment(risk))
            {
                case TreatmentType.Avoid:
                    plan.Avoidances.Add(CreateAvoidanceAction(risk));
                    break;
                case TreatmentType.Mitigate:
                    plan.Mitigations.Add(CreateMitigationPlan(risk));
                    break;
                case TreatmentType.Transfer:
                    plan.Transfers.Add(CreateTransferPlan(risk));
                    break;
                case TreatmentType.Accept:
                    plan.Acceptances.Add(CreateAcceptanceRecord(risk));
                    break;
            }
        }

        return plan;
    }

    // Placeholder heuristic: avoid critical risks, mitigate the rest,
    // accept only low-severity risks with a documented record.
    private TreatmentType DetermineTreatment(IdentifiedRisk risk) =>
        risk.Severity switch
        {
            RiskSeverity.Critical => TreatmentType.Avoid,
            RiskSeverity.High or RiskSeverity.Medium => TreatmentType.Mitigate,
            _ => TreatmentType.Accept
        };

    private string CreateAvoidanceAction(IdentifiedRisk risk) => $"Avoid: {risk.Description}";

    private RiskMitigation CreateMitigationPlan(IdentifiedRisk risk) => new()
    {
        RiskId = Guid.NewGuid().ToString("N"),
        Risk = risk.Description,
        Severity = risk.Severity,
        Controls = new List<string>(),
        Owner = "TBD",
        DueDate = DateTimeOffset.UtcNow.AddDays(30),
        Status = MitigationStatus.Planned
    };

    private string CreateTransferPlan(IdentifiedRisk risk) => $"Transfer: {risk.Description}";

    private string CreateAcceptanceRecord(IdentifiedRisk risk) => $"Accept: {risk.Description}";
}
```
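The treatment types and plan container are likewise undefined by the skill; a minimal sketch consistent with the code above:

```csharp
using System.Collections.Generic;

public enum TreatmentType { Avoid, Mitigate, Transfer, Accept }

public enum MitigationStatus { Planned, InProgress, Complete, Overdue }

public record RiskTreatmentPlan
{
    public List<string> Avoidances { get; } = new();
    public List<AiRiskManagement.RiskMitigation> Mitigations { get; } = new();
    public List<string> Transfers { get; } = new();
    public List<string> Acceptances { get; } = new();
}
```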
Responsible AI rests on a set of core principles, each mapped to concrete implementation practices:

| Principle | Description | Implementation |
|---|---|---|
| Fairness | Equitable treatment, non-discrimination | Bias testing, fairness metrics |
| Transparency | Clear about AI use and limitations | Disclosure, explanations |
| Accountability | Clear ownership and responsibility | Governance, audit trails |
| Privacy | Protect personal data | Data minimization, consent |
| Safety | Prevent harm | Testing, guardrails, monitoring |
| Human Control | Meaningful human oversight | HITL, override capabilities |
Document every model with a model card. A template:

# Model Card: [Model Name]
## Model Details
- **Developer**: [Organization]
- **Version**: [X.Y.Z]
- **Type**: [Classification/Regression/Generation/etc.]
- **Framework**: [PyTorch/TensorFlow/etc.]
- **License**: [License]
## Intended Use
- **Primary Use Cases**: [List]
- **Users**: [Target users]
- **Out-of-Scope Uses**: [Prohibited or unsupported uses]
## Training Data
- **Dataset**: [Name and source]
- **Size**: [Number of examples]
- **Features**: [Key features used]
- **Preprocessing**: [Steps applied]
- **Known Limitations**: [Data gaps or biases]
## Evaluation
- **Metrics**: [Accuracy, F1, etc.]
- **Test Data**: [Holdout set description]
- **Results**: [Performance numbers]
- **Fairness Evaluation**: [Disaggregated metrics]
## Ethical Considerations
- **Sensitive Use Cases**: [If applicable]
- **Potential Misuse**: [Risks]
- **Mitigations**: [Controls in place]
## Limitations and Recommendations
- **Known Limitations**: [Model weaknesses]
- **Recommendations**: [Best practices for users]
Before deployment, complete an algorithmic impact assessment. A template:

## Algorithmic Impact Assessment
### 1. System Description
- **Name**: [System name]
- **Purpose**: [Business objective]
- **Decision Type**: [What decisions it informs/makes]
- **Affected Parties**: [Who is impacted]
### 2. Data Assessment
- **Data Sources**: [Origin of data]
- **Personal Data**: [Types collected]
- **Sensitive Attributes**: [Protected characteristics]
- **Historical Bias Risk**: [Assessment]
### 3. Impact Assessment
#### Positive Impacts
| Impact | Beneficiary | Magnitude |
|--------|-------------|-----------|
| [Impact] | [Group] | [High/Med/Low] |
#### Negative Impacts
| Impact | Affected Group | Magnitude | Mitigation |
|--------|----------------|-----------|------------|
| [Impact] | [Group] | [H/M/L] | [Action] |
### 4. Fairness Assessment
- **Protected Groups Analyzed**: [List]
- **Fairness Metrics Used**: [Demographic parity, etc.]
- **Disparities Found**: [Results]
- **Remediation Plan**: [Actions]
### 5. Human Oversight
- **Oversight Level**: [Full automation / Human-in-the-loop / Human-on-the-loop]
- **Override Mechanism**: [How humans can intervene]
- **Escalation Path**: [When to escalate]
### 6. Monitoring Plan
- **Performance Metrics**: [What to track]
- **Fairness Metrics**: [Ongoing monitoring]
- **Review Frequency**: [Cadence]
- **Trigger Thresholds**: [When to investigate]
### 7. Approval
| Role | Name | Approval | Date |
|------|------|----------|------|
| Model Owner | | [ ] | |
| AI Ethics | | [ ] | |
| Legal | | [ ] | |
| Business | | [ ] | |
Related skills:
- bias-assessment skill
- explainability-planning skill
- gdpr-compliance for data protection
- ethics-review for ethical assessment