Implements EU AI Act Arts. 13-14 and GDPR Arts. 13-14 transparency requirements for AI systems: user notifications, capability disclosures, limitations, and automated logic explanations. Useful for compliant AI apps.
`npx claudepluginhub mukul975/privacy-data-protection-skills --plugin ai-privacy-governance-skills`

This skill uses the workspace's default tool permissions.
AI transparency operates at the intersection of two regulatory frameworks: the GDPR's data subject information rights (Arts. 13-14) and the EU AI Act's transparency obligations (Arts. 13-14, 50). Together they require controllers and deployers to provide meaningful, accessible information about AI system capabilities, limitations, decision logic, and personal data processing. This skill implements the combined transparency framework, addressing both the technical explainability challenge of complex ML models and the legal obligation to communicate AI processing in plain language to affected individuals.
When personal data is processed by AI systems, data subjects must receive:
| Information Element | GDPR Article | AI-Specific Application |
|---|---|---|
| Purposes of processing | Art. 13(1)(c) / 14(1)(c) | Specific AI use case, not generic "service improvement" |
| Lawful basis | Art. 13(1)(c) / 14(1)(c) | The basis for AI training and for AI inference separately |
| Legitimate interest | Art. 13(1)(d) / 14(2)(b) | The specific interest served by AI processing |
| Recipients | Art. 13(1)(e) / 14(1)(e) | AI infrastructure providers, model hosting services |
| International transfers | Art. 13(1)(f) / 14(1)(f) | Where AI processing occurs (training and inference locations) |
| Retention period | Art. 13(2)(a) / 14(2)(a) | Training data retention, inference log retention, model lifecycle |
| Data subject rights | Art. 13(2)(b) / 14(2)(c) | Including AI-specific rights: explanation, contestation, human review |
| Automated decision-making | Art. 13(2)(f) / 14(2)(g) | Meaningful information about logic, significance, and envisaged consequences |
| Source of data | Art. 14(2)(f) | Training data sources (categories, not necessarily individual sources) |
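One way to keep these elements consistent across notices is to maintain them as a single machine-readable record per AI processing activity. A minimal sketch in Python; the schema and field names are illustrative, not mandated by the GDPR:

```python
from dataclasses import dataclass, field

@dataclass
class AITransparencyRecord:
    """GDPR Arts. 13-14 information elements for one AI processing activity.

    Field names are illustrative; the regulation mandates content, not schema.
    """
    purpose: str                      # Art. 13(1)(c): specific AI use case
    lawful_basis_training: str        # Art. 13(1)(c): basis for model training
    lawful_basis_inference: str       # Art. 13(1)(c): basis for inference
    legitimate_interest: str | None   # Art. 13(1)(d): if legitimate interest is relied on
    recipients: list[str]             # Art. 13(1)(e): e.g. model hosting providers
    transfers: list[str]              # Art. 13(1)(f): where training and inference occur
    retention: dict[str, str]         # Art. 13(2)(a): training data, logs, model lifecycle
    rights: list[str]                 # Art. 13(2)(b): incl. explanation, contestation, human review
    automated_decision_logic: str     # Art. 13(2)(f): plain-language logic summary
    data_sources: list[str] = field(default_factory=list)  # Art. 14(2)(f): source categories


# Illustrative example; all values are placeholders.
record = AITransparencyRecord(
    purpose="Automated credit-risk scoring for consumer loan applications",
    lawful_basis_training="Legitimate interest (Art. 6(1)(f))",
    lawful_basis_inference="Contract (Art. 6(1)(b))",
    legitimate_interest="Fraud prevention and responsible lending",
    recipients=["Cloud ML hosting provider"],
    transfers=["EU (training)", "EU (inference)"],
    retention={"training data": "3 years", "inference logs": "12 months"},
    rights=["access", "explanation", "human review", "contestation"],
    automated_decision_logic="Score derived mainly from income stability, "
                             "existing debt, and repayment history.",
    data_sources=["Application form", "Credit reference agencies"],
)
```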
Providing "meaningful information about the logic involved" (Art. 13(2)(f) / 14(2)(g)) is the most challenging transparency requirement for AI systems. The Article 29 Working Party guidance on automated decision-making, endorsed by the EDPB, has clarified the scope of the obligation.

What "meaningful information about the logic involved" requires:

- A simple, intelligible explanation of the rationale behind the decision, or the criteria relied on in reaching it
- The significance and the envisaged consequences of the processing for the data subject
- Information delivered in a form the affected person can actually understand and act on (e.g., which categories of data weigh most heavily in the outcome)

What it does not require:

- Disclosure of the full algorithm or source code
- A complex mathematical explanation of how the model works
- Information that would reveal trade secrets, although trade secrets cannot be used to refuse all explanation
The EDPB recommends a layered transparency approach:
| Layer | Content | Delivery |
|---|---|---|
| Layer 1: Initial notice | AI is used in processing; general purpose; link to full information | At point of interaction (banner, tooltip, notification) |
| Layer 2: Summary | AI system description, key data used, decision logic summary, rights available | Privacy notice section, AI information page |
| Layer 3: Detailed information | Full technical description, training data categories, fairness measures, accuracy metrics, limitations | Supplementary documentation, upon request |
| Layer 4: Individual explanation | Specific factors influencing a particular decision, appeal mechanism | Upon request or automatically for significant decisions |
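One possible implementation detail: derive each layer from the same underlying record, so the initial notice, the summary, and the detailed documentation never drift apart. A sketch, assuming the `AITransparencyRecord` from the earlier example; Layer 4 is generated per decision using the individual explanation techniques covered later:

```python
def layered_notice(record: AITransparencyRecord, detail_url: str) -> dict[str, str]:
    """Derive the first three transparency layers from one source of truth.

    Layer 4 (individual explanations) is produced per decision, not here.
    """
    retention = "; ".join(f"{k}: {v}" for k, v in record.retention.items())
    layer1 = (f"This feature uses an AI system for: {record.purpose}. "
              f"More information: {detail_url}")
    layer2 = (f"How it works: {record.automated_decision_logic} "
              f"Your rights: {', '.join(record.rights)}.")
    layer3 = (f"Data sources: {', '.join(record.data_sources)}. "
              f"Recipients: {', '.join(record.recipients)}. "
              f"Retention: {retention}.")
    return {"layer_1_initial": layer1,
            "layer_2_summary": layer2,
            "layer_3_detailed": layer3}
```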
High-risk AI systems (Annex III) must be designed and developed to ensure:
| Requirement | Description |
|---|---|
| Interpretability | System design enables deployers to interpret outputs and use them appropriately |
| Instructions for use | Detailed documentation of capabilities, limitations, intended purpose, foreseeable misuse |
| Performance metrics | Accuracy levels, robustness metrics, known limitations for specific groups |
| Human oversight info | Description of human oversight measures and how to implement them |
| Input data specs | Description of input data the system was designed to process |
| Training data description | Relevant information about training data including provenance and preprocessing |
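Because these elements reach deployers through the instructions for use, a simple completeness check at release time helps keep them from being forgotten. A sketch; the section names are illustrative, not taken from the Act:

```python
REQUIRED_SECTIONS = {
    "interpretability_guidance",   # how deployers should interpret outputs
    "intended_purpose",            # capabilities, limitations, foreseeable misuse
    "performance_metrics",         # accuracy/robustness, incl. per-group limitations
    "human_oversight_measures",    # oversight measures and how to implement them
    "input_data_specification",    # input data the system is designed to process
    "training_data_description",   # provenance, preprocessing, relevant characteristics
}

def missing_sections(instructions_for_use: dict) -> set[str]:
    """Return Art. 13-relevant sections that are absent or empty."""
    return {s for s in REQUIRED_SECTIONS if not instructions_for_use.get(s)}
```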
High-risk AI systems must also be designed to enable effective human oversight (Art. 14): the natural persons assigned to oversight must be able to understand the system's capacities and limitations, remain aware of automation bias, correctly interpret its output, and decide to disregard, override, or reverse that output or interrupt the system. In addition, Art. 50 imposes transparency obligations on specific categories of AI systems regardless of risk level:
| AI System Type | Transparency Obligation |
|---|---|
| AI interacting with persons | Inform that they are interacting with an AI system (unless obvious from context) |
| Emotion recognition / biometric categorisation | Inform about the system's operation and process personal data in compliance with GDPR |
| AI-generated or manipulated content (deepfakes) | Label content as AI-generated in a machine-readable format |
| AI-generated text on matters of public interest | Disclose that the text has been artificially generated or manipulated |
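These obligations can be derived mechanically from a handful of system characteristics and attached to each entry in the AI system register. A sketch; the flag names are ours, not the Act's:

```python
def article_50_obligations(*, interacts_with_persons: bool,
                           obvious_from_context: bool,
                           emotion_or_biometric: bool,
                           generates_synthetic_content: bool,
                           public_interest_text: bool) -> list[str]:
    """Map system characteristics to EU AI Act Art. 50 transparency duties."""
    duties = []
    if interacts_with_persons and not obvious_from_context:
        duties.append("Inform users they are interacting with an AI system")
    if emotion_or_biometric:
        duties.append("Inform affected persons of the system's operation; "
                      "process personal data in compliance with the GDPR")
    if generates_synthetic_content:
        duties.append("Mark AI-generated or manipulated content in a machine-readable format")
    if public_interest_text:
        duties.append("Disclose that published text was artificially generated or manipulated")
    return duties
```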
Under Art. 50(1), providers must ensure that natural persons are informed that they are interacting with an AI system. In practice this covers conversational interfaces such as chatbots, voice assistants, and automated customer-service agents.
Exceptions apply where it is obvious from the circumstances and context that the person is interacting with AI (e.g., a robot in a factory setting).
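The disclosure itself is usually trivial to implement; the governance question is deciding when the "obvious from context" exception applies. A minimal sketch for a chat interface, with placeholder wording:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def first_response(reply: str, *, obvious_from_context: bool = False) -> str:
    """Prepend the Art. 50(1) disclosure to the first message of a session."""
    return reply if obvious_from_context else f"{AI_DISCLOSURE}\n\n{reply}"
```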
For each deployed AI model, maintain a model card containing:
| Section | Content |
|---|---|
| Model overview | Name, version, type, developer, deployment date |
| Intended use | Specific purpose, target users, deployment context |
| Out-of-scope use | Uses the model is not designed for; foreseeable misuse |
| Training data summary | Data sources (categories), volume, temporal range, geographic scope, known biases |
| Performance metrics | Accuracy, precision, recall, F1 by relevant subgroup; fairness metrics |
| Limitations | Known failure modes, demographic performance disparities, edge cases |
| Privacy properties | Differential privacy applied (epsilon), membership inference test results, training data extraction risk |
| Human oversight | Level of oversight required, reviewer qualifications, override procedures |
| Update history | Retraining dates, data updates, performance changes |
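A model card in this structure can live next to the model artefact and be rendered into documentation on demand. A sketch using a plain dictionary; every value below is a fictional placeholder, not a real measurement:

```python
# All values are fictional placeholders for illustration only.
model_card = {
    "Model overview": "credit-risk-scorer v2.3, gradient-boosted trees, deployed 2025-01-15",
    "Intended use": "Decision support for loan underwriting by trained credit officers",
    "Out-of-scope use": "Employment screening; fully automated rejection without human review",
    "Training data summary": "Internal loan outcomes 2018-2023, EU applicants only",
    "Performance metrics": "AUC 0.81 overall; recall gap of 4pp between age bands (under review)",
    "Limitations": "Degrades for applicants with under 6 months of credit history",
    "Privacy properties": "No differential privacy applied; membership-inference test AUC 0.52",
    "Human oversight": "Credit officer must review all declines; overrides logged",
    "Update history": "Retrained quarterly; last retrain 2025-01-02",
}

def render_model_card(card: dict[str, str]) -> str:
    """Render the card as one markdown section per field."""
    return "\n".join(f"## {section}\n\n{content}\n" for section, content in card.items())
```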
Organisations operating multiple AI systems should maintain a central register:
| Field | Description |
|---|---|
| System ID | Unique identifier |
| System name | Human-readable name |
| AI Act classification | Unacceptable / High / Limited / Minimal |
| Purpose | Specific processing purpose |
| Data subjects affected | Categories and estimated numbers |
| Personal data processed | At training and inference |
| Decision authority | AI decision-support vs. automated decision |
| Transparency measures | Notification, explanation, documentation |
| Deployer | Internal / External deployment |
| Registration date | EU AI Act database registration (if high-risk) |
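The register maps naturally onto a flat tabular store (a spreadsheet, CSV, or a table in a GRC tool). A sketch of one illustrative entry plus a check that every high-risk system carries an EU database registration date; all values are placeholders:

```python
register = [
    {
        "system_id": "AIS-0007",
        "system_name": "CV pre-screening assistant",
        "ai_act_classification": "High",
        "purpose": "Ranking job applications for recruiter review",
        "data_subjects": "Job applicants (approx. 20,000/year)",
        "personal_data": "CV contents (inference); historical hiring outcomes (training)",
        "decision_authority": "Decision support",
        "transparency_measures": "Candidate notice; explanation on request; model card",
        "deployer": "Internal",
        "registration_date": "2025-03-01",
    },
]

def unregistered_high_risk(entries: list[dict]) -> list[str]:
    """High-risk systems missing an EU AI Act database registration date."""
    return [e["system_id"] for e in entries
            if e["ai_act_classification"] == "High" and not e.get("registration_date")]
```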
Techniques for providing Art. 13(2)(f) "meaningful information about the logic":
| Technique | Best For | Limitation |
|---|---|---|
| Feature importance (SHAP, LIME) | Identifying key variables | May oversimplify complex interactions |
| Decision rules extraction | Converting model logic to human-readable rules | Loss of accuracy for complex models |
| Partial dependence plots | Showing how features affect predictions | Assumes feature independence |
| Counterfactual explanations | Showing what change would lead to different outcome | Computationally expensive for many features |
| Attention visualisation | Transformer models — showing what the model focuses on | Attention does not always equal importance |
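As an illustration of the first technique, the `shap` package can produce the kind of global "main factors" summary a Layer 2 or Layer 3 notice can draw on. A sketch with a toy model and synthetic data; the feature names and data are placeholders:

```python
# A sketch of global feature importance with SHAP. Assumes the `shap` and
# `scikit-learn` packages; model, data, and feature names are toy placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["income_stability", "existing_debt", "repayment_history", "account_age"]

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_matrix = explainer.shap_values(X)   # one row of feature contributions per sample

# Mean absolute contribution per feature = a global importance ranking,
# usable as the "main factors" summary in a layered notice.
importance = np.abs(shap_matrix).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```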
For Art. 22 right to explanation of individual decisions:
| Technique | Description | Use Case |
|---|---|---|
| LIME | Local Interpretable Model-agnostic Explanations | Any model — approximate local behaviour with interpretable model |
| SHAP values | Shapley Additive Explanations for individual predictions | Feature contribution to specific prediction |
| Counterfactual | "You were denied because X; if X were Y, outcome would be different" | Credit, hiring, insurance decisions |
| Anchors | Sufficient conditions for a prediction | Rule-based explanation of individual case |
| Concept-based | High-level concepts that influenced the decision | When features are not directly interpretable |
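For a single decision, the same per-instance SHAP values can be turned into a plain-language feature-contribution statement of the kind these techniques aim at. A sketch continuing from the global-importance example above; the wording and the approve/decline mapping are placeholders, and a true counterfactual would additionally search for the smallest input change that flips the outcome:

```python
def explain_decision(i: int, top_k: int = 3) -> str:
    """Plain-language explanation of one prediction from its SHAP values."""
    ranked = sorted(zip(feature_names, shap_matrix[i]), key=lambda t: -abs(t[1]))[:top_k]
    outcome = "approved" if model.predict(X[i:i + 1])[0] == 1 else "declined"
    factors = "; ".join(f"{name} {'raised' if c > 0 else 'lowered'} the score"
                        for name, c in ranked)
    return (f"Your application was {outcome}. The main factors were: {factors}. "
            f"You can request a human review of this decision.")

print(explain_decision(0))
```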