Guides DPIA and EU AI Act conformity assessments for AI systems processing personal data. Covers GDPR Art. 6/9/22, training data lawfulness, automated decision-making, bias detection, and NIST AI RMF.
```
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin privacy-impact-assessment-skills
```

This skill uses the workspace's default tool permissions.
AI systems that process personal data require a combined privacy and conformity assessment addressing both GDPR obligations and the EU AI Act (Regulation 2024/1689). This skill integrates the GDPR Art. 35 DPIA framework with AI-specific risk assessment, encompassing training data lawfulness, Art. 22 automated decision-making implications, algorithmic fairness, and the NIST AI Risk Management Framework MAP function. The assessment methodology draws from the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal and subsequent EDPB Guidelines 06/2025 on AI and data protection.
| Provision | Application to AI Systems |
|---|---|
| Art. 5(1)(a) — Lawfulness, fairness, transparency | AI processing must have a lawful basis; the logic of AI decisions must be explainable to data subjects |
| Art. 5(1)(b) — Purpose limitation | Training data collected for one purpose cannot be used to train AI models for an incompatible purpose without further lawful basis |
| Art. 5(1)(c) — Data minimisation | AI models should not require more personal data than necessary; synthetic data and anonymisation should be considered |
| Art. 5(1)(d) — Accuracy | AI outputs affecting individuals must be accurate; model drift must be monitored |
| Art. 6(1) — Lawful basis | Each stage of AI processing (data collection, model training, inference, output use) requires a lawful basis |
| Art. 9 — Special categories | Training on health, biometric, genetic, racial, political, religious, sexual orientation, or trade union data requires an Art. 9(2) exemption |
| Art. 13-14 — Transparency | Privacy notices must disclose the existence of automated decision-making, meaningful information about the logic involved, and the significance and envisaged consequences |
| Art. 22 — Automated decision-making | Data subjects have the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them, with exceptions under Art. 22(2) |
| Art. 25 — Data protection by design | AI systems must embed privacy protections from the design phase: privacy-preserving ML techniques, differential privacy, federated learning |
| Art. 35 — DPIA | AI systems meeting EDPB WP248rev.01 criteria (evaluation/scoring, automated decision-making, innovative technology) require a DPIA |
| Risk Category | Description | AI Act Requirements | GDPR Overlap |
|---|---|---|---|
| Unacceptable Risk (Art. 5) | AI practices prohibited outright: social scoring by public authorities, real-time remote biometric identification in public spaces (with exceptions), emotion recognition in workplace/education, untargeted scraping for facial recognition databases | Prohibited — cannot be deployed | Art. 9 special categories, Art. 35 DPIA |
| High Risk (Annex III) | AI systems in listed areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice | Conformity assessment, risk management system, data governance, transparency, human oversight, accuracy/robustness/cybersecurity | Art. 22, Art. 35 DPIA, Art. 25 DPbD |
| Limited Risk (Art. 50) | AI systems with specific transparency obligations: chatbots, emotion recognition, deep fakes | Transparency obligations (notify users they are interacting with AI) | Art. 13-14 transparency |
| Minimal Risk | All other AI systems | No specific AI Act obligations; voluntary codes of practice | Standard GDPR compliance |
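The triage in this table can be made repeatable by encoding it as a screening helper. The sketch below is a minimal illustration under assumed inputs, not a legal determination: the `PROHIBITED_PRACTICES`, `ANNEX_III_AREAS`, and `TRANSPARENCY_ONLY` sets are hypothetical simplifications of Art. 5, Annex III, and Art. 50, and borderline systems still require legal review.

```python
# Hypothetical triage helper: maps declared use-case tags to an AI Act risk
# tier. A screening simplification only; legal review is still required.

PROHIBITED_PRACTICES = {          # Art. 5 (non-exhaustive)
    "social_scoring_public_authority",
    "realtime_remote_biometric_id_public",
    "emotion_recognition_workplace_education",
    "untargeted_facial_recognition_scraping",
}

ANNEX_III_AREAS = {               # Annex III high-risk areas (non-exhaustive)
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

TRANSPARENCY_ONLY = {"chatbot", "emotion_recognition", "deep_fake"}  # Art. 50


def classify_risk_tier(use_case_tags: set[str]) -> str:
    """Return the strictest applicable AI Act risk tier for a system."""
    if use_case_tags & PROHIBITED_PRACTICES:
        return "unacceptable"     # deployment prohibited outright
    if use_case_tags & ANNEX_III_AREAS:
        return "high"             # conformity assessment + Art. 35 DPIA
    if use_case_tags & TRANSPARENCY_ONLY:
        return "limited"          # Art. 50 transparency obligations
    return "minimal"              # voluntary codes of practice


print(classify_risk_tier({"employment", "chatbot"}))  # -> "high"
```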
Key recommendations from the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal:

- A general ban on any use of AI for automated recognition of human features in publicly accessible spaces (faces, gait, fingerprints, DNA, voice, keystrokes, and other biometric or behavioural signals)
- A ban on AI systems that use biometrics to categorise individuals into clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited
- A ban on the use of AI to infer the emotions of a natural person, except for narrowly defined purposes such as health
- Alignment of the AI Act's conformity assessment with existing GDPR obligations, with data protection authorities designated as national supervisory authorities for AI
Once the AI system's risk category under the AI Act is determined (see the classification table above), assess the lawfulness of the training data:
| Assessment Question | Requirement |
|---|---|
| Was the training data collected with a lawful basis under Art. 6(1)? | Each data source must have an identified lawful basis |
| Is the use of data for AI training compatible with the original collection purpose? | Art. 6(4) compatibility assessment or Art. 5(1)(b) further processing analysis |
| Was consent obtained for AI training specifically? | If relying on Art. 6(1)(a), consent must be specific, informed, and freely given for the AI training purpose |
| If using legitimate interest, has a balancing test been conducted? | Art. 6(1)(f) requires documented LIA including AI-specific impacts |
| Does training data include special categories? | Art. 9(2) exemption required; Art. 9(2)(j) scientific research may apply with safeguards |
| Assessment Area | Requirements |
|---|---|
| Representativeness | Training data must be representative of the population the AI system will be applied to. Underrepresentation of demographic groups must be identified and addressed. |
| Label accuracy | If supervised learning, labels must be accurate and free from historical bias. Human labellers must be trained on anti-discrimination principles. |
| Temporal validity | Training data must reflect current conditions. Stale training data can produce discriminatory outputs. |
| Proxy variables | Identify features that may serve as proxies for protected characteristics (postcode as proxy for ethnicity, name as proxy for gender). |
| Data provenance | Document the source, collection methodology, and processing history for all training data. |
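The proxy-variables row above can be operationalised with a simple association screen: any feature strongly associated with a protected attribute is a candidate proxy. The sketch below uses Cramér's V over categorical columns; the DataFrame, column names, and the 0.3 threshold are hypothetical assumptions, and a flagged feature warrants investigation rather than automatic removal.

```python
# Sketch: flag candidate proxy variables by measuring association between
# each feature and a protected attribute. Column names are hypothetical.
import numpy as np
import pandas as pd


def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V association between two categorical series (0..1)."""
    table = pd.crosstab(x, y).to_numpy()
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, k = table.shape
    return float(np.sqrt((chi2 / n) / max(min(r - 1, k - 1), 1)))


def flag_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.3):
    """Return features whose association with `protected` exceeds threshold."""
    return {
        col: v
        for col in df.columns
        if col != protected
        and (v := cramers_v(df[col], df[protected])) >= threshold
    }


# Example: postcode may surface as a proxy for ethnicity.
# flags = flag_proxies(training_df, protected="ethnicity")
```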
```
Does the AI system make decisions about individuals?
├─ NO → Art. 22 does not apply.
└─ YES → Continue.
         │
Are decisions based solely on automated processing?
├─ NO → Art. 22(1) does not apply, but Art. 13-14 transparency still applies.
│        (Note: "meaningful human involvement" must be genuine, not rubber-stamping.)
└─ YES → Continue.
         │
Do decisions produce legal effects or similarly significantly affect the individual?
├─ NO → Art. 22(1) does not apply.
└─ YES → Art. 22(1) applies. The individual has the right not to be subject
         to the decision unless an Art. 22(2) exception applies.
```
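For use in an assessment workflow, the tree can be expressed as a screening function. The sketch below is illustrative: the boolean inputs are hypothetical names, and answering them correctly still requires legal judgment (e.g. whether human involvement is genuine rather than rubber-stamping).

```python
# Sketch of the Art. 22 applicability test above. Inputs are hypothetical
# booleans supplied by the assessor, not machine-derivable facts.

def art_22_assessment(
    decides_about_individuals: bool,
    solely_automated: bool,
    significant_effect: bool,
) -> str:
    """Walk the decision tree above and return the applicable outcome."""
    if not decides_about_individuals:
        return "Art. 22 does not apply"
    if not solely_automated:
        return "Art. 22(1) does not apply; Art. 13-14 transparency applies"
    if not significant_effect:
        return "Art. 22(1) does not apply"
    return "Art. 22(1) applies; an Art. 22(2) exception is required"


# A fully automated loan refusal:
print(art_22_assessment(True, True, True))
```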
| Exception | Requirements |
|---|---|
| Art. 22(2)(a) — Necessary for contract | Decision must be necessary for entering into or performing a contract with the data subject |
| Art. 22(2)(b) — Authorised by law | Union or Member State law must authorise the decision and provide suitable safeguards |
| Art. 22(2)(c) — Explicit consent | Data subject has given explicit consent to the automated decision |
Even where an Art. 22(2) exception applies, the controller must implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests (Art. 22(3)), including at least:

- The right to obtain human intervention on the part of the controller
- The right to express their point of view
- The right to contest the decision

Where the decision involves special category data, Art. 22(4) additionally requires an Art. 9(2)(a) or 9(2)(g) basis plus suitable safeguards.
| Metric | Description | Application |
|---|---|---|
| Demographic parity | Positive outcome rates should be equal across protected groups | Credit scoring, hiring |
| Equalized odds | True positive and false positive rates should be equal across groups | Criminal risk assessment, fraud detection |
| Predictive parity | Positive predictive value should be equal across groups | Medical diagnosis, recidivism prediction |
| Individual fairness | Similar individuals should receive similar outcomes | Loan pricing, insurance premium calculation |
| Counterfactual fairness | Outcome should not change if only the protected characteristic changes | Any decision-making system |
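The first two metrics in this table are straightforward to compute from model outputs. The sketch below is a minimal NumPy illustration: `y_true`, `y_pred`, and `group` are hypothetical arrays of labels, binary predictions, and protected-group membership per individual.

```python
# Sketch: compute demographic parity and equalized odds gaps from model
# outputs. Arrays are hypothetical illustrations.
import numpy as np


def demographic_parity_gap(y_pred, group):
    """Max difference in positive-outcome rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def equalized_odds_gap(y_true, y_pred, group):
    """Max differences in true positive and false positive rates across groups."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # true positive rate
        fprs.append(y_pred[m & (y_true == 0)].mean())  # false positive rate
    return max(tprs) - min(tprs), max(fprs) - min(fprs)


y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))      # -> 0.0
print(equalized_odds_gap(y_true, y_pred, group))  # -> (TPR gap, FPR gap)
```

Note that several of these metrics are mutually incompatible in general: enforcing demographic parity and predictive parity simultaneously is impossible when base rates differ across groups, so the choice of metric must be justified in the DPIA.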
The NIST AI Risk Management Framework (AI RMF 1.0, January 2023) MAP function identifies the context, capabilities, and potential impacts of AI systems. Integrate the following MAP subcategories:
| Subcategory | Assessment Action |
|---|---|
| MAP 1.1 | Document the intended purpose, context of use, and deployment environment |
| MAP 1.2 | Document interdependent and interconnected systems |
| MAP 1.5 | Identify intended users, affected individuals, and stakeholders |
| MAP 1.6 | Assess impacts on individuals, groups, communities, organisations, and society |
| MAP 2.1 | Establish the AI system's knowledge limits and conditions where it may fail |
| MAP 2.2 | Document scientific integrity of training and testing methodologies |
| MAP 2.3 | Assess environmental impact of AI system training and deployment |
| MAP 3.1 | Document potential benefits of the AI system |
| MAP 3.2 | Document potential costs, risks, and negative impacts |
| MAP 3.4 | Map risks specifically to affected communities and stakeholders |
| MAP 3.5 | Document likelihood and severity of identified risks |
| MAP 5.1 | Engage with diverse stakeholders and affected communities |
| MAP 5.2 | Engage with domain experts, AI practitioners, and sociotechnical experts |
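To evidence these subcategories in the DPIA file, each row can be tracked as a structured record. The sketch below is one possible shape; the field names are hypothetical conveniences, not NIST-mandated terminology.

```python
# Sketch: a lightweight record for evidencing NIST AI RMF MAP subcategories
# in the DPIA file. Field names are hypothetical, not NIST-mandated.
from dataclasses import dataclass, field


@dataclass
class MapRecord:
    subcategory: str          # e.g. "MAP 1.1"
    assessment_action: str    # action from the table above
    evidence: list[str] = field(default_factory=list)  # links to artefacts
    owner: str = ""           # accountable role
    status: str = "open"      # open / in_review / complete


record = MapRecord(
    subcategory="MAP 3.5",
    assessment_action="Document likelihood and severity of identified risks",
    evidence=["risk-register.xlsx#ai-credit-scoring"],
    owner="DPO",
)
```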
| Risk Category | Description | Mitigation Approach |
|---|---|---|
| Model inversion | Attacker reconstructs training data from model outputs | Differential privacy during training, output perturbation, access controls on model API |
| Membership inference | Attacker determines whether a specific individual's data was in the training set | Regularisation, differential privacy, limiting model confidence scores |
| Data poisoning | Malicious manipulation of training data to bias model outputs | Training data provenance verification, anomaly detection, robust training techniques |
| Concept drift | Model accuracy degrades over time as real-world data distribution changes | Continuous monitoring, automated retraining triggers, human review of edge cases |
| Explanation manipulation | Gaming of AI explanations to hide discriminatory factors | Multiple explanation methods, adversarial testing of explanation consistency |
| Feedback loops | AI decisions create data that reinforces existing biases | Regular bias auditing, human-in-the-loop review, outcome monitoring by demographic group |
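The membership inference risk above can be audited empirically: if the model is systematically more confident on training records than on held-out records, training-set membership leaks through its outputs. The sketch below is a minimal confidence-thresholding audit under assumed inputs; `train_conf` and `test_conf` are hypothetical arrays of the model's confidence in the true class for member and non-member records.

```python
# Sketch of a confidence-based membership-inference audit. An AUC near 0.5
# means confidence alone does not reveal membership; values well above 0.5
# indicate membership risk. Ties are broken arbitrarily (fine for a sketch).
import numpy as np


def membership_inference_auc(train_conf: np.ndarray, test_conf: np.ndarray) -> float:
    """AUC of predicting 'was this record in the training set?' from confidence."""
    scores = np.concatenate([train_conf, test_conf])
    labels = np.concatenate([np.ones_like(train_conf), np.zeros_like(test_conf)])
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(train_conf), len(test_conf)
    # Mann-Whitney U statistic normalised to an AUC.
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


# train_conf / test_conf: model confidence in the true class for member and
# non-member records respectively.
# auc = membership_inference_auc(train_conf, test_conf)
```

An audit AUC materially above 0.5 strengthens the case for the mitigations listed in the table, such as differential privacy during training or limiting exposed confidence scores.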