Implements GDPR Art. 22 (automated decision-making) and AI Act Art. 14 (human oversight) compliance for AI systems. Covers identification of solely automated decisions, intervention design, logic explanations, and contestation procedures.
`npx claudepluginhub mukul975/privacy-data-protection-skills --plugin ai-privacy-governance-skills`

This skill uses the workspace's default tool permissions.
GDPR Article 22 grants data subjects the right not to be subject to decisions based solely on automated processing, including profiling, which produce legal or similarly significant effects. The EU AI Act Art. 14 supplements this with specific human oversight design requirements for high-risk AI systems. Together, these provisions require organisations to identify when AI systems make consequential decisions, ensure meaningful human intervention where required, provide explainable decision logic, and offer effective contestation mechanisms. This skill provides the complete framework for Art. 22 compliance and AI Act human oversight implementation.
Art. 22(1) is triggered only when all three conditions are met:
| Condition | Requirement | AI Application |
|---|---|---|
| 1. Decision | A decision is made (not merely a recommendation or input) | The AI output directly determines an outcome — no genuine human decision-making step between AI output and action |
| 2. Solely automated | Based solely on automated processing including profiling | No meaningful human intervention in the decision chain; rubber-stamping does not constitute human intervention |
| 3. Legal/significant effects | Produces legal effects or similarly significantly affects the data subject | Affects legal rights, contractual status, access to services, financial outcomes, or other significant life impacts |
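The three-part test lends itself to a simple screening helper for an AI decision inventory. The sketch below is illustrative only and not part of the skill's published interface; the `DecisionRecord` fields are assumptions about what such an inventory would capture.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Illustrative metadata captured for each AI-assisted decision flow."""
    produces_decision: bool             # AI output directly determines the outcome
    meaningful_human_review: bool       # genuine assessment, not rubber-stamping
    legal_or_significant_effect: bool   # legal rights, services, finances, etc.

def art22_applies(record: DecisionRecord) -> bool:
    """Art. 22(1) is engaged only when all three conditions hold."""
    return (
        record.produces_decision
        and not record.meaningful_human_review
        and record.legal_or_significant_effect
    )

# Example: a credit-scoring flow with no human review of individual decisions
credit_scoring = DecisionRecord(
    produces_decision=True,
    meaningful_human_review=False,
    legal_or_significant_effect=True,
)
assert art22_applies(credit_scoring)  # an Art. 22(2) exception is now needed
```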
The Article 29 Working Party Guidelines on automated individual decision-making and profiling (WP251rev.01), endorsed by the EDPB, clarify:
- "Solely automated" means there is no meaningful human involvement in the decision process
- A human who merely confirms an AI recommendation without genuine assessment is not providing meaningful intervention
- Meaningful human intervention requires a reviewer with the authority and competence to change the decision, who genuinely assesses all relevant inputs (see the meaningful-intervention criteria below)
- A decision is not solely automated when such a review takes place before the outcome is applied to the data subject

Examples of decisions with legal or similarly significant effects:
| Category | Examples | Significance |
|---|---|---|
| Legal effects | Contract formation/termination, legal obligation imposition, legal status determination | Directly affects legal rights |
| Access to services | Denial of credit, insurance, housing, education, employment | Significantly affects life circumstances |
| Financial impact | Pricing discrimination, benefit calculation, payment terms | Material financial consequences |
| Health and safety | Medical diagnosis prioritisation, emergency response triage | Potential physical harm |
| Freedom and autonomy | Surveillance scoring, movement restriction, content blocking | Affects fundamental freedoms |
Per the same guidance, routine effects such as standard targeted advertising will not normally be similarly significant, although they may be where the data subjects are vulnerable or the profiling is particularly intrusive.

Art. 22(2) provides three exceptions to the general prohibition:
| Exception | Condition | Required Safeguards |
|---|---|---|
| Art. 22(2)(a) — Contract necessity | Decision is necessary for entering into or performance of a contract | Art. 22(3) safeguards required |
| Art. 22(2)(b) — Law authorisation | Authorised by Union or Member State law with suitable measures | Law must provide suitable safeguards |
| Art. 22(2)(c) — Explicit consent | Based on explicit consent | Art. 22(3) safeguards required |
When an Art. 22(2) exception is relied upon, the controller must implement at least the Art. 22(3) safeguards:
- the right to obtain human intervention on the part of the controller
- the right for the data subject to express their point of view
- the right to contest the decision
Under Art. 22(4), automated decisions based on Art. 9 special category data are only permitted under:
- Art. 9(2)(a): explicit consent, or
- Art. 9(2)(g): substantial public interest, on the basis of Union or Member State law
In both cases, suitable measures to safeguard data subject rights must be in place.
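As a hypothetical sketch, the exception and safeguard logic above could be checked programmatically during an AI system review; the enum values, field names, and safeguard labels are assumptions for illustration, not the skill's actual API.

```python
from enum import Enum
from typing import Optional

class Art22Exception(Enum):
    CONTRACT_NECESSITY = "22(2)(a)"
    LAW_AUTHORISATION = "22(2)(b)"
    EXPLICIT_CONSENT = "22(2)(c)"

def validate_lawful_basis(
    exception: Optional[Art22Exception],
    uses_special_category_data: bool,
    art9_basis: Optional[str],      # "9(2)(a)" or "9(2)(g)" when applicable
    safeguards: set,                # e.g. {"human_intervention", "express_view", "contest"}
) -> list:
    """Return a list of compliance gaps for a solely automated decision."""
    gaps = []
    if exception is None:
        gaps.append("No Art. 22(2) exception identified: processing is prohibited")
    required = {"human_intervention", "express_view", "contest"}
    missing = required - safeguards
    if missing:
        gaps.append(f"Missing Art. 22(3) safeguards: {sorted(missing)}")
    if uses_special_category_data and art9_basis not in {"9(2)(a)", "9(2)(g)"}:
        gaps.append("Art. 22(4): special category data requires Art. 9(2)(a) or 9(2)(g)")
    return gaps
```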
Under AI Act Art. 14, high-risk AI systems must be designed and developed so that natural persons can effectively oversee them during the period in which they are in use. The oversight measures must enable the persons assigned to oversight to:
| Requirement | Implementation |
|---|---|
| Understand capabilities and limitations | Documentation, training, model cards |
| Monitor operation | Real-time monitoring dashboards, alert systems |
| Detect anomalies and dysfunction | Drift detection, performance monitoring |
| Interpret outputs correctly | Confidence indicators, explanation tools |
| Override or reverse decisions | Override mechanism with authority chain |
| Intervene or stop the system | Emergency stop capability |
| Be aware of automation bias | Training on automation bias, countermeasures |
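These requirements map naturally onto a design-review checklist. The sketch below is an assumed structure for recording how each Art. 14 capability is implemented; the field names are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class Art14OversightDesign:
    """One field per Art. 14 capability; each value describes the implementing measure."""
    understand_capabilities: str = ""   # e.g. model cards, operator training
    monitor_operation: str = ""         # e.g. real-time dashboards, alerts
    detect_anomalies: str = ""          # e.g. drift and performance monitoring
    interpret_outputs: str = ""         # e.g. confidence indicators, explanation tools
    override_decisions: str = ""        # e.g. override workflow with authority chain
    stop_system: str = ""               # e.g. emergency stop capability
    automation_bias_measures: str = ""  # e.g. reviewer training, countermeasures

def oversight_gaps(design: Art14OversightDesign) -> list:
    """Capabilities with no documented implementing measure."""
    return [f.name for f in fields(design) if not getattr(design, f.name).strip()]
```

These design capabilities can then be exercised at different levels of human involvement: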
| Level | Description | Art. 22 Compliance | Appropriate When |
|---|---|---|---|
| Human-in-the-loop (HITL) | Human reviews every AI recommendation before decision | Fully compliant if review is meaningful | High-stakes individual decisions (hiring, credit, medical) |
| Human-on-the-loop (HOTL) | Human monitors AI decisions and can intervene | Compliant if intervention capability is genuine and exercised | Medium-risk decisions with effective monitoring |
| Human-in-command (HIC) | Human sets parameters and reviews outcomes periodically | May not satisfy Art. 22 — decision is solely automated | Low-risk bulk decisions with periodic audit |
| Fully autonomous | No human oversight of individual decisions | Art. 22 applies fully — exception needed | Only where Art. 22(2) exception applies with Art. 22(3) safeguards |
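The table can also be expressed as a small lookup when tagging systems in an AI inventory; the enum and the posture summaries below are illustrative assumptions condensed from the table above.

```python
from enum import Enum

class OversightLevel(Enum):
    HITL = "human-in-the-loop"
    HOTL = "human-on-the-loop"
    HIC = "human-in-command"
    AUTONOMOUS = "fully autonomous"

# Assumed one-line summary of the Art. 22 posture for each level (see table above)
ART22_POSTURE = {
    OversightLevel.HITL: "Not solely automated if each review is meaningful",
    OversightLevel.HOTL: "Defensible only if intervention is genuine and actually exercised",
    OversightLevel.HIC: "Individual decisions remain solely automated; Art. 22 likely applies",
    OversightLevel.AUTONOMOUS: "Art. 22 applies in full; an Art. 22(2) exception is required",
}
```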
A human review qualifies as "meaningful intervention" only when all of the following criteria are met:
| Criterion | Test | Red Flag |
|---|---|---|
| Authority | Reviewer has formal authority to override AI | Reviewer can only escalate, not decide |
| Competence | Reviewer has domain expertise to evaluate the decision | Reviewer is a junior staff member without training |
| Information | Reviewer has access to all inputs, the AI output, and explanation | Reviewer sees only AI score with no context |
| Time | Sufficient time allocated for genuine consideration | Reviewer processes 200+ decisions per hour |
| Independence | Reviewer exercises genuine judgment | Override rate is < 1% suggesting rubber-stamping |
| Accountability | Reviewer is accountable for the decision | Accountability rests with the AI system owner, not reviewer |
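The time and independence criteria lend themselves to simple monitoring metrics. The thresholds in the sketch below mirror the red flags in the table (override rate below 1%, 200+ decisions per hour), but the exact values are assumptions to be calibrated per deployment.

```python
def review_quality_flags(
    decisions_reviewed: int,
    overrides: int,
    review_hours: float,
    min_override_rate: float = 0.01,   # < 1% suggests rubber-stamping
    max_per_hour: float = 200.0,       # 200+ decisions/hour leaves no time for genuine review
) -> list:
    """Flag patterns suggesting the human review is not meaningful intervention."""
    flags = []
    if decisions_reviewed == 0:
        return ["No reviewed decisions in the period"]
    override_rate = overrides / decisions_reviewed
    if override_rate < min_override_rate:
        flags.append(f"Override rate {override_rate:.2%}: possible rubber-stamping")
    throughput = decisions_reviewed / review_hours if review_hours else float("inf")
    if throughput >= max_per_hour:
        flags.append(f"{throughput:.0f} decisions/hour: insufficient time per case")
    return flags

# Example: 2,400 decisions reviewed in 10 hours with 5 overrides trips both flags
print(review_quality_flags(decisions_reviewed=2400, overrides=5, review_hours=10))
```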
An effective contestation mechanism (giving effect to the Art. 22(3) right to contest the decision) should include:
| Element | Requirement |
|---|---|
| Accessibility | Contestation mechanism is easy to find, access, and use |
| Timeliness | Defined response timeframe (e.g., 30 days) |
| Qualified reviewer | Different from the original decision context; has authority to overturn |
| Information provision | Data subject receives explanation of decision factors and how to contest |
| Evidence consideration | Data subject can submit additional evidence and context |
| Written outcome | Decision on contestation is documented and communicated |
| Further appeal | If contestation is denied, path to DPA complaint or judicial remedy is indicated |
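A contestation case could be tracked against these elements with a structure like the following sketch; the 30-day response window and field names are assumptions used for illustration, not figures prescribed by Art. 22.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class ContestationCase:
    received: date
    decision_explained: bool = False        # factors behind the decision were provided
    evidence_submitted: list = field(default_factory=list)
    reviewer: Optional[str] = None          # must differ from the original decision context
    outcome: Optional[str] = None           # written outcome, e.g. "upheld" / "overturned"
    appeal_route_given: bool = False        # DPA complaint / judicial remedy signposted

    def overdue(self, today: date, response_days: int = 30) -> bool:
        """True if the (assumed) response window has elapsed without a written outcome."""
        return self.outcome is None and today > self.received + timedelta(days=response_days)
```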
Profiling (GDPR Art. 4(4)) is any form of automated processing of personal data used to evaluate personal aspects relating to a natural person, in particular to analyse or predict aspects concerning performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements. Common profiling applications and their Art. 22 implications:
| Profiling Type | Risk Level | Art. 22 Trigger | Mitigation |
|---|---|---|---|
| Behavioural prediction (purchasing, browsing) | Medium | Only if decision with legal/significant effect | Opt-out, transparency |
| Credit scoring / financial risk | High | Yes — access to financial services | Human review, explanation, contestation |
| Health risk prediction | Very High | Yes — Art. 22(4) applies | Explicit consent, physician oversight |
| Criminal risk assessment | Very High | Yes — liberty and legal effects | Legal basis required, judicial oversight |
| Employment performance scoring | High | Yes — employment effects | HR human review, employee notification |
| Social scoring | Prohibited | N/A — AI Act Art. 5 prohibition | Do not implement |
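The table can be expressed as a lookup that an intake workflow consults, refusing prohibited use cases outright. The category names and mitigation labels below are illustrative condensations of the table, not a normative taxonomy.

```python
# Illustrative mapping of profiling categories to required mitigations
PROFILING_CONTROLS = {
    "behavioural_prediction": {"risk": "medium", "mitigations": ["opt-out", "transparency"]},
    "credit_scoring": {"risk": "high", "mitigations": ["human review", "explanation", "contestation"]},
    "health_risk": {"risk": "very high", "mitigations": ["explicit consent", "physician oversight"]},
    "criminal_risk": {"risk": "very high", "mitigations": ["legal basis", "judicial oversight"]},
    "employment_scoring": {"risk": "high", "mitigations": ["HR human review", "employee notification"]},
    "social_scoring": {"risk": "prohibited", "mitigations": []},
}

def required_controls(category: str) -> list:
    """Return the mitigations for a profiling category, refusing prohibited practices."""
    entry = PROFILING_CONTROLS[category]
    if entry["risk"] == "prohibited":
        raise ValueError("Social scoring is prohibited under AI Act Art. 5: do not implement")
    return entry["mitigations"]
```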