Manages GDPR Article 22 rights for automated decision-making and profiling: identifies decisions, implements human oversight, explains logic, enables contestation. For AI decision queries.
```shell
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin privacy-skills-complete
```

This skill uses the workspace's default tool permissions.
GDPR Article 22 provides data subjects with the right not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning them or similarly significantly affect them. This skill covers the identification of automated decision-making, implementation of meaningful human intervention, explanation of logic, and contestation procedures.
Art. 22(1) — The data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
Art. 22(2) — Exceptions: Art. 22(1) does not apply if the decision:
- (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
- (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or
- (c) is based on the data subject's explicit consent.
Art. 22(3) — Where exceptions (a) or (c) apply, the controller must implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to:
- obtain human intervention on the part of the controller;
- express their point of view; and
- contest the decision.
Art. 22(4) — Decisions under Art. 22(2) shall not be based on special categories of data under Art. 9(1) unless Art. 9(2)(a) or (g) applies and suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place.
"Profiling" means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.
The Article 29 Working Party (now EDPB) Guidelines adopted on 6 February 2018 (WP251 rev.01) provide authoritative interpretation, distinguishing:
- general profiling;
- decision-making based on profiling (with human involvement); and
- solely automated decision-making, including profiling, which falls within the scope of Art. 22.
For each automated processing activity, assess:
Is the decision solely automated? No meaningful human intervention in the decision process. Per WP251 rev.01, human involvement must be more than a rubber stamp — the reviewer must have authority, competence, and genuinely consider the automated output before reaching their own decision.
Does the decision produce legal effects? Examples: denial of a loan application, termination of a contract, refusal of social security benefit, denial of entry to a country.
Does the decision similarly significantly affect the data subject? Examples: automatic rejection of an online credit application, automated recruitment screening that excludes candidates, differential pricing that materially affects purchasing power, automated insurance risk assessment resulting in premium increases exceeding 20%.
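The three-part assessment above can be expressed as a simple boolean screen. This is an illustrative sketch, not part of the skill itself; the `ProcessingActivity` fields and the `art22_applies` helper are assumed names:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Minimal record for an Art. 22 applicability screen (illustrative fields)."""
    name: str
    solely_automated: bool        # no meaningful human intervention in the decision
    legal_effect: bool            # e.g. loan denial, refusal of a benefit
    similarly_significant: bool   # e.g. automated rejection of a credit application

def art22_applies(activity: ProcessingActivity) -> bool:
    """Art. 22(1) is engaged only when the decision is solely automated
    AND produces a legal or similarly significant effect."""
    return activity.solely_automated and (
        activity.legal_effect or activity.similarly_significant
    )

# Anomaly detection that only triggers human review is out of scope;
# solely automated client risk scoring that gates service access is in scope.
anomaly = ProcessingActivity("Anomaly detection", True, False, False)
scoring = ProcessingActivity("Client risk scoring", True, True, False)
print(art22_applies(anomaly))   # False
print(art22_applies(scoring))   # True
```

Note that the screen is conjunctive: a fully automated process with no legal or similarly significant effect (such as automated invoice processing) falls outside Art. 22 entirely.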
| Processing Activity | Solely Automated | Legal/Significant Effect | Art. 22 Applies | Exception |
|---|---|---|---|---|
| Client risk scoring for onboarding | Yes | Yes — determines service access | Yes | Art. 22(2)(a) — Necessary for contract |
| Anomaly detection in usage patterns | Yes | No — triggers human review only | No | N/A |
| Automated invoice processing | Yes | No — administrative function | No | N/A |
| Marketing segment assignment | Yes | No — does not produce legal or similarly significant effects | No | N/A |
| Fraud probability scoring | Yes | Yes — may result in account suspension | Yes | Art. 22(2)(a) — Necessary for contract |
Per WP251 rev.01, paragraph 21, human intervention must meet all of the following criteria:
- the review is carried out by someone with the authority to change the decision;
- the reviewer is competent to assess the automated output and the underlying data; and
- the reviewer genuinely considers all relevant information rather than routinely applying the automated result.
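The "more than a rubber stamp" test (authority, competence, genuine consideration of the inputs) can be enforced as a gate in a review workflow. A minimal sketch; the `HumanReview` record and helper name are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class HumanReview:
    reviewer_has_authority: bool   # reviewer can overturn the automated outcome
    reviewer_is_competent: bool    # trained on the model and the domain
    considered_all_inputs: bool    # assessed the underlying data, not just the score

def is_meaningful_intervention(review: HumanReview) -> bool:
    """A review failing any criterion is a rubber stamp: it does not take
    the decision outside the scope of Art. 22(1)."""
    return (review.reviewer_has_authority
            and review.reviewer_is_competent
            and review.considered_all_inputs)
```

In practice such a gate would sit in the case-handling system, blocking a decision from being recorded as "human-reviewed" unless all three flags are satisfied.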
For each Art. 22 decision, the controller should:
- document the applicable Art. 22(2) exception relied upon;
- implement the safeguards required by Art. 22(3), including meaningful human intervention;
- provide meaningful information about the logic involved; and
- establish a procedure for the data subject to contest the decision.
When a data subject exercises their right of access regarding automated decision-making, the controller must provide, per Art. 15(1)(h):
- confirmation of the existence of automated decision-making, including profiling, referred to in Art. 22(1) and (4);
- meaningful information about the logic involved; and
- the significance and the envisaged consequences of such processing for the data subject.
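An access-request disclosure under Art. 15(1)(h) can be assembled as a structured payload. The keys below are illustrative, not a statutory schema, and the descriptive strings are hypothetical examples:

```python
# Illustrative Art. 15(1)(h) disclosure payload for an access request.
# Field names are assumptions; the three required elements are the existence
# of automated decision-making, the logic involved, and the consequences.
access_response = {
    "automated_decision_making": True,
    "profiling": True,
    "logic_involved": (
        "Weighted risk score over company registration, financial, "
        "industry and behavioural factors"
    ),
    "significance_and_consequences": (
        "Scores below the decline threshold result in refusal of onboarding, "
        "subject to human review and the right to contest"
    ),
}
```

Keeping the disclosure as structured data rather than free text makes it easier to reuse the same content across Art. 13/14 privacy notices and Art. 15 access responses.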
"Meridian Analytics Ltd uses an automated risk scoring system to assess new client applications. The system evaluates the following factors:
- Company registration data: Age of the company, registered jurisdiction, and filing history (weighted approximately 30% of the overall score).
- Financial indicators: Reported revenue, credit reference agency data, and payment history from public sources (weighted approximately 40%).
- Industry risk classification: The sector in which the applicant operates, mapped against a regulatory risk index (weighted approximately 20%).
- Behavioural signals: Patterns in the application process itself, such as consistency of provided information (weighted approximately 10%).
The system produces a risk score from 0 to 100. Applications scoring below 35 are automatically flagged for enhanced due diligence review by a human analyst. Applications scoring below 15 are automatically declined, subject to review by a Senior Compliance Analyst within 48 hours.
This system has an overall accuracy rate of 91.3% based on quarterly validation against actual client outcomes. The false positive rate (incorrectly flagging low-risk clients as high-risk) is 6.2%, and the false negative rate (failing to flag genuinely high-risk clients) is 2.5%.
If your application is declined or flagged, you have the right to request human review, express your point of view, and contest the decision."
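The scoring and routing logic described in the example notice can be sketched as follows. This is a toy reconstruction of the fictional Meridian Analytics notice, using only the weights and thresholds stated above; all names are illustrative:

```python
# Factor weights as stated in the example notice (sum to 1.0).
WEIGHTS = {
    "registration": 0.30,   # company age, jurisdiction, filing history
    "financial": 0.40,      # revenue, credit reference data, payment history
    "industry": 0.20,       # sector mapped against a regulatory risk index
    "behavioural": 0.10,    # consistency signals from the application itself
}

def risk_score(factors: dict) -> float:
    """Weighted sum of factor sub-scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def route_application(score: float) -> str:
    """Thresholds from the notice: below 15 auto-decline (subject to senior
    compliance review within 48h), below 35 enhanced due diligence, otherwise
    standard processing."""
    if score < 15:
        return "auto-decline pending senior compliance review"
    if score < 35:
        return "enhanced due diligence by human analyst"
    return "standard onboarding"

print(route_application(10))  # auto-decline pending senior compliance review
print(route_application(25))  # enhanced due diligence by human analyst
print(route_application(60))  # standard onboarding
```

The auto-decline branch is the Art. 22 decision: it is solely automated at the moment it takes effect, so the 48-hour senior review and the contestation right in the notice are the Art. 22(3) safeguards in action.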