Determines controller-processor relationships for SaaS, embedded, and API-based AI services, and conducts privacy due diligence under the GDPR Art. 28 framework.
```shell
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin privacy-skills-complete
```

This skill uses the workspace's default tool permissions.
AI services create complex controller-processor relationships that differ significantly from traditional data processing arrangements. Whether an AI vendor is a processor, joint controller, or independent controller depends on the degree of autonomy the vendor has over personal data processing — particularly regarding model training on customer data, data retention for model improvement, and the vendor's independent purposes for the data. This skill provides the framework for determining controller-processor roles in AI service relationships, conducting privacy due diligence on AI vendors, and establishing appropriate contractual protections.
| AI Service Model | Typical Role | Key Factors | GDPR Article |
|---|---|---|---|
| SaaS AI — Customer data processed per instructions | Vendor = Processor | Vendor processes data solely on controller's instructions; no independent use | Art. 28 DPA required |
| SaaS AI — Customer data used for model training | Vendor = Joint Controller or Independent Controller | Vendor uses customer data for own model improvement beyond contracted service | Art. 26 JCA or separate controller notice |
| Embedded AI — Pre-trained model in customer infrastructure | Customer = Controller; Vendor may be a processor for support | Model runs in customer environment; vendor may access data for support/updates | Art. 28 if vendor accesses data |
| API-based AI — Customer sends data for inference | Vendor = Processor (if no data retention) or Joint Controller (if training on inputs) | Depends on whether vendor retains, uses, or trains on input data | Assessment required |
| AI Platform — Customer builds models on vendor platform | Vendor = Processor for infrastructure; Controller for platform data | Vendor provides compute; customer controls data and model | Art. 28 DPA + audit rights |
| AI Marketplace — Pre-built models with customer data | Depends on data flow | If customer data enters vendor model → joint controller assessment | Case-by-case |
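The decision logic in the table above can be sketched as a first-pass classifier. This is a minimal illustration, not legal advice: the class and field names are invented for the example, and a real determination is always case-by-case.

```python
from dataclasses import dataclass

@dataclass
class AIVendorProfile:
    """Hypothetical summary of an AI vendor's data practices (illustrative fields)."""
    follows_instructions_only: bool  # processes data solely on controller's instructions
    trains_on_customer_data: bool    # uses customer data for model improvement
    retains_inputs: bool             # keeps input data beyond the contracted service
    independent_purposes: bool       # pursues its own purposes with the data

def determine_role(vendor: AIVendorProfile) -> str:
    """Rough first-pass classification mirroring the table; not a substitute
    for a full legal assessment."""
    if vendor.independent_purposes:
        return "independent controller (separate controller notice needed)"
    if vendor.trains_on_customer_data:
        return "joint controller assessment (Art. 26 JCA candidate)"
    if vendor.follows_instructions_only and not vendor.retains_inputs:
        return "processor (Art. 28 DPA required)"
    return "unclear (full case-by-case assessment required)"
```

For example, a SaaS AI vendor that processes data strictly per instructions maps to the processor row, while a vendor training on customer inputs triggers the joint-controller assessment.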
For each AI vendor, document:
| Element | Documentation Required |
|---|---|
| AI capabilities | What AI functions does the vendor provide? |
| Personal data inputs | What personal data is sent to the vendor? |
| Personal data outputs | What personal data does the vendor return? |
| Data retention | Does the vendor retain input data? For how long? |
| Model training | Does the vendor train on customer data? |
| Sub-processors | Does the vendor use sub-processors for AI processing? Where? |
| Data location | Where is AI processing performed? What jurisdictions? |
| Security measures | What security controls protect data during AI processing? |
| Human review | Do vendor personnel access customer data? |
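A due-diligence record covering the elements above can be kept as structured data so gaps are machine-checkable. The record below is a sketch with invented vendor details and field names; adapt the schema to your own register.

```python
# Hypothetical due-diligence record mirroring the documentation table above.
# All values are illustrative, not real vendor facts.
vendor_due_diligence = {
    "vendor": "ExampleAI Inc.",
    "ai_capabilities": "text summarisation API",
    "personal_data_inputs": ["names", "email addresses"],
    "personal_data_outputs": ["summaries possibly containing names"],
    "data_retention": {"retains_inputs": True, "retention_days": 30},
    "model_training": {"trains_on_customer_data": False, "opt_out_available": True},
    "sub_processors": [{"name": "CloudHost", "location": "Ireland"}],
    "data_location": ["EU (Frankfurt)"],
    "security_measures": ["encryption in transit", "ISO 27001"],
    "human_review": False,
}

# Flag any element left undocumented before signing off on the vendor.
missing = [k for k, v in vendor_due_diligence.items() if v in (None, "", [], {})]
```

Running the gap check before approval makes "unknown" answers visible instead of silently blank table cells.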
Assess the risk of each vendor relationship:

| Risk Factor | Assessment | Risk Level |
|---|---|---|
| Data sensitivity | Special category data sent to AI vendor? | |
| Data volume | Volume of personal data processed by vendor | |
| Vendor data use | Vendor uses customer data for own purposes? | |
| International transfers | Data processed outside EU/EEA? | |
| Sub-processor chain | Number and location of sub-processors | |
| Security posture | Certifications (ISO 27001, SOC 2)? | |
| Incident history | Prior data breaches or enforcement actions? | |
| AI-specific risks | Model memorization, output leakage, bias? |
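One way to make the risk column above repeatable is a simple weighted score. The weights and thresholds below are placeholders for illustration only; a real methodology should follow your organisation's DPIA framework.

```python
# Illustrative weights for the risk factors in the table above; tune to your
# own DPIA methodology. Factor keys are invented for this sketch.
RISK_WEIGHTS = {
    "special_category_data": 3,
    "high_data_volume": 2,
    "vendor_own_purposes": 3,
    "non_eea_transfer": 2,
    "long_subprocessor_chain": 1,
    "no_certifications": 2,
    "prior_incidents": 2,
    "ai_specific_risks": 2,
}

def risk_score(findings: dict) -> tuple:
    """Sum weights for factors found present and bucket into low/medium/high."""
    score = sum(w for k, w in RISK_WEIGHTS.items() if findings.get(k))
    level = "high" if score >= 8 else "medium" if score >= 4 else "low"
    return score, level
```

A vendor receiving special category data outside the EEA would land in the medium band under these example weights; the point is consistency across vendors, not the specific numbers.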
Verify the contract contains the following protections:

| Contractual Element | Required? | Status |
|---|---|---|
| Art. 28 DPA or Art. 26 JCA | Yes | |
| Processing scope and purpose limitation | Yes | |
| Prohibition on data use beyond instructions (if processor) | Yes | |
| Model training opt-out | Yes (if vendor trains on data) | |
| Sub-processor notification and approval | Yes | |
| International transfer safeguards | If applicable | |
| Data deletion on termination | Yes | |
| Audit rights | Yes | |
| Breach notification obligations | Yes | |
| AI-specific: model privacy testing | Recommended | |
| AI-specific: bias assessment obligations | Recommended for high-risk | |
| AI-specific: output accuracy warranties | Recommended |
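The checklist above splits into unconditionally required clauses and clauses required only when a condition holds (e.g. the vendor trains on data). A small gap-check sketch, with clause identifiers invented for this example:

```python
# Clause identifiers are shorthand for the checklist rows above (illustrative).
REQUIRED = {
    "dpa_or_jca", "scope_purpose_limitation", "no_use_beyond_instructions",
    "subprocessor_approval", "deletion_on_termination", "audit_rights",
    "breach_notification",
}
# Conditionally required clauses mapped to the fact that triggers them.
CONDITIONAL = {"training_opt_out": "vendor_trains", "transfer_safeguards": "non_eea"}

def contract_gaps(clauses_present: set, facts: dict) -> set:
    """Return required clauses missing from the draft contract."""
    gaps = REQUIRED - clauses_present
    for clause, condition in CONDITIONAL.items():
        if facts.get(condition) and clause not in clauses_present:
            gaps.add(clause)
    return gaps
```

This keeps the "Required?" logic explicit: a training opt-out only surfaces as a gap when the vendor actually trains on customer data.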
Sample clause (prohibition on model training without consent):

The Processor shall not use Customer Data to train, improve, fine-tune, or
otherwise develop any machine learning model, algorithm, or AI system,
whether for the Customer's benefit or for any other purpose, without prior
written consent from the Customer. Any consent granted shall specify the
scope of permitted training, the data categories involved, and the privacy
safeguards to be applied.
Sample clause (output accuracy):

The Provider acknowledges that AI system outputs about identifiable data
subjects must comply with the accuracy principle under Art. 5(1)(d) GDPR.
The Provider shall implement measures to minimise inaccurate outputs about
data subjects and shall promptly correct inaccurate outputs upon
notification.
Sample clause (model privacy testing and bias monitoring):

The Provider shall conduct periodic privacy audits of AI models processing
Customer Data, including membership inference testing and training data
extraction testing, and shall make results available to the Customer upon
request. The Provider shall monitor AI systems for discriminatory outcomes
and shall implement bias mitigation measures as required.