Provides a pre-deployment privacy compliance checklist for AI/ML systems, verifying DPIA completion, lawful basis, transparency notices, human oversight, bias testing, and monitoring setup. Use it as a go-live gate.
```shell
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin ai-privacy-governance-skills
```
Deploying an AI system that processes personal data requires verification of privacy compliance across multiple dimensions before the system goes live. This checklist serves as a compliance gate in the Cerebrum AI Labs ML deployment pipeline. No AI system may be deployed to production until all mandatory items are verified and signed off by the Data Protection Officer (DPO). The checklist is structured around GDPR requirements, the EU AI Act obligations (for high-risk systems), and internal governance standards.

## Legal Basis and Risk Assessment

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Lawful basis documented | Art. 6(1) basis identified and recorded for all personal data processing | Required | LIA or consent records |
| Special categories assessed | Art. 9 data identified; explicit consent or Art. 9(2) exception documented | Required | Data classification report |
| DPIA completed | Art. 35 DPIA completed for high-risk processing (profiling, systematic monitoring, large-scale special categories) | Required if applicable | DPIA document signed by DPO |
| DPIA risks mitigated | All high/critical risks from DPIA have documented mitigations | Required | Risk treatment plan |
| Prior consultation | Art. 36 consultation with supervisory authority if residual risk remains high | Required if applicable | Consultation record |
| Legitimate interest assessment | If relying on Art. 6(1)(f), LIA balancing test completed | Required if LI basis | LIA document |

## Transparency

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Privacy notice updated | Art. 13-14 information includes AI processing details | Required | Updated privacy notice |
| Logic described | "Meaningful information about the logic involved" documented for data subjects | Required for automated decisions | Explanation document |
| Significance disclosed | Envisaged consequences of AI processing disclosed | Required for automated decisions | Privacy notice section |
| Profiling disclosed | If system profiles individuals, this is disclosed in privacy notice | Required if profiling | Privacy notice section |
| AI Act transparency | Art. 50 transparency obligations met (if applicable): users informed that they are interacting with an AI system | Required if AI Act applies | User interface disclosure |

## Data Subject Rights

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Access process defined | Process for responding to access requests for AI data (inputs, outputs, profiles) | Required | Documented SOP |
| Explanation mechanism | Individual explanations can be generated on request | Required for Art. 22 | Technical capability verified |
| Human intervention available | Art. 22(3) human review process established | Required for solely automated decisions | Process document + trained staff |
| Contestation channel | Data subjects can contest AI decisions and have them reviewed | Required for Art. 22 | Appeal process document |
| Rectification process | Process for correcting AI input data and regenerating outputs | Required | Documented SOP |
| Erasure process | Process for deleting data from training sets, inference logs, embeddings | Required | Documented SOP |
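
The erasure item above is usually the hardest to operationalise. As a minimal sketch, assuming JSON-lines inference logs with a hypothetical `subject_id` field per record (adapt to your actual logging schema), subject-level erasure might look like:

```python
import json
from pathlib import Path

def erase_subject(subject_id: str, log_dir: Path) -> int:
    """Remove all inference-log entries for one data subject.

    Assumes JSON-lines logs with a 'subject_id' field per record
    (hypothetical schema). Returns the number of records removed.
    """
    removed = 0
    for log_file in log_dir.glob("*.jsonl"):
        kept = []
        for line in log_file.read_text().splitlines():
            record = json.loads(line)
            if record.get("subject_id") == subject_id:
                removed += 1
            else:
                kept.append(line)
        # Rewrite the log without the erased subject's records
        log_file.write_text("\n".join(kept) + ("\n" if kept else ""))
    return removed
```

Erasure from training sets and embedding stores needs equivalent tooling per storage system; the documented SOP should name each one.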

## Training Data and Bias

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Training data documented | Data sources, size, collection method, preprocessing documented | Required | Data card / dataset documentation |
| Bias testing completed | Model tested for bias across protected attributes (gender, race, age, disability) | Required | Bias test report |
| Fairness metrics acceptable | Disparate impact ratio >0.8 (four-fifths rule) or equivalent metric within acceptable range | Required | Fairness metrics report |
| Data quality verified | Training data completeness, accuracy, representativeness verified | Required | Data quality report |
| Art. 9 data removed or justified | Special category data either removed or lawful basis documented | Required | Data classification report |
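
The four-fifths rule in the fairness row can be computed directly. A minimal sketch for binary outcomes and one protected attribute (real bias testing would cover multiple attributes and additional metrics):

```python
def disparate_impact_ratio(selected, group):
    """Four-fifths rule check: ratio of the lowest group selection
    rate to the highest.

    selected: list of 0/1 outcomes; group: parallel list of group
    labels. A ratio below 0.8 flags potential disparate impact.
    """
    rates = {}
    for g in set(group):
        members = [s for s, gr in zip(selected, group) if gr == g]
        rates[g] = sum(members) / len(members)
    return min(rates.values()) / max(rates.values())
```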

## Security

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Data encryption at rest | Training data and model weights encrypted (AES-256 or equivalent) | Required | Security configuration |
| Data encryption in transit | All API endpoints use TLS 1.2 or higher | Required | TLS configuration report |
| Access controls | Role-based access to model, training data, and inference logs | Required | IAM policy |
| Audit logging | All model invocations logged with timestamp, input hash, output, user | Required | Logging configuration |
| Adversarial robustness | Model tested against common adversarial attacks relevant to its domain | Recommended | Security test report |
| Model versioning | Model versioned in registry with rollback capability | Required | MLflow / model registry |
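
The audit-logging row requires a timestamp, input hash, output, and user per invocation. A sketch of one such record builder, hashing the input with SHA-256 so the log itself carries less raw personal data (an illustrative design choice, not a mandated one):

```python
import hashlib
import time

def log_invocation(user: str, model_version: str,
                   input_text: str, output_text: str) -> dict:
    """Build one audit-log record for a model invocation.

    Stores a SHA-256 hash of the input rather than the input itself;
    the raw input, if retained at all, lives in a separate store with
    its own retention schedule.
    """
    return {
        "timestamp": time.time(),
        "user": user,
        "model_version": model_version,
        "input_hash": hashlib.sha256(input_text.encode()).hexdigest(),
        "output": output_text,
    }
```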

## Monitoring and Operations

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Performance monitoring | Dashboard tracking accuracy, latency, error rates in production | Required | Monitoring setup |
| Drift detection | Data drift and concept drift detection implemented | Required | Drift monitoring configuration |
| Bias monitoring | Post-deployment bias metrics tracked continuously | Required | Fairness monitoring dashboard |
| Incident response | Process for handling AI-related privacy incidents (e.g., discriminatory output, data leak) | Required | Incident response plan |
| Retraining schedule | Defined schedule for model retraining with fresh data | Required | Retraining plan |
| Retention enforcement | Automated deletion of inference logs and training data per retention schedule | Required | Retention policy + automation |
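
Data drift detection is commonly implemented with the Population Stability Index (PSI). A self-contained sketch for one numeric feature; the binning scheme and thresholds here are assumptions that teams tune:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') and a production ('actual')
    sample of one numeric feature.

    A common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (thresholds vary by team).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```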

## EU AI Act Obligations (High-Risk Systems)

| Check | Requirement | Status | Evidence |
|---|---|---|---|
| Risk classification | System classified per Annex III | Required | Classification document |
| Technical documentation | Annex IV documentation complete | Required | Tech doc package |
| Risk management system | Art. 9 continuous risk management implemented | Required | Risk register + process |
| Conformity assessment | Internal or third-party conformity assessment completed | Required | Assessment report |
| EU Declaration of Conformity | Art. 47 declaration prepared | Required | Signed declaration |
| EU database registration | Art. 49 registration completed | Required | Registration confirmation |

## Sign-off

| Role | Name | Approval | Date |
|---|---|---|---|
| ML Engineering Lead | | [ ] Approved / [ ] Blocked | |
| Data Protection Officer | | [ ] Approved / [ ] Blocked | |
| Information Security Officer | | [ ] Approved / [ ] Blocked | |
| Product Owner | | [ ] Approved / [ ] Blocked | |
| Legal Counsel (high-risk only) | | [ ] Approved / [ ] Blocked | |