Guides HIPAA and EU AI Act compliance for healthcare AI in clinical decision support systems, covering PHI handling, model transparency, patient rights, FDA coordination, and bias monitoring.
Install:

```shell
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin healthcare-privacy-skills
```

This skill uses the workspace's default tool permissions.
Artificial intelligence in healthcare introduces privacy challenges that sit at the intersection of established health privacy law (HIPAA, HITECH) and emerging AI regulation (EU AI Act, FDA regulatory framework, proposed state AI laws). Clinical decision support (CDS) systems, diagnostic AI, and predictive analytics operate on protected health information, creating obligations under HIPAA while simultaneously falling within the scope of AI-specific regulation when deployed in high-risk clinical contexts. This skill addresses the complete privacy lifecycle of healthcare AI — from training data acquisition through model deployment and patient interaction — ensuring compliance with both health privacy and AI governance frameworks.
| Framework | Applicability to Healthcare AI | Key Requirements |
|---|---|---|
| HIPAA Privacy Rule (45 CFR Part 164, Subpart E) | AI systems processing PHI at covered entities or BAs | Authorization or TPO exception for PHI use; minimum necessary; individual rights |
| HIPAA Security Rule (45 CFR §164.312) | ePHI used in AI training, inference, and storage | Access controls, audit trails, encryption, integrity controls |
| EU AI Act (Regulation 2024/1689) | AI systems deployed in EU healthcare or processing EU patient data | High-risk classification for medical devices; conformity assessment; transparency |
| FDA Regulatory Framework | AI/ML-based Software as a Medical Device (SaMD) | 510(k), De Novo, or PMA pathway; GMLP (Good Machine Learning Practice); total product lifecycle approach |
| FTC Act §5 | AI making health-related decisions affecting consumers | Unfair or deceptive practices; Health Breach Notification Rule for non-HIPAA entities |
| State AI Laws | Emerging state legislation (Colorado AI Act SB24-205, Illinois AI Video Interview Act) | Algorithmic impact assessments; notice and opt-out for automated decisions |
Under Article 6(1) and Annex I of the EU AI Act, AI systems that are medical devices, or safety components of medical devices, subject to third-party conformity assessment under the MDR or IVDR are classified as high-risk:
| Category | AI Act Reference | Examples |
|---|---|---|
| Medical devices (AI-based) | Art. 6(1); Annex I, Section A (Regulation (EU) 2017/745, MDR) | AI diagnostic imaging (radiology, pathology, dermatology), AI-assisted surgery planning |
| In vitro diagnostic medical devices (AI-based) | Art. 6(1); Annex I, Section A (Regulation (EU) 2017/746, IVDR) | AI-based genetic analysis, AI companion diagnostics |
| Safety components of medical devices | Art. 6(1) | AI monitoring in ICU, AI-driven infusion pump dosing |
High-risk AI systems must comply with AI Act requirements including risk management (Art. 9), data governance (Art. 10), transparency (Art. 13), human oversight (Art. 14), accuracy/robustness (Art. 15), and conformity assessment (Art. 43).
Using PHI to train or run AI models requires a HIPAA-recognized lawful basis:
| Lawful Basis | HIPAA Provision | Applicability | Conditions |
|---|---|---|---|
| Treatment | §164.506(c)(1) | AI models trained to support individual patient treatment decisions | Model must directly serve treatment function; minimum necessary applies |
| Healthcare Operations | §164.506(c)(4) | Quality assessment, population health analytics, clinical decision support development | Must qualify as healthcare operations under §164.501 definition |
| Research | §164.512(i) | Academic or institutional research developing AI models | IRB/Privacy Board approval; authorization or waiver of authorization; data use agreement for limited datasets |
| De-identified data | §164.514(a) | Training on data that meets safe harbor or expert determination de-identification | No HIPAA restrictions once properly de-identified; re-identification risk from AI model memorization must be assessed |
| Authorization | §164.508 | Individual authorization for specific AI training use | Valid authorization meeting §164.508(c) requirements; may be impractical at scale |
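As a concrete illustration of the de-identified-data basis, a redaction pass over free-text notes might look like the sketch below. It covers only a small subset of the 18 Safe Harbor identifier classes; the patterns and replacement tags are illustrative, not a compliant implementation, and free-text names and geographic identifiers would still need handling.

```python
import re

# Illustrative subset of HIPAA Safe Harbor identifier classes
# (45 CFR §164.514(b)(2)). A real pipeline must address all 18 classes.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Safe Harbor permits retaining only the year of dates tied to a person.
    "date":  re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b"),
}

def redact(note: str) -> str:
    """Redact a clinical note before it enters an AI training corpus."""
    note = PATTERNS["ssn"].sub("[SSN]", note)
    note = PATTERNS["phone"].sub("[PHONE]", note)
    note = PATTERNS["email"].sub("[EMAIL]", note)
    # Generalize full dates to the year alone.
    note = PATTERNS["date"].sub(lambda m: m.group(3), note)
    return note
```

Even after such redaction, re-identification risk from model memorization (discussed below) must still be assessed before the data is treated as de-identified.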
Asclepius Health Network has established an AI Data Governance Committee that reviews all AI training data requests through a standardized Training Data Request Workflow.
Even lawfully trained models can leak patient information; the principal privacy attack surfaces are:
| Risk | Description | Mitigation |
|---|---|---|
| Training data memorization | Large models (transformers, LLMs) can memorize and reproduce verbatim training data including PHI | Differential privacy (DP-SGD), training data deduplication, memorization testing pre-deployment |
| Membership inference | Adversary determines whether a specific patient's data was in the training set | Output perturbation, model regularization, membership inference attack testing |
| Model inversion | Adversary reconstructs patient features from model outputs | Limit output granularity, add noise to confidence scores, restrict API access |
| Attribute inference | Model reveals sensitive attributes (HIV status, substance use) not provided as input | Feature correlation analysis, fairness-aware training, output filtering |
| Training data leakage via model explanation | SHAP/LIME explanations may reveal individual patient contributions | Aggregate explanations; use synthetic examples for patient-facing explanations |
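The DP-SGD mitigation named in the table bounds each record's influence on the model by clipping per-example gradients and adding calibrated Gaussian noise before averaging. A minimal numpy sketch of that aggregation step, with illustrative parameter values:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (after Abadi et al., 2016), in numpy.

    per_example_grads: shape (batch, dim), one gradient per patient record.
    Clipping caps any single record's influence; Gaussian noise masks the
    remainder, limiting memorization of individual PHI.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale each example's gradient down to norm at most clip_norm.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

In practice a library such as Opacus or TensorFlow Privacy would handle per-example gradient computation and privacy accounting; this sketch shows only the core clip-and-noise mechanism.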
While HIPAA does not explicitly address AI transparency, several of its provisions create de facto transparency obligations.
For high-risk healthcare AI systems under the AI Act:
| Requirement | AI Act Article | Implementation |
|---|---|---|
| Technical documentation | Art. 11 | Complete description of AI system including training methodology, data governance, performance metrics, known limitations |
| Record-keeping | Art. 12 | Automatic logging of AI system operations enabling traceability |
| Transparency to users | Art. 13 | Instructions for use enabling healthcare providers to interpret outputs and exercise oversight; disclosure of performance metrics, known biases, and foreseeable misuse |
| Human oversight | Art. 14 | AI systems designed to be effectively overseen by natural persons; override capability; ability to disregard AI output |
| Accuracy and robustness | Art. 15 | Declared accuracy levels; resilience against errors, faults, and adversarial attacks |
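The Art. 12 record-keeping requirement can be sketched as an append-only JSON-lines audit log capturing who ran which model version, on what input reference, with what output. The field names below are assumptions for illustration, not an AI Act schema:

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def log_inference(log_path: Path, model_id: str, model_version: str,
                  input_ref: str, output_summary: dict, operator_id: str) -> str:
    """Append one audit record per AI inference (sketch of Art. 12 logging)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_ref": input_ref,          # pointer into the EHR, never raw PHI
        "output_summary": output_summary,
        "operator_id": operator_id,      # supports traceability and oversight
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]
```

Keeping an input *reference* rather than the input itself avoids duplicating ePHI into the log while preserving traceability; the same log can also feed the HIPAA accounting-of-disclosures process.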
For each deployed AI system, Asclepius maintains two documentation artifacts:
- A Model Card, following the Mitchell et al. (2019) framework adapted for healthcare.
- A Patient-Facing Disclosure, written in plain language.
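A healthcare model card of the kind described above might be represented as a structured record. Every value below is a hypothetical placeholder, not a real Asclepius system:

```python
# Minimal healthcare model card sketch (after Mitchell et al., 2019).
# All names, metrics, and limitations are hypothetical placeholders.
MODEL_CARD = {
    "model_details": {
        "name": "sepsis-risk-v3",        # hypothetical model name
        "version": "3.2.0",
        "intended_use": "Early sepsis risk flagging for ICU clinicians",
        "out_of_scope": ["pediatric patients", "outpatient settings"],
    },
    "training_data": {
        "source": "de-identified EHR, 2018-2023",
        "lawful_basis": "45 CFR 164.514(a) de-identified data",
    },
    "performance": {
        # Reporting metrics per demographic subgroup surfaces bias.
        "auroc_overall": 0.87,
        "auroc_by_subgroup": {"female": 0.86, "male": 0.88},
    },
    "limitations": ["lower sensitivity in patients with rare comorbidities"],
    "human_oversight": "Clinician must review and may override every alert",
}
```

Publishing subgroup performance alongside overall metrics directly supports both the AI Act's transparency duty (Art. 13) and GMLP bias-monitoring expectations.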
HIPAA individual rights extend to AI-generated content held in the designated record set:
| Right | Application to Healthcare AI | Asclepius Implementation |
|---|---|---|
| Right of Access (§164.524) | Patient may access AI-generated risk scores, predictions, and recommendations in their medical record | AI outputs stored in EHR are accessible through the patient portal; explanations provided in plain language |
| Right to Amend (§164.526) | Patient may request amendment of AI-generated entries if believed to be inaccurate | AI-generated entries clearly labeled; amendment requests reviewed by treating physician and AI governance committee |
| Right to Accounting of Disclosures (§164.528) | AI system disclosures of PHI (e.g., to a cloud-based AI service) must be tracked | All API calls to AI inference services logged; BA disclosures tracked in disclosure accounting system |
| Right to Restrict (§164.522) | Patient may request restrictions on AI processing | Asclepius honors requests to exclude specific data from AI-assisted analytics where clinically feasible |
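Honoring §164.522 restriction requests in analytics pipelines can be as simple as a cohort filter applied before any AI processing. The `restrict_ai` flag name is an assumption about how such requests might be recorded in the EHR:

```python
def ai_eligible_cohort(records):
    """Filter patient records before AI-assisted analytics.

    Sketch of honoring a §164.522 restriction request: records carrying a
    restriction flag for AI processing are excluded from the cohort.
    The 'restrict_ai' field name is a hypothetical convention.
    """
    return [r for r in records if not r.get("restrict_ai", False)]
```

Applying the filter at cohort-assembly time, rather than inside individual models, gives a single enforcement point that is easier to audit.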
HIPAA does not include a direct analog to GDPR Article 22 (the right not to be subject to solely automated decision-making); however, the individual rights above, emerging state AI laws, and FTC enforcement supply partial equivalents.
The FDA regulates AI/ML-based clinical decision support as Software as a Medical Device (SaMD) when it meets the device definition and is not excluded under the 21st Century Cures Act §3060(a) exemption for certain CDS:
CDS Not Regulated as Device (Cures Act Exemption): software qualifies only if it meets all four criteria of FD&C Act §520(o)(1)(E):
1. Is not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system;
2. Is intended for displaying, analyzing, or printing medical information about a patient or other medical information;
3. Is intended for supporting or providing recommendations to a healthcare professional about prevention, diagnosis, or treatment of a disease or condition; and
4. Is intended for enabling the healthcare professional to independently review the basis for the recommendations, such that the professional does not rely primarily on them.

All four criteria must be met. AI systems that process imaging (radiology AI, pathology AI) or make autonomous decisions do NOT qualify for the exemption.
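The all-four-criteria rule lends itself to a simple conjunction check. The criterion labels below are shorthand for the statutory language, not official terms:

```python
# Sketch of the four-part exemption test from FD&C Act §520(o)(1)(E).
# Keys are shorthand labels for illustration only.
CURES_ACT_CRITERIA = (
    "no_image_or_signal_analysis",    # does not process images/IVD signals
    "displays_medical_information",
    "recommends_to_professional",
    "basis_independently_reviewable",
)

def cds_device_exempt(assessment: dict) -> bool:
    """Return True only if ALL four exemption criteria are met."""
    return all(assessment.get(c, False) for c in CURES_ACT_CRITERIA)
```

A radiology AI, for example, fails the first criterion (it analyzes medical images) and so remains FDA-regulated regardless of how it satisfies the other three.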
Each FDA premarket pathway carries distinct privacy considerations:
| FDA Pathway | Privacy Considerations |
|---|---|
| 510(k) premarket notification | Training data representativeness documentation; algorithmic bias assessment; cybersecurity controls for ePHI |
| De Novo classification | Novel AI technology risk-benefit analysis including privacy risks; post-market surveillance plan |
| PMA (Premarket Approval) | Full clinical evidence including training data provenance; long-term monitoring of AI performance across demographics |
| Predetermined change control plan | Documentation of how model updates will maintain privacy protections; re-validation requirements after model retraining |
FDA, Health Canada, and MHRA jointly published 10 GMLP (Good Machine Learning Practice) principles in October 2021, several of which carry privacy-relevant requirements. Bias and performance monitoring spans two phases: a pre-deployment assessment and ongoing post-deployment monitoring.
Asclepius assigns accountability for AI privacy across the following roles:
| Role | Responsibilities |
|---|---|
| Chief Privacy Officer | Overall accountability for PHI use in AI; approves AI training data requests; reports to Board |
| CISO | Security controls for AI infrastructure; penetration testing of AI systems; incident response |
| Chief Medical Informatics Officer | Clinical appropriateness of AI systems; human oversight protocols; clinician training |
| AI Ethics Committee | Reviews AI use cases for ethical implications including privacy; includes patient advocate representation |
| AI Data Governance Committee | Reviews training data requests; ensures de-identification adequacy; manages data use agreements |
| Model Risk Management | Validates AI model performance; tests for memorization and bias; manages model inventory |