Conducts LINDDUN privacy threat modeling across seven categories with DFD analysis, threat trees, mitigations, and STRIDE integration. For privacy risk assessment in software design.
LINDDUN is a systematic privacy threat modeling methodology developed by the DistriNet research group at KU Leuven. It provides a structured approach to identify and mitigate privacy threats in software systems. The acronym represents seven privacy threat categories that map to violations of privacy properties defined in ISO/IEC 29100.
## Linking

Definition: The ability to associate two or more data items or actions with an individual or group, beyond what is intended by the data subject.
Privacy Property Violated: Unlinkability
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Data minimization | Collect only necessary attributes | Review each data field against stated purpose |
| Pseudonymization | Replace identifiers with tokens | Use cryptographic pseudonymization with key separation |
| Mix networks | Obscure communication patterns | Route messages through anonymity networks |
| Aggregation | Present only group-level data | Enforce minimum group size (k>=5) for any query result |
| Session unlinkability | Prevent cross-session tracking | Rotate session tokens, avoid persistent identifiers |
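Cryptographic pseudonymization with key separation can be sketched as follows. This is a minimal illustration, not the skill's prescribed implementation; the domain keys and email address are placeholders, and a real deployment would keep keys in an HSM or secrets manager.

```python
import hashlib
import hmac

def pseudonymize(user_id: str, domain_key: bytes) -> str:
    """Derive a stable pseudonym from a user ID with a keyed hash.

    Key separation: each processing domain holds its own key, so
    pseudonyms from different domains cannot be linked to each other.
    """
    return hmac.new(domain_key, user_id.encode(), hashlib.sha256).hexdigest()

# Illustrative keys -- in practice these live in separate key stores.
analytics_key = b"key-held-by-analytics-domain"
billing_key = b"key-held-by-billing-domain"

p_analytics = pseudonymize("alice@example.com", analytics_key)
p_billing = pseudonymize("alice@example.com", billing_key)

# Stable within a domain, unlinkable across domains.
assert p_analytics == pseudonymize("alice@example.com", analytics_key)
assert p_analytics != p_billing
```

Because HMAC is keyed, an attacker who obtains pseudonyms from two domains cannot join them without both keys, unlike a plain unsalted hash of the identifier.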
## Identifying

Definition: The ability to identify a data subject from a set of data items, connecting them to a known individual.
Privacy Property Violated: Anonymity
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Anonymization | Remove direct identifiers | Strip names, emails, SSNs before processing |
| k-Anonymity | Generalize quasi-identifiers | Ensure each record shares attributes with k-1 others |
| Differential privacy | Add calibrated noise | Apply epsilon-differential privacy to query responses |
| Data masking | Obscure identifying fields | Replace with realistic synthetic values |
| Access control | Restrict who can see raw data | Implement need-to-know access with purpose verification |
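A k-anonymity check over quasi-identifiers can be sketched as below. The records and field names are illustrative; a real pipeline would generalize or suppress the violating groups rather than just report them.

```python
from collections import Counter

def violates_k_anonymity(records, quasi_identifiers, k=5):
    """Return quasi-identifier combinations shared by fewer than k records.

    Each such combination singles out a small group and risks
    re-identification when joined with external data.
    """
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return [combo for combo, count in combos.items() if count < k]

records = [
    {"age": "30-39", "zip": "021**", "gender": "F"},
    {"age": "30-39", "zip": "021**", "gender": "F"},
    {"age": "40-49", "zip": "945**", "gender": "M"},
]

# With k=2, the lone 40-49/945**/M record is flagged as re-identifiable.
violations = violates_k_anonymity(records, ["age", "zip", "gender"], k=2)
```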
## Non-repudiation

Definition: The inability of a data subject to deny having performed an action, even when such denial would be desirable for privacy.
Privacy Property Violated: Plausible deniability
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Deniable encryption | Enable plausible deniability | Use deniable encryption schemes for sensitive data |
| Group signatures | Hide individual identity in group | Implement group signature schemes for authenticated actions |
| Minimal logging | Log only what is legally required | Review and minimize audit trail scope |
| Aggregate reporting | Report actions at group level | Report activity in aggregate rather than per individual |
| Configurable receipts | Let users control acknowledgments | Allow opt-out of read receipts and delivery confirmations |
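Aggregate reporting with a minimum group size can be sketched as below; suppressing small groups preserves each individual's ability to plausibly deny an action. The event shape and threshold are illustrative assumptions.

```python
from collections import Counter

def aggregate_report(events, min_group_size=5):
    """Report per-action counts, suppressing any group small enough
    to single out an individual (preserving plausible deniability)."""
    counts = Counter(e["action"] for e in events)
    return {action: n for action, n in counts.items() if n >= min_group_size}

# Seven users viewed a document; one user deleted one.
events = [{"user": f"u{i}", "action": "viewed"} for i in range(7)]
events.append({"user": "u0", "action": "deleted"})

report = aggregate_report(events)
# The single 'deleted' event is suppressed; only 'viewed': 7 is reported.
```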
## Detecting

Definition: The ability to determine whether a data subject has been involved in an action or is present in a dataset, even without identifying them specifically.
Privacy Property Violated: Undetectability
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Traffic padding | Mask communication patterns | Generate dummy traffic to obscure real patterns |
| Steganography | Hide data within other data | Embed sensitive communications in innocuous content |
| Constant-time operations | Prevent timing analysis | Implement constant-time algorithms for sensitive operations |
| Oblivious RAM | Hide access patterns | Use ORAM protocols for privacy-critical data access |
| Differential privacy | Provide membership privacy | Apply differential privacy to prevent membership inference |
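The constant-time comparison mitigation can be shown with Python's standard library. A naive `==` on secrets can leak, via response timing, how many leading characters matched; `hmac.compare_digest` takes the same time regardless of where the strings differ.

```python
import hmac

def token_matches(supplied: str, stored: str) -> bool:
    """Compare secrets in constant time to prevent timing analysis."""
    return hmac.compare_digest(supplied.encode(), stored.encode())

assert token_matches("s3cret-token", "s3cret-token")
assert not token_matches("s3cret-tokem", "s3cret-token")
```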
## Data Disclosure

Definition: Unauthorized exposure of personal data to parties who should not have access.
Privacy Property Violated: Confidentiality
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Encryption | Protect data at rest and in transit | AES-256 at rest, TLS 1.3 in transit |
| Access control | Enforce least privilege | RBAC with regular access reviews |
| Input validation | Prevent injection attacks | Parameterized queries, input sanitization |
| API field filtering | Return only requested fields | Implement field-level access control in APIs |
| DLP | Detect and prevent data exfiltration | Deploy DLP at network egress and endpoints |
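The parameterized-query mitigation can be demonstrated with Python's built-in `sqlite3`; the table and data are illustrative. Because the driver binds the value as data, an injection payload cannot alter the query structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

# Safe lookup: the placeholder binds the value as data, not SQL.
safe = conn.execute("SELECT email FROM users WHERE id = ?", (1,)).fetchall()

# An injection attempt is treated as a literal value and matches nothing.
payload = "1 OR 1=1"
attacked = conn.execute(
    "SELECT email FROM users WHERE id = ?", (payload,)
).fetchall()
```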
## Unawareness

Definition: Data subjects being insufficiently aware of data processing activities, their rights, or the consequences of providing or withholding data.
Privacy Property Violated: Transparency, Intervenability
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Layered notices | Progressive disclosure | Short notice + full policy + just-in-time |
| Privacy dashboards | Centralized visibility | User-facing dashboard showing all data held |
| Consent management | Granular, informed consent | Purpose-specific consent with clear descriptions |
| Explainable AI | Algorithmic transparency | Provide meaningful explanations of automated decisions |
| Right facilitation | Easy rights exercise | Self-service portal for access, correction, deletion |
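A granular, purpose-specific consent check can be sketched as below. The consent store shape, user ID, and purpose names are hypothetical; the point is that processing defaults to denied unless an explicit grant for that exact purpose exists.

```python
from datetime import datetime, timezone

# Hypothetical consent store: purpose-specific grants per data subject.
consents = {
    "user-42": {
        "analytics": {"granted": True,
                      "at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
        "marketing": {"granted": False,
                      "at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    }
}

def may_process(user_id: str, purpose: str) -> bool:
    """Allow processing only with an explicit, purpose-specific grant.

    A missing record means no consent -- never assume it by default.
    """
    record = consents.get(user_id, {}).get(purpose)
    return bool(record and record["granted"])

assert may_process("user-42", "analytics")
assert not may_process("user-42", "marketing")
assert not may_process("user-42", "profiling")  # no record -> no consent
```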
## Non-compliance

Definition: Processing personal data in ways that violate applicable laws, regulations, standards, or organizational policies.
Privacy Property Violated: Policy and consent compliance
Threat Scenarios:
Mitigation Strategies:
| Strategy | Technique | Implementation |
|---|---|---|
| Compliance mapping | Map processing to legal bases | Document legal basis per processing activity per jurisdiction |
| Automated enforcement | Technical compliance controls | Automated retention enforcement, consent verification |
| DPIA process | Impact assessment for high-risk processing | Mandatory DPIA before deploying new high-risk processing |
| Regulatory monitoring | Track legal developments | Subscribe to regulatory updates, conduct periodic gap analysis |
| Audit program | Verify ongoing compliance | Annual compliance audits with corrective action tracking |
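Automated retention enforcement can be sketched as a periodic sweep that selects records past their category-specific retention period. The categories, periods, and record shape are illustrative assumptions; real periods come from the documented retention schedule.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule per data category.
RETENTION = {
    "access_log": timedelta(days=90),
    "support_ticket": timedelta(days=365),
}

def expired(records, now=None):
    """Select records whose retention period has elapsed, for deletion."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if now - r["created"] > RETENTION[r["category"]]]

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "category": "access_log", "created": now - timedelta(days=120)},
    {"id": 2, "category": "access_log", "created": now - timedelta(days=10)},
]

# Only record 1 (120 days old, 90-day retention) is due for deletion.
due = expired(records, now)
```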
## DFD Analysis

Example DFD for a web application:

```
[Data Subject] --(personal data)--> [Web Application]
      ^                                     |
      |                                     v
   [Notice]                       [Application Server]
                                            |
                          +-----------------+-----------------+
                          |                 |                 |
                          v                 v                 v
                  [User Database]    [Analytics DB]    [Third-Party API]
                    (encrypted)     (pseudonymized)     (data sharing)
```
Annotate each DFD element type with the LINDDUN threat categories that apply to it:

| DFD Element | L | I | N | D(etect) | D(isclose) | U | N(on-comply) |
|---|---|---|---|---|---|---|---|
| External entities | X | X | | | | X | |
| Data flows | X | X | X | X | X | | X |
| Data stores | X | X | X | X | X | | X |
| Processes | X | X | X | X | X | | X |
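The element-to-category mapping can be expanded mechanically into a per-element threat checklist. This sketch encodes the classic LINDDUN mapping (Unawareness applies only to external entities); the element names and dict shape are illustrative.

```python
# Classic LINDDUN mapping of threat categories to DFD element types.
APPLICABLE = {
    "entity":     {"Linking", "Identifying", "Unawareness"},
    "data_flow":  {"Linking", "Identifying", "Non-repudiation", "Detecting",
                   "Data disclosure", "Non-compliance"},
    "data_store": {"Linking", "Identifying", "Non-repudiation", "Detecting",
                   "Data disclosure", "Non-compliance"},
    "process":    {"Linking", "Identifying", "Non-repudiation", "Detecting",
                   "Data disclosure", "Non-compliance"},
}

def threat_checklist(elements):
    """Expand a DFD element list into (element, category) pairs to analyze."""
    return [(e["name"], cat)
            for e in elements
            for cat in sorted(APPLICABLE[e["type"]])]

dfd = [{"name": "User Database", "type": "data_store"},
       {"name": "Data Subject", "type": "entity"}]

# 6 categories for the data store + 3 for the entity = 9 items to review.
checklist = threat_checklist(dfd)
```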
## Threat Trees

For each applicable threat category per DFD element, construct a threat tree:

```
Identifying threat to User Database
├── Direct identifier exposure
│   ├── SQL injection reveals raw PII
│   ├── Backup media contains unencrypted PII
│   └── Admin access to production database
├── Quasi-identifier attack
│   ├── Combination of age + ZIP + gender
│   └── Temporal correlation of records
└── Inference attack
    ├── Aggregate query with small group size
    └── Differential attack across query results
```
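For tracking, a threat tree can be held as plain data and flattened into root-to-leaf attack paths, one per concrete vector to assess. This is a minimal sketch using the example tree above; nested dicts with leaf lists are an assumed representation, not part of the methodology itself.

```python
# The example threat tree as nested dicts; leaves are concrete vectors.
tree = {
    "Identifying threat to User Database": {
        "Direct identifier exposure": [
            "SQL injection reveals raw PII",
            "Backup media contains unencrypted PII",
            "Admin access to production database",
        ],
        "Quasi-identifier attack": [
            "Combination of age + ZIP + gender",
            "Temporal correlation of records",
        ],
        "Inference attack": [
            "Aggregate query with small group size",
            "Differential attack across query results",
        ],
    }
}

def attack_paths(node, prefix=()):
    """Flatten the tree into root->leaf paths, one per attack vector."""
    if isinstance(node, list):
        return [prefix + (leaf,) for leaf in node]
    return [p for name, sub in node.items()
            for p in attack_paths(sub, prefix + (name,))]

paths = attack_paths(tree)  # 7 leaf vectors, each with its full path
```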
## Risk Prioritization

Use a risk matrix to prioritize identified threats:
| Likelihood / Impact | Negligible | Limited | Significant | Maximum |
|---|---|---|---|---|
| Very Likely | Medium | High | Critical | Critical |
| Likely | Low | Medium | High | Critical |
| Possible | Low | Medium | Medium | High |
| Unlikely | Low | Low | Medium | Medium |
| Rare | Low | Low | Low | Medium |
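The matrix above can be encoded directly as a lookup so risk ratings are assigned consistently across assessors. This is a straightforward transcription of the table, with the likelihood/impact labels as assumed string keys.

```python
# Impact columns left to right, matching the risk matrix above.
IMPACT = ["Negligible", "Limited", "Significant", "Maximum"]

# One row per likelihood level, transcribed from the matrix.
MATRIX = {
    "Very Likely": ["Medium", "High", "Critical", "Critical"],
    "Likely":      ["Low", "Medium", "High", "Critical"],
    "Possible":    ["Low", "Medium", "Medium", "High"],
    "Unlikely":    ["Low", "Low", "Medium", "Medium"],
    "Rare":        ["Low", "Low", "Low", "Medium"],
}

def risk_rating(likelihood: str, impact: str) -> str:
    """Look up the rating for a likelihood/impact pair."""
    return MATRIX[likelihood][IMPACT.index(impact)]

assert risk_rating("Very Likely", "Significant") == "Critical"
assert risk_rating("Rare", "Maximum") == "Medium"
```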
## STRIDE Integration

LINDDUN complements STRIDE security threat modeling; several categories overlap:

| LINDDUN Category | Related STRIDE Category | Overlap Area |
|---|---|---|
| Data Disclosure | Information Disclosure | Both address unauthorized data exposure |
| Non-compliance | Tampering | Integrity of consent records |
| Detecting | Information Disclosure | Metadata leakage |
| Identifying | Information Disclosure | PII exposure |
| Non-repudiation | Repudiation | Opposing perspectives on the same property |