Guides LINDDUN privacy threat modeling: DFD analysis for 7 threats (Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness, Non-compliance), threat trees, mitigations via design patterns.
Install via:

```
npx claudepluginhub mukul975/privacy-data-protection-skills --plugin privacy-by-design-skills
```

This skill uses the workspace's default tool permissions.
LINDDUN is a systematic privacy threat modeling methodology developed by the DistriNet research group at KU Leuven. It provides a structured approach to identifying privacy threats in software systems through Data Flow Diagram (DFD) analysis and threat tree catalogs. LINDDUN stands for seven privacy threat categories: Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness, and Non-compliance.
LINDDUN complements security threat modeling (STRIDE) by focusing specifically on privacy threats. While STRIDE addresses confidentiality, integrity, and availability, LINDDUN addresses unlinkability, anonymity, plausible deniability, undetectability, confidentiality of data content, content awareness, and policy compliance.
## Linking

Definition: An adversary can sufficiently distinguish whether two items of interest (IOI) are related or not within a particular context.
DFD element applicability: Data flows, data stores, processes.
Examples: correlating a user's actions across services via a shared advertising identifier; linking anonymous forum posts through writing style or posting timestamps.
Threat trees (selected):
Mitigations: HIDE (Dissociate, Mix), SEPARATE (Isolate), MINIMIZE (Strip), differential privacy, pseudonymization with context separation.
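The last mitigation can be sketched in code. This is a minimal, illustrative sketch of pseudonymization with context separation; the context names and keys below are hypothetical, and in practice the keys would live in a KMS:

```python
import hmac
import hashlib

# Hypothetical per-context secret keys. Context separation means no key
# material is shared between contexts, so pseudonyms cannot be correlated.
CONTEXT_KEYS = {
    "billing": b"billing-context-key",
    "analytics": b"analytics-context-key",
}

def pseudonymize(user_id: str, context: str) -> str:
    """Derive a stable, context-specific pseudonym via HMAC-SHA256."""
    key = CONTEXT_KEYS[context]
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()

# Same user, same context: stable pseudonym, so records stay usable.
assert pseudonymize("alice", "billing") == pseudonymize("alice", "billing")
# Same user, different contexts: pseudonyms cannot be linked without the keys.
assert pseudonymize("alice", "billing") != pseudonymize("alice", "analytics")
```

Because the keyed hash is deterministic per context but independent across contexts, an adversary holding both datasets cannot link records about the same person.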
## Identifying

Definition: An adversary can sufficiently identify a data subject within a set of data subjects.
DFD element applicability: Data flows, data stores, external entities.
Examples: re-identifying a person in a "pseudonymized" dataset by combining ZIP code, birth date, and gender; identifying a visitor through browser fingerprinting.
Threat trees (selected):
Mitigations: HIDE (Encrypt, Obfuscate), MINIMIZE (Strip), ABSTRACT (Group, Summarize), k-anonymity, differential privacy.
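As an illustration of the k-anonymity mitigation, the sketch below (with hypothetical, already-generalized records) checks whether every quasi-identifier combination appears at least k times:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check that every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical records with generalized quasi-identifiers.
records = [
    {"zip": "302*", "age": "20-29", "diagnosis": "flu"},
    {"zip": "302*", "age": "20-29", "diagnosis": "cold"},
    {"zip": "305*", "age": "30-39", "diagnosis": "flu"},
]
# The (305*, 30-39) group has size 1, so 2-anonymity fails.
assert is_k_anonymous(records, ["zip", "age"], 2) is False
```

A real pipeline would iterate generalization or suppression until the check passes, balancing anonymity against data utility.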
## Non-repudiation

Definition: A data subject is unable to deny having performed an action or being associated with specific data.
DFD element applicability: Data flows, processes, data stores.
Examples: a digitally signed message later used to prove a whistleblower authored it; detailed access logs that prove a user read a specific document.
Threat trees (selected):
Mitigations: HIDE (Mix), group signatures, ring signatures, deniable encryption.
## Detecting

Definition: An adversary can sufficiently distinguish whether an item of interest (IOI) exists or not.
DFD element applicability: Data flows, processes.
Examples: inferring that an account exists because the login error message differs for unknown email addresses; observing encrypted traffic patterns that reveal a user contacted a sensitive support service.
Threat trees (selected):
Mitigations: HIDE (Mix, Obfuscate), cover traffic, onion routing, steganography, ORAM.
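One simple obfuscation technique in this family is length padding: pad every message up to a fixed bucket size so an observer cannot distinguish messages by their observable length. The sketch below is a minimal illustration (the bucket size is an arbitrary assumption); real deployments combine this with cover traffic or constant-rate sending:

```python
BUCKET = 256  # hypothetical fixed bucket size in bytes

def pad(message: bytes) -> bytes:
    """Frame the message with a 4-byte length prefix and pad to the next
    multiple of BUCKET, hiding the true length within a bucket."""
    framed = len(message).to_bytes(4, "big") + message
    padded_len = -(-len(framed) // BUCKET) * BUCKET  # ceiling to bucket size
    return framed + b"\x00" * (padded_len - len(framed))

def unpad(blob: bytes) -> bytes:
    """Recover the original message from the length prefix."""
    n = int.from_bytes(blob[:4], "big")
    return blob[4:4 + n]

# Short and long messages in the same bucket are indistinguishable by size.
assert len(pad(b"hi")) == len(pad(b"x" * 200))
assert unpad(pad(b"hello")) == b"hello"
```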
## Data Disclosure

Definition: Personal data is disclosed to or accessed by unauthorized parties.
DFD element applicability: Data flows, data stores, processes.
Examples: personal data transmitted over an unencrypted data flow; an over-privileged internal service reading the full user profile store.
Threat trees (selected):
Mitigations: HIDE (Encrypt), access control, TLS 1.3, field-level encryption, secure deletion, DLP systems.
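As a sketch of the access-control mitigation at field granularity, the snippet below redacts any field a role is not authorized to see before disclosure. The roles and allow-lists are hypothetical, and this complements (rather than replaces) encryption in transit and at rest:

```python
# Hypothetical role-based field allow-list for disclosure control.
ALLOWED_FIELDS = {
    "support": {"name", "email"},
    "analytics": {"country"},
}

def redact(record: dict, role: str) -> dict:
    """Return only the fields the role is authorized to see;
    unknown roles see nothing (deny by default)."""
    allowed = ALLOWED_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Alice", "email": "a@example.com",
          "ssn": "123-45-6789", "country": "BE"}
assert redact(record, "analytics") == {"country": "BE"}
assert "ssn" not in redact(record, "support")
```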
## Unawareness

Definition: Data subjects are unaware of the collection, processing, or sharing of their personal data.
DFD element applicability: External entities (data subjects), processes.
Examples: location data collected in the background without notice; data shared with third-party analytics providers without informing the user.
Threat trees (selected):
Mitigations: INFORM (Supply, Notify, Explain), layered privacy notices, just-in-time notifications, consent management.
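The consent management mitigation can be sketched as a per-purpose consent ledger that processing code must consult. This is a minimal in-memory illustration (the subjects and purposes are hypothetical); a real system would persist events for auditability:

```python
from datetime import datetime, timezone

# Hypothetical in-memory ledger: (subject, purpose) -> latest consent event.
consent_ledger = {}

def record_consent(subject: str, purpose: str, granted: bool) -> None:
    """Record the data subject's latest consent decision for a purpose."""
    consent_ledger[(subject, purpose)] = {
        "granted": granted,
        "at": datetime.now(timezone.utc),
    }

def has_consent(subject: str, purpose: str) -> bool:
    """Deny by default: no recorded event means no consent."""
    event = consent_ledger.get((subject, purpose))
    return bool(event and event["granted"])

record_consent("alice", "marketing", True)
record_consent("alice", "profiling", False)
assert has_consent("alice", "marketing") is True
assert has_consent("alice", "profiling") is False
assert has_consent("bob", "marketing") is False  # no record, no consent
```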
## Non-compliance

Definition: The system or organization fails to comply with applicable privacy legislation, policies, or standards.
DFD element applicability: All DFD elements.
Examples: retaining personal data beyond the stated retention period; processing data for a purpose not covered by the user's consent.
Threat trees (selected):
Mitigations: ENFORCE (Create, Maintain, Uphold), DEMONSTRATE (Record, Audit, Report), GDPR compliance framework.
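The DEMONSTRATE (Record, Audit) strategy can be sketched as a hash-chained audit log: each entry commits to the previous entry's hash, so later tampering is detectable during an audit. A minimal, illustrative implementation (the event shapes are hypothetical):

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)  # deterministic serialization
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute the chain; any modified or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"action": "export", "subject": "alice"})
append_entry(log, {"action": "delete", "subject": "alice"})
assert verify(log) is True
log[0]["event"]["subject"] = "mallory"  # tampering breaks the chain
assert verify(log) is False
```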
## Step 1: Model the system

Create a Data Flow Diagram (DFD) of the system showing external entities, processes, data stores, data flows, and trust boundaries.
## Step 2: Map threat categories to DFD elements

For each DFD element, determine which LINDDUN threat categories apply:
| DFD Element | L | I | N | D | DD | U | NC |
|---|---|---|---|---|---|---|---|
| External entity (data subject) | X | X | | | | X | X |
| External entity (third party) | X | X | | | | | X |
| Process | X | | X | X | X | X | X |
| Data store | X | X | X | | X | | X |
| Data flow | X | X | X | X | X | | X |
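This mapping can be encoded directly, mirroring the per-category applicability noted in each section above (the element-type keys are illustrative names):

```python
# LINDDUN threat categories applicable per DFD element type.
APPLICABLE = {
    "external_entity_data_subject": {"L", "I", "U", "NC"},
    "external_entity_third_party": {"L", "I", "NC"},
    "process": {"L", "N", "D", "DD", "U", "NC"},
    "data_store": {"L", "I", "N", "DD", "NC"},
    "data_flow": {"L", "I", "N", "D", "DD", "NC"},
}

def applicable_threats(element_type: str) -> set:
    """Return the threat categories to examine for a DFD element type."""
    return APPLICABLE.get(element_type, set())

assert "DD" in applicable_threats("data_flow")
assert "U" not in applicable_threats("data_store")
```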
## Step 3: Elicit and document threats

For each applicable threat category on each DFD element, walk through the threat tree catalog to identify specific threats. Document each threat with: the affected DFD element, the threat category, a description of the threat scenario, the assumptions under which it applies, and estimates of likelihood and impact.
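A threat record might be captured as a small data structure; the field names and 1-5 scales below are one plausible convention, not a LINDDUN requirement:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One documented LINDDUN threat (field names are illustrative)."""
    element: str                 # affected DFD element
    category: str                # L, I, N, D, DD, U, or NC
    description: str
    assumptions: list = field(default_factory=list)
    likelihood: int = 1          # 1 (rare) .. 5 (almost certain)
    impact: int = 1              # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

t = Threat("user_db", "I", "Re-identification via quasi-identifiers",
           assumptions=["adversary holds an external dataset"],
           likelihood=3, impact=4)
assert t.risk_score == 12
```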
## Step 4: Prioritize threats

Rank threats by risk score and apply the organization's risk acceptance thresholds to decide which threats must be mitigated, which need further review, and which can be accepted.
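Ranking and triage might look like the sketch below; the threshold values are hypothetical placeholders for an organization's own risk acceptance policy:

```python
# Hypothetical triage thresholds on a 1-25 (likelihood x impact) scale.
MITIGATE_AT = 12   # score >= 12 must be mitigated
ACCEPT_BELOW = 4   # score < 4 may be accepted without mitigation

def triage(threats):
    """Sort threats by descending risk score and label each decision."""
    ranked = sorted(threats, key=lambda t: t["score"], reverse=True)
    for t in ranked:
        if t["score"] >= MITIGATE_AT:
            t["decision"] = "mitigate"
        elif t["score"] < ACCEPT_BELOW:
            t["decision"] = "accept"
        else:
            t["decision"] = "review"
    return ranked

threats = [{"id": "T1", "score": 20}, {"id": "T2", "score": 3},
           {"id": "T3", "score": 8}]
ranked = triage(threats)
assert [t["id"] for t in ranked] == ["T1", "T3", "T2"]
assert [t["decision"] for t in ranked] == ["mitigate", "review", "accept"]
```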
## Step 5: Select mitigations

Map each threat to privacy design patterns and specific technical controls. Document the mitigation strategy and the responsible team.
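The category-to-mitigation mapping listed in the sections above can be kept as a lookup table to seed this step:

```python
# Candidate mitigation strategies per category, taken from the sections above.
MITIGATIONS = {
    "L": ["HIDE (Dissociate, Mix)", "SEPARATE (Isolate)", "MINIMIZE (Strip)",
          "differential privacy", "pseudonymization with context separation"],
    "I": ["HIDE (Encrypt, Obfuscate)", "MINIMIZE (Strip)",
          "ABSTRACT (Group, Summarize)", "k-anonymity", "differential privacy"],
    "N": ["HIDE (Mix)", "group signatures", "ring signatures",
          "deniable encryption"],
    "D": ["HIDE (Mix, Obfuscate)", "cover traffic", "onion routing",
          "steganography", "ORAM"],
    "DD": ["HIDE (Encrypt)", "access control", "TLS 1.3",
           "field-level encryption", "secure deletion", "DLP systems"],
    "U": ["INFORM (Supply, Notify, Explain)", "layered privacy notices",
          "just-in-time notifications", "consent management"],
    "NC": ["ENFORCE (Create, Maintain, Uphold)",
           "DEMONSTRATE (Record, Audit, Report)", "GDPR compliance framework"],
}

def candidate_mitigations(category: str) -> list:
    """Look up candidate mitigations for a threat category."""
    return MITIGATIONS.get(category, [])

assert "k-anonymity" in candidate_mitigations("I")
```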
## Step 6: Validate and iterate

Verify that the selected mitigations adequately address each threat. Update the DFD to reflect implemented controls, then re-assess residual risk.
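One simple way to model the re-assessment, assuming each control is assigned an estimated effectiveness in [0, 1] (both the formula and the estimates are assumptions, not part of LINDDUN itself):

```python
def residual_risk(initial_score: float, control_effectiveness: float) -> float:
    """Scale the original risk score by the fraction the control leaves open."""
    return round(initial_score * (1.0 - control_effectiveness), 2)

# A control judged 80% effective against a score-20 threat leaves a score of 4.
assert residual_risk(20, 0.8) == 4.0
# No control, no reduction.
assert residual_risk(20, 0.0) == 20.0
```

Residual scores can then be fed back through the Step 4 thresholds to confirm they fall within the accepted range.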