From harness-claude
Guides end-to-end threat modeling from system decomposition, threat enumeration, risk rating, to mitigation tracking. Use for new projects, SDLC integration, team workshops, model reviews, and compliance audits.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude

This skill uses the workspace's default tool permissions.
> End-to-end threat modeling from system decomposition through threat enumeration, risk rating, and mitigation tracking -- the operational backbone of proactive security design
Systems that skip formal threat modeling consistently exhibit predictable vulnerability patterns: authentication bypasses on internal APIs assumed to be unreachable, unprotected administrative endpoints, missing encryption on sensitive data flows between services, and insufficient logging that makes incident response impossible. These are not exotic zero-day attacks -- they are foreseeable consequences of never asking "what could go wrong?" systematically.
The threat modeling process exists to make the implicit explicit before attackers do.
NIST estimates that vulnerabilities found in production are 6-10x more expensive to remediate than those identified during design. Threat modeling shifts security left to where it is cheapest and most effective.
Scope the model. Define system boundaries explicitly -- what is in-scope and what is out-of-scope for this threat modeling exercise. Name the assets worth protecting: user PII, financial transaction data, session tokens, API keys, cryptographic secrets, audit logs. Identify regulatory requirements (GDPR, HIPAA, PCI-DSS, SOC2) that impose specific security controls on in-scope data.
A model without a defined scope either balloons to unworkable size or silently excludes critical components. Write the scope statement in the threat model document before drawing anything.
Build the Data Flow Diagram. Use standard DFD notation: rectangles for external entities, circles for processes, parallel lines for data stores, and arrows for data flows.
Label every flow with the data it carries -- not "request/response" but "JWT token in Authorization header," "credit card number in POST body," "password hash read from users table." Ambiguous labels produce ambiguous threat analysis.
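The labeling discipline above can be sketched as data. A minimal model of a DFD as labeled flows, with an ambiguity check (element names and flows are illustrative, not from any specific system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str  # DFD element the data leaves
    dest: str    # DFD element the data enters
    data: str    # the specific data carried -- never just "request"

flows = [
    Flow("Browser", "API Gateway", "JWT token in Authorization header"),
    Flow("API Gateway", "Payments Service", "credit card number in POST body"),
    Flow("Auth Service", "Users DB", "password hash read from users table"),
]

# Flag flows whose label is too generic to support threat analysis.
GENERIC = {"request", "response", "data", "payload"}
vague = [f for f in flows if f.data.lower() in GENERIC]
assert not vague  # every flow must name the data it carries
```

A check like this can run in CI against a machine-readable DFD so that vague labels are rejected before the model is reviewed.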
Identify trust boundaries. Every point where data crosses from one trust level to another is an attack surface. Draw dashed lines on the DFD at each boundary. Common trust boundaries include: the network perimeter between the internet and your services, the line between anonymous and authenticated users, service-to-service hops inside the deployment, the application-to-database connection, and integrations with third-party APIs.
See security-trust-boundaries for deep treatment of boundary analysis.
Enumerate threats. Apply a structured methodology to each DFD element and each trust boundary crossing. STRIDE is the most accessible for development teams (see security-threat-modeling-stride). PASTA (Process for Attack Simulation and Threat Analysis) is a 7-stage risk-centric process suited for mature security organizations. Attack trees provide depth analysis for individual high-value targets.
Document each threat with: a unique ID, a natural-language description following the form "An attacker could [action] by [method], resulting in [impact]," the affected DFD component, the threat category, and the specific data or asset at risk.
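A hypothetical register entry carrying the fields listed above might look like the following (the threat itself is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    threat_id: str    # unique ID, e.g. "T-001"
    description: str  # "An attacker could [action] by [method], resulting in [impact]"
    component: str    # affected DFD element
    category: str     # threat category (e.g. a STRIDE letter)
    asset: str        # the specific data or asset at risk

t = Threat(
    threat_id="T-001",
    description=("An attacker could forge session tokens by exploiting a weak "
                 "signing key, resulting in account takeover"),
    component="Auth Service",
    category="Spoofing",
    asset="session tokens",
)
```

Keeping entries in a structured form like this (rather than free prose) makes the later rating, ranking, and closure steps mechanical.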
Rate risk. Assign each threat a risk score using one of two common frameworks: DREAD (score Damage, Reproducibility, Exploitability, Affected users, and Discoverability, then average into a composite) or a simple likelihood × impact matrix (rate each dimension low, medium, or high).
Whichever framework you choose, apply it consistently across all threats. Mixed rating systems produce incomparable priorities. Rank threats by composite score in descending order.
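Consistent composite scoring and ranking can be sketched as follows, here using the DREAD factors scored 1-10 and averaged; the threats and numbers are invented for illustration, and the ranking logic is the point rather than the particular framework:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    # Each factor scored 1-10; the composite is the plain average.
    return (damage + reproducibility + exploitability + affected + discoverability) / 5

threats = {
    "T-001 admin API privilege escalation": dread_score(9, 8, 7, 9, 6),
    "T-002 status page info disclosure":    dread_score(2, 9, 8, 3, 9),
    "T-003 payment data in transit":        dread_score(9, 5, 4, 8, 4),
}

# Rank by composite score, descending -- address the top of the list first.
ranked = sorted(threats.items(), key=lambda kv: kv[1], reverse=True)
```

Because every threat passes through the same function, the resulting scores are comparable, which is exactly what mixed rating systems fail to provide.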
Define mitigations. For each threat above the risk acceptance threshold, specify: the mitigating control, a named owner, a target date, and a verification criterion that proves the control works.
Mitigations without owners do not get implemented. Mitigations without verification criteria cannot be audited.
Track and iterate. Store the threat model alongside the codebase -- a docs/security/threat-model.md file or a dedicated section in the project wiki that is versioned with the code.
Review the model quarterly at minimum and on every significant architecture change (new service, new data store, new external integration, new deployment target). Add new threats as features are added. Close mitigated threats with evidence (link to the PR that implemented the control, the test that verifies it, or the configuration that enforces it). A threat model that is not maintained decays into a false sense of security.
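The review cadence and the evidence rule can both be enforced mechanically. A minimal staleness check, assuming a register with illustrative field names (the dates, IDs, and PR reference are invented), could run in CI:

```python
from datetime import date, timedelta

model = {
    "last_reviewed": date(2024, 1, 15),
    "threats": [
        {"id": "T-001", "status": "closed", "evidence": "PR #214, test_auth_tokens.py"},
        {"id": "T-002", "status": "open",   "evidence": None},
    ],
}

REVIEW_INTERVAL = timedelta(days=90)  # quarterly at minimum

def problems(model, today):
    """Return a list of maintenance violations in the threat model."""
    issues = []
    if today - model["last_reviewed"] > REVIEW_INTERVAL:
        issues.append("review overdue")
    for t in model["threats"]:
        # Closed threats must carry evidence: the PR, test, or config
        # that implements and verifies the control.
        if t["status"] == "closed" and not t["evidence"]:
            issues.append(f"{t['id']} closed without evidence")
    return issues
```

Failing the build on a non-empty list keeps the model from decaying into the false sense of security described above.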
STRIDE is the best starting point for development teams new to threat modeling. Its mnemonic structure provides a checklist that prevents category omission, and its element-based approach maps naturally to system architecture diagrams that developers already understand. Limitation: STRIDE is classification-oriented -- it tells you what categories of threats exist but does not inherently prioritize them or model attacker behavior chains.
PASTA (Process for Attack Simulation and Threat Analysis) is a 7-stage risk-centric framework: define objectives, define technical scope, application decomposition, threat analysis, vulnerability analysis, attack modeling, and risk/impact analysis. PASTA produces more rigorous output than STRIDE because it incorporates business impact analysis and attacker simulation, but it requires more security expertise and more time. Best suited for organizations with dedicated application security teams.
Attack trees model how an attacker achieves a specific goal by decomposing it into sub-goals connected by AND/OR logic. An attack tree for "steal user credentials" might decompose into "exploit SQL injection OR phish an admin OR compromise a backup," with each branch decomposing further. Attack trees provide depth that STRIDE and PASTA do not, but they analyze one goal at a time rather than surveying the whole system.
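The AND/OR decomposition can be evaluated mechanically. A small sketch, reusing the goal and leaf names from the example above (the backup branch is split into two AND-ed steps for illustration), answers whether the goal is reachable given which leaf attacks are currently feasible:

```python
def reachable(node, feasible):
    """Evaluate an attack tree: node is ("leaf", name), ("or", ...) or ("and", ...)."""
    kind, *children = node
    if kind == "leaf":
        return children[0] in feasible
    if kind == "or":
        return any(reachable(c, feasible) for c in children)
    if kind == "and":
        return all(reachable(c, feasible) for c in children)
    raise ValueError(f"unknown node kind: {kind}")

steal_credentials = (
    "or",
    ("leaf", "exploit SQL injection"),
    ("leaf", "phish an admin"),
    ("and",  # compromising a backup needs both sub-steps
        ("leaf", "access backup storage"),
        ("leaf", "break backup encryption")),
)

reachable(steal_credentials, {"access backup storage"})  # False: AND branch incomplete
reachable(steal_credentials, {"phish an admin"})         # True: one OR branch suffices
```

Re-running the evaluation as mitigations land (removing leaves from the feasible set) shows which branches of the tree a control actually cuts off.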
For most teams, the recommended approach is: STRIDE for breadth (survey the whole system), then attack trees for depth (analyze the top 5 threats in detail).
A 90-minute workshop with 4-6 participants is the most effective format for producing a threat model that the team understands and owns.
Participants: 2 developers who built the system, 1 architect who designed it, 1 security engineer (or the most security-aware team member), 1 QA engineer, and 1 product owner who understands business impact. If no dedicated security engineer is available, the architect assumes that role.
Structure: 10 minutes to confirm scope and assets, 20 minutes to walk the DFD and mark trust boundaries, 35 minutes of threat enumeration against each boundary crossing, 15 minutes of risk rating, and 10 minutes to assign mitigation owners.
Output: A threat register document committed to the repository within 24 hours of the workshop. The document becomes a living artifact updated in subsequent sprints.
For teams that cannot dedicate 90 minutes, integrate threat modeling into existing ceremonies:
Per-story threat check (5 minutes during backlog refinement): For each user story, ask three questions: Does the story touch sensitive data? Does it cross or create a trust boundary? Does it change authentication, authorization, or an external interface?
If any answer is "yes," add a STRIDE analysis task to the story's acceptance criteria.
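The per-story check is simple enough to encode as a checklist helper (question wording paraphrased from this guide; the function name is illustrative):

```python
QUESTIONS = (
    "Does the story touch sensitive data?",
    "Does it cross or create a trust boundary?",
    "Does it change authentication, authorization, or an external interface?",
)

def needs_stride_task(answers):
    """answers maps each question to True/False; any yes adds a STRIDE task."""
    return any(answers.values())
```

A helper like this can live in a backlog-grooming bot or a story template so the check is never skipped.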
Architecture Decision Record (ADR) threat appendix: When writing an ADR for a significant architecture change, append a threat analysis section that applies STRIDE to the new or modified components. This catches threats at the decision point rather than after implementation.
A complete threat model document stored in the repository should contain these sections: the scope statement (in-scope and out-of-scope components, assets, and regulatory requirements), the Data Flow Diagram with trust boundaries marked, the threat register with IDs, descriptions, categories, and risk ratings, the mitigation plan with owners and verification criteria, and a review log recording when the model was last updated and why.
Store this as docs/security/threat-model.md or in a security/ directory within the project. The document must be reviewable in pull requests so that architecture changes trigger threat model updates in the same review cycle.
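One way to make the same-review-cycle rule enforceable is a PR gate: if architecture-relevant files changed in a diff but the threat model did not, flag the pull request. A sketch, with illustrative path prefixes:

```python
# Hypothetical path prefixes for architecture-relevant files; adjust per repo.
ARCH_PREFIXES = ("services/", "infra/", "docs/adr/")
THREAT_MODEL = "docs/security/threat-model.md"

def needs_model_update(changed_files):
    """True when architecture files changed but the threat model did not."""
    touched_arch = any(f.startswith(ARCH_PREFIXES) for f in changed_files)
    touched_model = THREAT_MODEL in changed_files
    return touched_arch and not touched_model

needs_model_update(["services/auth/main.go"])                # True: model untouched
needs_model_update(["services/auth/main.go", THREAT_MODEL])  # False: updated together
```

Wired into CI, this turns "the model must be reviewable in pull requests" from a convention into a check.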
Track these metrics to assess whether threat modeling is producing value: threats identified at design time versus vulnerabilities discovered in production, the percentage of high-risk threats with implemented and verified mitigations, the average time from threat identification to mitigation closure, and whether the review cadence is actually being met.
Threat modeling after implementation. If the system is already built and deployed, the threat model becomes a retrospective documentation exercise rather than a design influence. Threats found post-implementation are dramatically more expensive to remediate because they require rework of existing code, re-testing of deployed functionality, and often coordination of a security patch release. Model before you build. The cheapest vulnerability to fix is the one you designed out.
Security team owns the threat model alone. When the security team produces the threat model in isolation and hands it to developers as a list of mandates, two things go wrong: developers do not understand the reasoning behind the controls (and therefore implement them incorrectly or resentfully), and the threat model does not reflect the actual system (because the security team does not know all the data flows). The development team must participate in and co-own the threat model.
DFD too detailed or too abstract. A DFD with 50 processes is unworkable -- applying STRIDE to each element produces hundreds of threats, most of which are low-value duplicates. A DFD with 2 boxes ("frontend" and "backend") misses all internal threats, all service-to-service attack vectors, and all data store risks. Target 5-15 DFD elements per model. If a subsystem is high-risk (e.g., the authentication service), decompose it into its own DFD at a finer granularity and run a separate threat model.
No risk rating. Listing 200 threats without prioritization leads to analysis paralysis. The development team stares at a wall of threats and does not know where to start. Not all threats are equal -- a low-likelihood, low-impact information disclosure on a public status page is not comparable to a high-likelihood, high-impact elevation of privilege on the admin API. Rate them, rank them, and address the top 10 first.
Confusing threat modeling with penetration testing. Threat modeling is a design-time analytical activity that identifies what could go wrong based on the system's architecture. Penetration testing is a validation-time empirical activity that confirms whether specific vulnerabilities exist in the running system. Threat modeling without penetration testing is unverified theory. Penetration testing without threat modeling is undirected probing. They are complementary -- threat modeling tells you where to look; penetration testing tells you what you actually find.
Treating threat modeling as a document, not a process. Teams that produce a polished PDF threat model and file it away have completed a documentation exercise, not a security activity. The value of threat modeling is in the ongoing conversation it creates -- the team's shared understanding of where the system is vulnerable and what trade-offs were made. Update the model when the architecture changes, review it during sprint planning, and reference it when making design decisions.