From harness-claude
Identifies trust boundaries in system architectures to guide security control placement. Use for reviewing diagrams, microservices auth, API validation, network segmentation, and blast radius assessment.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude

This skill uses the workspace's default tool permissions.
> Every security control exists because data crosses from a trusted zone to a less-trusted one -- identify the boundaries first, then concentrate defenses there
The majority of exploitable vulnerabilities exist at trust boundary crossings -- the points where data moves between zones of different privilege levels. SQL injection occurs at the boundary between application code and the database query engine. XSS occurs at the boundary between server-generated content and the browser's rendering engine. SSRF occurs at the boundary between user-controlled input and server-side HTTP clients. API authorization failures occur at the boundary between an authenticated session and resource-level access control.
If you cannot draw your trust boundaries on an architecture diagram, you cannot reason about where your controls should be, and you will place them in the wrong locations -- or omit them entirely.
Enumerate trust zones. A trust zone is a region where all components operate at the same privilege level and share the same trust assumptions. Identify each zone by asking: "If component A is compromised within this zone, what else can the attacker reach without crossing another control?"
Common zones in modern architectures:
Each zone should have a clearly stated trust assumption documented alongside the architecture: "Components in this zone have been authenticated at the gateway but have not been authorized for specific resources."
Draw boundaries between zones. Every point where data crosses from one zone to another is a trust boundary. Mark these with dashed lines on architecture diagrams -- this is standard DFD notation for trust boundaries.
Label each boundary with:
Apply the boundary security principle. At every trust boundary crossing, apply all five control categories:
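The five categories are not enumerated here; a commonly cited set (an assumption on my part, not necessarily the author's exact list) is authentication, authorization, input validation, output sanitization, and audit logging. A minimal sketch of applying them in order at a single crossing, with hypothetical token and policy stores:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("boundary")

VALID_TOKENS = {"tok-abc": "alice"}       # stand-in for a real identity provider
PERMISSIONS = {"alice": {"orders:read"}}  # stand-in for a real policy store

def cross_boundary(token, permission, raw_payload):
    """Apply each control category, in order, at the boundary crossing."""
    # 1. Authentication: who is the caller?
    user = VALID_TOKENS.get(token)
    if user is None:
        log.warning("auth failure")          # 5. Audit: log the denial
        raise PermissionError("unauthenticated")
    # 2. Authorization: may this caller perform this action?
    if permission not in PERMISSIONS.get(user, set()):
        log.warning("authz failure for %s", user)
        raise PermissionError("unauthorized")
    # 3. Input validation: strict schema, reject anything unexpected
    payload = json.loads(raw_payload)
    if set(payload) != {"order_id"} or not isinstance(payload["order_id"], int):
        raise ValueError("schema violation")
    # 4. Output sanitization would apply on the response path (omitted here)
    # 5. Audit logging of the allowed crossing
    log.info("%s crossed boundary with %s", user, permission)
    return payload["order_id"]
```

The point of the sketch is the ordering: identity is established before policy is consulted, and nothing is parsed into application objects until both checks pass.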
Classify boundary types. Different boundary types require different control implementations:
Assess blast radius per zone. For each zone, answer: "If an attacker gains code execution in this zone, what is the maximum damage they can inflict before hitting another boundary?" The answer defines the blast radius.
Minimize blast radius through:
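Blast-radius assessment reduces to a reachability computation over the zone graph: an edge means "reachable without crossing an enforced boundary." A sketch, with illustrative zone names of my own choosing:

```python
from collections import deque

# Hypothetical flat network: every zone can reach most others directly.
FLAT_NETWORK = {
    "web": {"api", "db", "queue"},
    "api": {"db", "queue"},
    "db": set(),
    "queue": {"api"},
}

# After microsegmentation: web reaches only api; db sits behind its own boundary.
SEGMENTED = {
    "web": {"api"},
    "api": set(),
    "db": set(),
    "queue": set(),
}

def blast_radius(graph, start):
    """All zones an attacker can reach from `start` without hitting a boundary."""
    seen, frontier = {start}, deque([start])
    while frontier:
        zone = frontier.popleft()
        for nxt in graph.get(zone, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen - {start}
```

Comparing `blast_radius(FLAT_NETWORK, "web")` against `blast_radius(SEGMENTED, "web")` makes the effect of segmentation concrete: the same entry point reaches three zones in the flat design and one in the segmented one.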
Validate boundary effectiveness. For each identified trust boundary, verify these invariants:
Any "yes" answer represents a boundary gap that must be mitigated before the system can be considered secure at that crossing point.
Most exploitable vulnerabilities arise not from boundaries that were analyzed and found weak, but from boundaries that developers did not recognize as boundaries at all. The most dangerous implicit boundary is the service-to-service call within a "trusted" network.
Example: Microservice A receives user input, performs some validation, and sends a transformed payload to Microservice B via an internal message queue. Developers assume "B only receives messages from A, so B does not need input validation." This assumption is wrong for three reasons:
The rule: treat every deserialization point as a trust boundary. If a component parses JSON, Protocol Buffers, XML, YAML, or any structured data from any external source -- including "trusted internal" sources -- it must validate the schema and reject malformed input. The cost of redundant validation is negligible. The cost of a missing boundary is a breach.
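A sketch of what this looks like for Service B consuming the internal queue, assuming a single hypothetical message type with a `user_id` and an `action` field:

```python
import json

MAX_BYTES = 64 * 1024  # reject oversized payloads before parsing

# Strict schema for the one message type B accepts (fields are illustrative).
SCHEMA = {"user_id": int, "action": str}

def parse_queue_message(raw: bytes) -> dict:
    """Treat the queue as a trust boundary: size limit, exact schema, no extras."""
    if len(raw) > MAX_BYTES:
        raise ValueError("payload too large")
    msg = json.loads(raw)  # json.loads is safe to call; the *content* is untrusted
    if not isinstance(msg, dict) or set(msg) != set(SCHEMA):
        raise ValueError("unexpected or missing fields")
    for field, ftype in SCHEMA.items():
        if not isinstance(msg[field], ftype):
            raise ValueError(f"bad type for {field}")
    return msg
```

Note that the check is exact (`set(msg) != set(SCHEMA)`) rather than a subset test: unknown fields are rejected, not ignored, so a payload smuggled through Service A cannot carry extra data into B.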
In Kubernetes environments, trust boundaries are layered and each layer has distinct control mechanisms:
Each layer is a trust boundary with its own authentication, authorization, and audit mechanism. Defense in depth means that compromising one layer does not automatically grant access to the next.
Traditional network architecture establishes a single hard boundary -- the firewall -- and treats everything inside as trusted. This model fails because:
Zero trust eliminates the concept of a trusted interior. Every component boundary is a trust boundary. Every request is authenticated and authorized regardless of network position. This is not about adding more firewalls -- it is about making every service enforce its own boundary controls independently.
The practical implication: service-to-service authentication (mTLS, JWT validation, signed requests) is mandatory, not optional. Network location is no longer a proxy for trust. See security-zero-trust-principles for the complete zero trust architecture model.
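mTLS and JWT validation require supporting infrastructure, but the signed-requests option can be sketched with nothing beyond the standard library. A minimal HMAC request-signing sketch, assuming a shared key distributed out of band (the key value here is a placeholder, not a recommendation):

```python
import hmac
import hashlib

SHARED_KEY = b"rotate-me"  # in practice: per-service keys from a secret store

def sign(body: bytes) -> str:
    """Caller attaches this signature alongside the request body."""
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Receiver recomputes and compares; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(body), signature)
```

The receiving service rejects any request whose signature does not verify, regardless of which network segment it arrived from, which is exactly the "network location is not a proxy for trust" property.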
When documenting trust boundaries for a system, verify that each of these common boundary types has been identified and classified:
Missing any of these boundaries means missing the threats that exploit them.
Not all trust boundaries need identical controls. The strength of controls at a boundary should be proportional to the sensitivity of the data crossing it:
Applying maximum controls uniformly across all boundaries is wasteful and creates operational friction that leads teams to bypass controls entirely. Match control strength to data sensitivity.
Boundary controls must be tested, not assumed. For each trust boundary, write tests that verify:
These tests serve double duty: they verify the controls work today, and they prevent future regressions when the boundary code is refactored.
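As a shape for such tests, here is a sketch against a stand-in boundary handler (the handler and its return codes are hypothetical; substitute calls to your real endpoint):

```python
import unittest

def handle(token, payload):
    """Stand-in for a real boundary endpoint returning HTTP-style status codes."""
    if token != "valid":
        return 401  # unauthenticated callers are rejected first
    if not isinstance(payload, dict) or set(payload) != {"id"}:
        return 400  # malformed or over-broad payloads are rejected
    return 200

class BoundaryControlTests(unittest.TestCase):
    def test_rejects_missing_credentials(self):
        self.assertEqual(handle(None, {"id": 1}), 401)

    def test_rejects_malformed_payload(self):
        self.assertEqual(handle("valid", {"id": 1, "admin": True}), 400)

    def test_accepts_wellformed_authenticated_request(self):
        self.assertEqual(handle("valid", {"id": 1}), 200)
```

The negative cases are the valuable ones: a boundary test suite that only exercises the happy path proves nothing about the boundary.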
The "trusted internal network" assumption. Assuming that anything inside the VPC, firewall, or corporate network is inherently safe. Internal networks are compromised routinely -- lateral movement is the single most common post-exploitation technique in breach reports (Mandiant M-Trends, Verizon DBIR). Every service-to-service call crosses a trust boundary even within the same network segment. The internal network is a transport layer, not a security control.
Validating input at the perimeter only. Placing all input validation at the API gateway or edge proxy and trusting all data downstream. This creates a single point of failure: if any downstream service is reachable by another path (internal message queue, batch job, admin endpoint, debugging interface, or a future integration not yet built), the validation is completely bypassed. Every component must validate input at its own boundary, regardless of what upstream components may have done.
Symmetric trust across an asymmetric boundary. Two services that mutually trust each other equally when the data flow is asymmetric in risk. If Service A sends user-controlled data to Service B, then B must validate that data even if A is a "trusted" internal service -- because A might be relaying attacker input without modification or with insufficient sanitization. Trust must be proportional to the risk of the data, not the reputation of the sender.
Missing deserialization boundaries. Deserializing data from any external source (JSON.parse, pickle.loads, Java ObjectInputStream, YAML.load, XML parsing) without treating the deserialization point as a trust boundary. Deserialization of untrusted data is effectively code execution in many languages and frameworks. Every deserialization of data from a less-trusted source must: validate against a strict schema, reject unexpected types and fields, enforce maximum payload size, and use safe deserialization methods (e.g., JSON.parse is generally safe; Java ObjectInputStream is not without explicit class allowlisting; Python pickle is never safe for untrusted input).