npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude
Use when performing penetration testing targeting memory corruption vulnerabilities in native applications. Keywords: buffer overflow, heap overflow, use-after-free, integer overflow, format string, stack overflow, type confusion, out-of-bounds read/write
Guide secure migration of code from memory-unsafe languages (C, C++, Assembly) to memory-safe languages (Rust, Go, Java, C#, Swift). Use when migrating or rewriting legacy C/C++ code, designing FFI boundaries between safe and unsafe code, writing new modules in existing C/C++ codebases, reviewing mixed-language projects, planning memory safety roadmaps, or when an AI agent is about to generate new C/C++ code that could be written in a memory-safe language instead. Also triggers on CISA/NSA memory safety compliance discussions.
Use AddressSanitizer to detect memory safety bugs in C/C++ programs. Identifies use-after-free, buffer overflow, memory leaks, and other memory errors.
Memory corruption vulnerabilities account for 70% of critical CVEs in C/C++ codebases -- choose memory-safe languages by default, and when you cannot, understand the vulnerability classes and mitigations
Memory safety vulnerabilities are the most dangerous class of software bugs because they enable arbitrary code execution -- the attacker does not merely read data or cause a crash; they execute code of their choosing on the target system.
The 70% statistic means that choosing a memory-safe language eliminates the majority of the vulnerabilities that enable remote code execution, privilege escalation, and information disclosure. Logic bugs, authentication bugs, and authorization bugs constitute the remaining 30% -- they remain regardless of language choice.
Choose memory-safe languages for all new projects. The default choice for any new system, service, or component should be a memory-safe language unless there is a specific, documented technical reason requiring otherwise.
Understand the memory safety vulnerability taxonomy. Each vulnerability class has a distinct mechanism, exploitation technique, and set of mitigations:
Integer overflow in allocation sizes: size = count * element_size overflows if both values are large, producing a small allocation. The subsequent write overflows the undersized buffer. Use checked arithmetic (checked_mul in Rust, Math.multiplyExact in Java) or validate input ranges before arithmetic.
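A minimal sketch of the checked pattern in C, using the __builtin_mul_overflow builtin named later in this document; struct entry and alloc_entries are illustrative names:

```c
#include <stdlib.h>

struct entry { int id; char name[56]; };   /* illustrative type */

/* Refuse the allocation when count * sizeof(struct entry) wraps around,
   instead of silently producing an undersized buffer. */
void *alloc_entries(size_t count) {
    size_t size;
    if (__builtin_mul_overflow(count, sizeof(struct entry), &size))
        return NULL;   /* the builtin returns nonzero on overflow */
    return malloc(size);
}
```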
Format string vulnerabilities: attacker-controlled input reaches printf or equivalent functions as the format argument. The attacker uses format specifiers (%x to read stack memory, %n to write to memory) to achieve arbitrary read/write.
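The vulnerable and fixed patterns side by side (log_message is an illustrative name):

```c
#include <stdio.h>

/* Vulnerable: user input is the format string. Input such as "%x %x %n"
   is interpreted as format specifiers that read and write memory. */
void log_message(const char *user_input) {
    printf(user_input);
}

/* Fixed: a constant format string; user data is only ever an argument.
   -Wformat-security (listed below) warns about the pattern above. */
void log_message_safe(const char *user_input) {
    printf("%s", user_input);
}
```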
When using C/C++, apply layered defense-in-depth mitigations. No single mitigation is sufficient; all should be enabled simultaneously: -fstack-protector-strong (stack canaries for functions with buffers), -D_FORTIFY_SOURCE=2 (bounds-checked versions of libc functions like memcpy and sprintf), -fPIE -pie (position-independent executable, required for full ASLR), and -Wformat -Wformat-security (format string warnings).

Isolate native/FFI boundaries as trust boundaries. When a memory-safe application calls into C/C++ libraries via FFI (Foreign Function Interface), JNI (Java Native Interface), or WASM, treat the native boundary as a security boundary. Validate all inputs passed to native functions -- a buffer overflow in the native library can corrupt the memory-safe application's heap. Limit the native component's access to system resources. Use sandboxing (seccomp-bpf on Linux, WASM linear memory isolation, pledge/unveil on OpenBSD) to contain exploitation of the native component.
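A sketch of what that validation can look like on the native side; all names and the 4096-byte cap are illustrative, not a prescribed API:

```c
#include <stddef.h>
#include <string.h>

#define MAX_PAYLOAD 4096
static unsigned char scratch[MAX_PAYLOAD];

/* A defensive native entry point called across an FFI boundary (JNI,
   ctypes, cgo). Every argument is treated as untrusted: null check,
   length validation, and a hard upper bound before any copy. */
int native_process(const unsigned char *buf, size_t len) {
    if (buf == NULL || len == 0 || len > MAX_PAYLOAD)
        return -1;              /* reject rather than trust the caller */
    memcpy(scratch, buf, len);  /* the copy is provably in bounds here */
    /* ... process scratch[0..len) ... */
    return 0;
}
```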
Evaluate native dependencies in your supply chain. For every C/C++ dependency: Is there a memory-safe alternative? (rustls instead of OpenSSL, ring instead of libcrypto, image-rs instead of libpng.) Is the dependency actively maintained? What is its CVE history -- how many memory safety CVEs in the last 3 years, and how quickly were they patched? Does the project use sanitizers and fuzzing in CI? Dependencies with poor memory safety hygiene are ticking time bombs in your supply chain.
How a stack buffer overflow enables code execution -- the full chain: A function
allocates a 64-byte buffer on the stack. The return address is stored at a known offset
above the buffer. The attacker provides 80 bytes of input: 64 bytes to fill the buffer,
some padding to reach the return address, and a new address value. When the function
returns, ret pops the attacker-controlled address into the instruction pointer. Classic
exploitation jumps to injected shellcode on the stack (blocked by DEP/NX) or to a known
library function like system() (blocked by ASLR). Modern exploitation uses
Return-Oriented Programming (ROP): the attacker chains short instruction sequences
("gadgets") already present in executable memory, each ending in ret, to build arbitrary
computation. Stack canaries add a random value between the buffer and the return address;
overflow that corrupts the return address also corrupts the canary, which is detected
before ret executes. All mitigations (canary + ASLR + DEP + CFI) should be enabled
simultaneously because each has known bypass techniques.
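For concreteness, a minimal C sketch of the vulnerable pattern this chain starts from (parse_name is an illustrative name):

```c
#include <stdio.h>
#include <string.h>

/* The classic bug: a fixed stack buffer filled from attacker-controlled
   input with no bounds check. Input longer than 63 bytes plus the NUL
   terminator overwrites the saved frame data and the return address
   stored above buf on the stack. */
void parse_name(const char *input) {
    char buf[64];
    strcpy(buf, input);   /* no length check */
    printf("hello, %s\n", buf);
}
```

Compiled with -fstack-protector-strong, the canary check described above aborts the process before the corrupted return address is used.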
Rust's ownership model -- how it prevents memory bugs at compile time: Every value in
Rust has exactly one owner. When ownership is transferred (moved), the original binding is
invalidated -- using it is a compile error. References (borrows) follow strict rules: any
number of immutable references (&T) or exactly one mutable reference (&mut T), never
both simultaneously. The borrow checker enforces these rules at compile time.
Use-after-free is impossible because the compiler tracks lifetimes and rejects code where
a reference outlives the data it points to. Double-free is impossible because a value is
dropped exactly once when its owner goes out of scope. Data races are impossible because
the aliasing rules prevent shared mutable state across threads. The unsafe block opts
out of borrow checker enforcement for specific operations (raw pointer dereferencing, FFI
calls, inline assembly). Every unsafe block in a codebase should be audited for
correctness because the compiler cannot verify its safety invariants.
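To make the contrast concrete from the C side, a sketch that C compilers accept without complaint but whose Rust equivalent (a value used after its owner frees it) the borrow checker rejects at compile time:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *entry = malloc(16);
    if (entry == NULL) return 1;
    strcpy(entry, "stale");
    free(entry);               /* the allocation's lifetime ends here  */
    printf("%s\n", entry);     /* use-after-free: C compiles this;    */
    return 0;                  /* Rust rejects the analogous program  */
}
```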
WebAssembly (WASM) as a sandboxing mechanism for native code: Compile C/C++ code to WebAssembly and execute it in a sandboxed runtime (Wasmtime, Wasmer, WasmEdge). WASM provides strong isolation: linear memory is a contiguous byte array that the module can access, but the module cannot access host memory outside this array. A buffer overflow in the WASM module is confined to the module's linear memory -- it cannot corrupt the host process. System calls are mediated through an explicit import/export interface (WASI for filesystem, network, environment). The module can only access capabilities explicitly granted by the host. This makes WASM an effective containment strategy for running untrusted or memory-unsafe code within a memory-safe host application.
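A minimal host-embedding sketch, assuming the standard wasm-c-api header (wasm.h) that Wasmtime and Wasmer ship; exact vector and error-handling signatures vary across runtime versions, so treat this as an outline rather than a drop-in. module.wasm is an illustrative path:

```c
#include <stdio.h>
#include "wasm.h"   /* the standard C embedding API */

int main(void) {
    /* Read the compiled module's bytes. */
    FILE *f = fopen("module.wasm", "rb");
    if (f == NULL) return 1;
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    fseek(f, 0, SEEK_SET);
    wasm_byte_vec_t binary;
    wasm_byte_vec_new_uninitialized(&binary, len);
    fread(binary.data, 1, len, f);
    fclose(f);

    /* Instantiate with an empty import list: the module receives no host
       capabilities and can only touch its own linear memory. */
    wasm_engine_t *engine = wasm_engine_new();
    wasm_store_t *store = wasm_store_new(engine);
    wasm_module_t *module = wasm_module_new(store, &binary);
    wasm_byte_vec_delete(&binary);
    if (module == NULL) return 1;
    wasm_extern_vec_t imports = WASM_EMPTY_VEC;
    wasm_instance_t *instance =
        wasm_instance_new(store, module, &imports, NULL);
    /* ... look up exports and call them; a buffer overflow inside the
       module corrupts only its linear memory, never the host heap ... */
    wasm_instance_delete(instance);
    wasm_module_delete(module);
    wasm_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```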
The 70% statistic in full context: Microsoft and Google's data shows that 70% of critical and high-severity CVEs are memory safety bugs. This does not mean 70% of all bugs are memory-related -- it means 70% of the bugs that achieve the worst security outcomes (remote code execution, privilege escalation, arbitrary information disclosure) are memory corruption. Logic bugs, authentication bypasses, authorization flaws, injection vulnerabilities, and business logic errors constitute the remaining 30%. Switching to a memory-safe language eliminates the 70% but does not address the 30%. A comprehensive security posture requires both memory safety and secure application design.
"Our C/C++ code is carefully written, so memory safety is not a concern." Every large C/C++ codebase contains undiscovered memory safety bugs. The question is not whether bugs exist but whether they are found by the developer (via tooling and auditing) or by the attacker (via fuzzing and reverse engineering). Google, Microsoft, and Apple -- organizations with world-class C/C++ expertise -- still find hundreds of memory safety bugs per year. Use tooling (sanitizers, fuzzing, static analysis) and prefer memory-safe languages for new code.
Disabling compiler security flags for performance. Stack canaries
(-fstack-protector-strong) add a single comparison per function return. ASLR adds no
runtime overhead. _FORTIFY_SOURCE=2 replaces libc calls with bounds-checked versions
that have negligible overhead for non-trivial programs. Disabling these mitigations to
save microseconds creates exploitable vulnerabilities. Profile the actual performance
impact before considering removal -- in virtually all applications, it is unmeasurable.
Trusting data across FFI/native boundaries without validation. Passing
user-controlled strings, buffer lengths, or indices directly into native functions without
bounds checking. The memory-safe language provides safety within its own runtime, but
unsafe FFI calls bypass all guarantees. Treat every FFI call as crossing a trust
boundary: validate buffer sizes, check null pointers, clamp indices to valid ranges, and
handle native-side errors without crashing the host process.
Ignoring integer overflow in allocation size calculations.
size_t allocation_size = user_count * sizeof(struct Entry) overflows silently in C if
user_count is attacker-controlled and large enough. The resulting small allocation is
followed by a write that overflows the buffer. Use checked arithmetic (Rust:
checked_mul, Java: Math.multiplyExact, C: compiler builtins like
__builtin_mul_overflow) or validate that input values are within expected ranges before
performing arithmetic used in allocations.
"We use a memory-safe language, so memory safety is not our problem." Memory-safe languages frequently call into memory-unsafe native code. Python's most popular libraries (NumPy, Pillow, cryptography) are backed by C extensions. Node.js has native addons and ships with C++ bindings (libuv, V8). Java applications use JNI for performance-critical code and database drivers. Go uses cgo for C library integration. Every native boundary is a potential source of memory corruption. Audit native dependencies, prefer pure-language alternatives where they exist, and sandbox native components where they do not.
Running sanitizers only in development, not in CI. AddressSanitizer, MemorySanitizer, and UBSan find bugs that no amount of code review catches. Running them locally but not in CI means the test suite executes without sanitizer coverage on every PR merge. Configure CI to run the full test suite with sanitizers enabled on every commit. The 2x runtime overhead of ASan is acceptable for CI pipelines.
Treating fuzzing as a one-time activity. Running a fuzzer for a few hours, finding some bugs, fixing them, and never fuzzing again. Memory safety bugs are introduced continuously as code changes. Integrate continuous fuzzing (OSS-Fuzz, ClusterFuzz, or a self-hosted fuzzing cluster) into the development lifecycle. Fuzzing corpora grow over time, covering more code paths and finding deeper bugs.
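A minimal continuous-fuzzing entry point in C, assuming libFuzzer (which OSS-Fuzz and ClusterFuzz both drive); parse_record is an illustrative target name:

```c
#include <stdint.h>
#include <stddef.h>

/* The function under test; assumed here for illustration. */
extern int parse_record(const uint8_t *data, size_t len);

/* libFuzzer calls this entry point repeatedly with mutated inputs.
   Build: clang -g -fsanitize=fuzzer,address harness.c parser.c
   Pairing the fuzzer with ASan turns every memory error a mutated
   input triggers into an immediate, reproducible report. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);   /* must tolerate arbitrary bytes */
    return 0;                   /* nonzero return values are reserved */
}
```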