Auto-activate for shared memory, ring buffer, SPSC/MPMC patterns. Zero-copy IPC patterns: shared memory regions, SPSC/MPMC ring buffers, platform sync primitives, notification mechanisms, and cross-process coordination. Use when implementing IPC primitives or high-performance data transfer. Not for network IPC (gRPC, REST) or message queues.
Map a named region into each process using the platform primitives (POSIX shm_open + mmap, Windows CreateFileMapping).
<example>pub struct ShmRegion {
    ptr: *mut u8,
    len: usize,
    fd: OwnedFd, // RAII: closes on drop
}

impl ShmRegion {
    pub fn create(name: &str, size: usize) -> Result<Self, IpcError> {
        // SAFETY: shm_open + ftruncate + mmap is the standard POSIX pattern.
        // We own the fd exclusively and unlink after mapping.
        // (shm_open, ftruncate, mmap, shm_unlink here are thin wrappers over
        // libc that map a -1 return to IpcError, so `?` applies.)
        unsafe {
            let fd = shm_open(name, O_CREAT | O_RDWR, 0o600)?;
            ftruncate(fd, size as libc::off_t)?;
            let ptr = mmap(
                std::ptr::null_mut(),
                size,
                PROT_READ | PROT_WRITE,
                MAP_SHARED,
                fd,
                0,
            )?;
            // Unlink immediately; the fd keeps the mapping alive. Note the
            // name is gone after this, so share access by passing the fd
            // over a Unix socket, not by reopening the name.
            shm_unlink(name)?;
            Ok(Self { ptr: ptr.cast(), len: size, fd: OwnedFd(fd) })
        }
    }

    pub fn as_slice(&self) -> &[u8] {
        // SAFETY: ptr is valid for len bytes and the mapping lives as long as self
        unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
    }
}

impl Drop for ShmRegion {
    fn drop(&mut self) {
        // SAFETY: we own this mapping exclusively
        unsafe { munmap(self.ptr.cast(), self.len) };
        // fd closed by OwnedFd::drop
    }
}
</example>
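The kernel rounds mmap lengths to whole pages, so compute page-aligned sizes up front. A minimal sketch; `round_to_page` is a hypothetical helper, and the real page size must be queried at runtime (e.g. sysconf(_SC_PAGESIZE)) rather than hard-coded:

```rust
// Hypothetical helper: round a requested mapping length up to the next
// multiple of the page size. 4096 in the demo below is only a common
// default; query the actual value via sysconf(_SC_PAGESIZE) on POSIX.
fn round_to_page(len: usize, page_size: usize) -> usize {
    debug_assert!(page_size.is_power_of_two());
    (len + page_size - 1) & !(page_size - 1)
}

fn main() {
    assert_eq!(round_to_page(1, 4096), 4096);
    assert_eq!(round_to_page(4096, 4096), 4096);
    assert_eq!(round_to_page(4097, 4096), 8192);
}
```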
<guardrails>
Round region sizes up to the page size (sysconf(_SC_PAGESIZE)).
<example>#[repr(C, align(64))] // Cache-line aligned
pub struct SpscHeader {
    write_pos: AtomicU64,
    _pad1: [u8; 56], // Pad to a full cache line to prevent false sharing
    read_pos: AtomicU64,
    _pad2: [u8; 56],
    capacity: u64, // Must be a power of two
}

impl SpscRing {
    pub fn push(&self, data: &[u8]) -> Result<(), RingError> {
        let write = self.header.write_pos.load(Ordering::Relaxed);
        let read = self.header.read_pos.load(Ordering::Acquire);
        // Counters are monotonic u64s; write - read is the bytes in flight.
        let available = self.header.capacity - (write - read);
        if data.len() as u64 > available {
            return Err(RingError::Full);
        }
        let offset = (write % self.header.capacity) as usize;
        // SAFETY: bounds checked above; single writer, so no data race
        unsafe { self.write_at(offset, data) };
        self.header.write_pos.store(write + data.len() as u64, Ordering::Release);
        Ok(())
    }
}
</example>
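The consumer side mirrors push with the roles of the two counters swapped. Below is a self-contained, single-process reduction of the same index arithmetic; `SpscSketch` and its Vec backing are illustrative stand-ins for the shared-memory ring (which uses &self with one producer process and one consumer process):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative stand-in for the shared-memory SpscRing: same monotonic
// write_pos/read_pos arithmetic, but Vec-backed and &mut self so the
// wrap-around logic can be exercised in one process.
pub struct SpscSketch {
    buf: Vec<u8>,
    capacity: u64, // power of two
    write_pos: AtomicU64,
    read_pos: AtomicU64,
}

impl SpscSketch {
    pub fn new(capacity: u64) -> Self {
        assert!(capacity.is_power_of_two());
        Self {
            buf: vec![0; capacity as usize],
            capacity,
            write_pos: AtomicU64::new(0),
            read_pos: AtomicU64::new(0),
        }
    }

    pub fn push(&mut self, data: &[u8]) -> Result<(), &'static str> {
        let write = self.write_pos.load(Ordering::Relaxed);
        let read = self.read_pos.load(Ordering::Acquire);
        if data.len() as u64 > self.capacity - (write - read) {
            return Err("full");
        }
        // Byte-wise copy handles wrap-around at the buffer end.
        for (i, &b) in data.iter().enumerate() {
            self.buf[((write + i as u64) & (self.capacity - 1)) as usize] = b;
        }
        self.write_pos.store(write + data.len() as u64, Ordering::Release);
        Ok(())
    }

    pub fn pop(&mut self, out: &mut [u8]) -> Result<usize, &'static str> {
        let read = self.read_pos.load(Ordering::Relaxed);
        let write = self.write_pos.load(Ordering::Acquire);
        let n = ((write - read) as usize).min(out.len());
        if n == 0 {
            return Err("empty");
        }
        for (i, slot) in out.iter_mut().enumerate().take(n) {
            *slot = self.buf[((read + i as u64) & (self.capacity - 1)) as usize];
        }
        self.read_pos.store(read + n as u64, Ordering::Release);
        Ok(n)
    }
}
```

Note the load orderings flip relative to push: pop reads its own counter Relaxed and the producer's counter Acquire.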
For MPMC variants, tag each slot with an AtomicU64 sequence number. Use Ordering::Acquire for reads and Ordering::Release for writes. Implement notification behind a trait for portability:
<example>pub trait Notifier: Send + Sync {
    fn notify(&self) -> Result<(), IpcError>;
    fn wait(&self, timeout: Option<Duration>) -> Result<(), IpcError>;
}
</example>
On Linux, eventfd gives a single-fd wakeup primitive:
<example>pub struct EventFdNotifier {
    fd: OwnedFd,
}

impl Notifier for EventFdNotifier {
    fn notify(&self) -> Result<(), IpcError> {
        let val: u64 = 1;
        // SAFETY: fd is a valid eventfd; writes are exactly 8 bytes.
        // A real implementation must check the return value and retry on EINTR.
        unsafe { libc::write(self.fd.0, &val as *const u64 as *const _, 8) };
        Ok(())
    }

    fn wait(&self, timeout: Option<Duration>) -> Result<(), IpcError> {
        // poll() the fd with the timeout first; the read below then
        // consumes (resets) the counter.
        let mut buf: u64 = 0;
        // SAFETY: fd is a valid eventfd; reads are exactly 8 bytes
        unsafe { libc::read(self.fd.0, &mut buf as *mut u64 as *mut _, 8) };
        Ok(())
    }
}
</example>
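The pipe-pair fallback can be sketched with std alone; `PairNotifier` is a hypothetical name, and `UnixStream::pair()` stands in for a raw pipe(2) pair (the mechanics are the same: one byte written wakes a blocked reader):

```rust
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;
use std::time::Duration;

// Hypothetical pipe-style notifier: a connected socket pair where a
// single byte written to `tx` wakes a blocking read on `rx`.
pub struct PairNotifier {
    tx: UnixStream,
    rx: UnixStream,
}

impl PairNotifier {
    pub fn new() -> std::io::Result<Self> {
        let (tx, rx) = UnixStream::pair()?;
        Ok(Self { tx, rx })
    }

    pub fn notify(&self) -> std::io::Result<()> {
        // &UnixStream implements Write, so &self suffices.
        (&self.tx).write_all(&[1u8])
    }

    pub fn wait(&self, timeout: Option<Duration>) -> std::io::Result<()> {
        // A read timeout stands in for poll()-with-timeout.
        self.rx.set_read_timeout(timeout)?;
        let mut buf = [0u8; 1];
        (&self.rx).read_exact(&mut buf)?;
        Ok(())
    }
}
```

Unlike eventfd, bytes accumulate in the pipe buffer, so a real implementation should drain pending bytes on wake.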
Other platform options:
- Portable fallback: a pipe() pair for simple notification (write 1 byte to wake).
- macOS/BSD: kqueue + EVFILT_USER for more advanced signaling.
- Windows: CreateEventW / SetEvent / WaitForSingleObject.

Wrap ring buffers for async producers/consumers:
<example>pub struct AsyncRing {
    ring: SpscRing,
    notify: Arc<tokio::sync::Notify>,
}

impl AsyncRing {
    pub async fn push(&self, data: &[u8]) -> Result<(), RingError> {
        loop {
            match self.ring.push(data) {
                Ok(()) => {
                    self.notify.notify_one();
                    return Ok(());
                }
                Err(RingError::Full) => {
                    // Yield and retry
                    tokio::task::yield_now().await;
                }
                Err(e) => return Err(e),
            }
        }
    }

    pub async fn pop(&self, buf: &mut [u8]) -> Result<usize, RingError> {
        loop {
            match self.ring.pop(buf) {
                Ok(n) => return Ok(n),
                Err(RingError::Empty) => {
                    // Notify stores a permit, so a notify_one racing with
                    // this await is not lost.
                    self.notify.notified().await;
                }
                Err(e) => return Err(e),
            }
        }
    }
}
</example>
Use RAII guards for operations that need cleanup on scope exit:
<example>pub struct MappedGuard<'a> {
    region: &'a ShmRegion,
    offset: usize,
    len: usize,
}

impl<'a> MappedGuard<'a> {
    pub fn as_slice(&self) -> &[u8] {
        &self.region.as_slice()[self.offset..self.offset + self.len]
    }
}

impl<'a> Drop for MappedGuard<'a> {
    fn drop(&mut self) {
        // Release the slice back to the pool (release() is a method of the
        // pool allocator owning the region, not shown here)
        self.region.release(self.offset, self.len);
    }
}
</example>
</workflow>
Verify with:
- proptest for FIFO ordering, no-loss, and no-duplication invariants.
- loom for atomic-ordering verification.
- cargo +nightly miri test to catch undefined behavior in the unsafe code.

| Metric | Target |
|---|---|
| Shared memory latency | < 10 us |
| Ring buffer throughput | > 1M msg/sec |
| Zero-copy overhead | < 1 us per transfer |
Measure with criterion and record baselines before optimizing.
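Criterion is the right tool for recorded baselines; as a zeroth step, the quantity being targeted is just operations per second over wall time. `measure_throughput` below is a hypothetical helper, not a criterion API:

```rust
use std::time::Instant;

// Hypothetical quick probe of ops/sec; criterion replaces this with
// proper warmup, outlier rejection, and statistical reporting.
fn measure_throughput(iters: u64, mut op: impl FnMut()) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    iters as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let msgs_per_sec = measure_throughput(1_000_000, || {
        // Stand-in for a ring-buffer push/pop; black_box defeats
        // constant folding of the measured operation.
        std::hint::black_box(42u64);
    });
    println!("{msgs_per_sec:.0} msg/sec");
}
```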
Prefer rustix over raw libc for cleaner POSIX bindings.