Harness for Claude Code — skills, /harness:* slash commands, persona subagents, lifecycle hooks, and MCP tools without per-repo `harness setup`. Sibling plugins exist for Cursor, Gemini CLI, and Codex.
`npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude`

Add a component to an existing harness project
Interactive architecture advisor that surfaces trade-offs and helps humans choose
Autonomous phase execution loop — chains planning, execution, verification, and review, pausing only at human decision points
Structured ideation and exploration with harness methodology
Execute a planned set of tasks with harness validation and state tracking
Detect and auto-fix dead code including dead exports, commented-out code, and orphaned dependencies
Multi-phase code review pipeline with mechanical checks, graph-scoped context, and parallel review agents
Orchestrate dead code removal and architecture violation fixes with shared convergence loop
5-phase post-mortem capture. Writes a structured solution doc at docs/solutions/{track}/{category}/{slug}.md with frontmatter, overlap detection, and a per-category lock for concurrency safety.
Systematic debugging with harness validation and state tracking
Analyze structural health of the codebase using graph metrics
Detect documentation that has drifted from code
Orchestrator composing 4 documentation skills into a sequential pipeline with convergence-based remediation and qualitative health reporting
Validate architectural layer boundaries, detect violations, and auto-fix import ordering and forbidden import replacement
Identify structural risk hotspots via co-change and churn analysis
Graph-based impact analysis — answers "if I change X, what breaks?"
Scaffold a new harness-compliant project, including design system and roadmap configuration
Scaffold or migrate a test-suite project (API, E2E/UI, or shared library) with test-suite-specific layer models, tags, reporter stack, and custom report
Verify system wiring, materialize knowledge artifacts, and update project metadata after execution
Unified integrity gate — chains verify (quick gate) with AI review into a single report
4-phase knowledge extraction, reconciliation, drift detection, and remediation with convergence loop
Onboard a new developer to a harness-managed project
Performance enforcement and benchmark management
Structured project planning with harness constraints and validation
First-run pulse interview. Converts intent into a validated pulse config with SMART pushback, read-write-DB rejection, STRATEGY.md seeding. Phase 3 ships the interview; the run path is deferred to Phase 4.
Safe refactoring with validation before and after changes
Audit npm release readiness, run maintenance checks, offer auto-fixes, track progress across sessions
AI-assisted selection of the next highest-impact roadmap item with scoring, assignment, and skill transition
Create and manage a unified project roadmap from existing specs and plans
Lightweight mechanical security scan for health checks
Create and maintain harness skills following the rich skill format
Deep soundness analysis of specs and plans with auto-fix and convergence loop
6-factor dependency risk evaluation for supply chain security
Test-driven development integrated with harness validation
Graph-based test selection — answers "what tests should I run?"
Comprehensive harness verification of project health and compliance
Binary pass/fail quick gate — runs test, lint, typecheck commands and returns structured result
Validate architectural constraints and dependency rules. Use when checking layer boundaries, detecting circular dependencies, or verifying import direction compliance.
Perform code review and address review findings using harness methodology. Use when reviewing code, fixing review findings, responding to review feedback, or when a code review has produced issues that need to be addressed.
Proactively identifies structural problems, coupling risks, and architectural drift
Detect and fix codebase entropy including drift, dead code, and pattern violations. Use when running cleanup, detecting dead code, or fixing pattern violations.
Keeps the knowledge graph fresh, monitors connector health, and ensures data quality
Dispatch independent tasks across isolated agents for parallel execution. Use when multiple independent tasks need to run concurrently, splitting work across agents, or coordinating parallel implementation.
Enforces performance budgets and detects regressions
Create detailed implementation plans from specs with task breakdown, dependency ordering, and checkpoint placement. Use when planning a phase, breaking a spec into tasks, or creating an execution plan.
Security-focused code reviewer with OWASP/CWE expertise
Execute implementation plans task-by-task with state tracking, TDD, and verification. Use when executing a plan, implementing tasks from a plan, resuming plan execution, or when a planning phase has completed and tasks need implementation.
Verify implementation completeness against spec and plan at three tiers (EXISTS, SUBSTANTIVE, WIRED). Use when checking if built code matches what was planned, validating phase completion, or auditing implementation quality.
> Apply ARIA roles, states, and properties correctly to enhance assistive technology support for custom widgets
> Ensure sufficient color contrast ratios and never use color as the sole means of conveying information
> Build accessible forms with proper labeling, grouped controls, inline validation, and clear error communication
> Write effective alt text for images and provide text alternatives for all non-text content
> Ensure all interactive elements are reachable and operable via keyboard alone without requiring a mouse
> Build accessible modal dialogs with focus trapping, escape dismissal, background inertness, and screen reader announcements
> Implement animations that respect user motion preferences, avoid seizure triggers, and provide pause controls
> Test web applications with screen readers to verify accessible navigation, announcements, and interaction patterns
> Use semantic HTML elements to convey document structure, meaning, and navigation landmarks to assistive technology
> Automate accessibility testing with axe-core, jest-axe, Playwright, and CI pipeline integration
> Add layers, documentation, components, or skills to an existing harness project with proper integration. Validate against existing constraints, wire into architecture, and verify the result.
> Sync documentation with code after implementation changes. Keep AGENTS.md, API docs, and architecture docs accurate by mapping code changes to their documentation impact.
> Author Angular components with correct inputs/outputs, change detection strategy, and lifecycle hooks
> Create attribute and structural directives with @Directive to add behavior, handle host events, and conditionally render DOM without modifying component templates
> Intercept HTTP requests and responses with HttpInterceptorFn for auth headers, retry logic, loading state, and centralized error handling
> Reduce initial bundle size with loadComponent, loadChildren, preloading strategies, and deferrable views (@defer)
> Optimize Angular rendering with OnPush change detection, trackBy, virtual scrolling, deferrable views, and signals for zoneless-ready apps
> Create custom Angular pipes for pure data transformation and use built-in pipes correctly to keep templates declarative and performant
> Build type-safe reactive forms with FormGroup, FormControl, Validators, and dynamic FormArrays
> Protect and preload routes with functional CanActivateFn, CanDeactivateFn, ResolveFn, and CanMatchFn guards
> Apply RxJS patterns correctly in Angular — switchMap for HTTP, takeUntilDestroyed for cleanup, async pipe for templates, and catchError for resilience
> Use ng generate, configure angular.json defaults, and author custom schematics for consistent code generation across a team
> Design Angular services with the right provider scope, injection tokens, and hierarchical injector strategy
> Manage reactive state with Angular Signals — signal(), computed(), effect(), and toSignal() — for fine-grained, zone-free reactivity
> Build module-free Angular apps with standalone: true, bootstrapApplication, and lazy-loaded standalone routes
> Manage application state with NgRx Store (Redux pattern) or NgRx SignalStore for signal-based state — choose the right tool for the complexity level
> Test Angular components, services, directives, and pipes with TestBed, ComponentFixture, fakeAsync, and service mocks
> API key design is a security contract — entropy requirements prevent brute force, scoping limits blast radius, hashed storage prevents database dumps from becoming credential lists, and rotation strategy determines how quickly a compromised key can be contained.
> API authentication patterns map client type, trust level, and token lifetime to the correct credential scheme — choosing the wrong mechanism (e.g., API keys for user-delegated access) introduces audit gaps, over-privileged tokens, and revocation failures that compromise every API endpoint downstream.
> Backward compatibility is the guarantee that existing clients continue to work without modification after an API change — mastering the additive change rules, Postel's law, and the breaking change taxonomy lets teams evolve APIs rapidly without forcing consumers into lockstep migrations.
> Bulk endpoints amortize per-request overhead across many operations in a single HTTP call — but the real design challenge is what to return when some operations succeed and others fail: transactional semantics roll back everything on any error while best-effort semantics commit successes and report failures individually, and choosing the wrong model produces silent data corruption or wasted retries.
> Conditional requests let clients make HTTP requests contingent on resource state — preventing redundant transfers with 304 Not Modified and enabling optimistic concurrency control with 412 Precondition Failed. Without conditional requests, every GET transfers the full body and every PUT risks overwriting concurrent changes.
> Content negotiation is the HTTP mechanism by which clients and servers agree on the format, language, and encoding of a response — enabling a single endpoint to serve JSON, XML, CSV, or versioned media types without separate URLs. Ignoring content negotiation forces versioning through URLs or query parameters and makes format discovery opaque.
> Consumer-driven contract testing inverts the traditional integration test — each consumer publishes the exact shape it expects, the provider verifies against every consumer's contract in CI, and a breaking change is caught the moment it is introduced into the provider codebase rather than discovered at deployment time when rolling back costs hours.
> Deprecation strategy defines the structured process of retiring API versions and endpoints — using Sunset and Deprecation headers, migration guides, and communication cadence to move consumers forward without surprise outages or broken integrations.
> Error contracts define the machine-readable structure, human-readable message, and actionable remediation for every failure mode — consistent error response design lets clients handle errors programmatically without parsing free text or reverse-engineering failure semantics.
> Field selection lets clients request only the response properties they need — reducing payload size, eliminating over-fetching, and cutting both bandwidth and serialization cost on the server without requiring a separate GraphQL layer or bespoke narrow endpoints.
> Well-designed filter and sort parameters give clients precise control over result sets without requiring bespoke endpoints — but poorly designed query parameters invite injection attacks, produce unindexed table scans, and leak internal schema details that become breaking changes when the data model evolves.
> Hypermedia As The Engine Of Application State (HATEOAS) embeds links to available next actions in every API response. Clients navigate the API by following links rather than constructing URLs — making the API self-describing and decoupling clients from URL structure.
> HTTP caching is a first-class performance mechanism built into the HTTP protocol — correct Cache-Control directives and ETag generation can eliminate redundant network round-trips and origin load by orders of magnitude. Misconfigured caching either serves stale data or prevents caching entirely, wasting infrastructure and latency.
> HTTP methods are the verbs of the web — each method carries defined semantics for safety and idempotency that enable caching, safe retries, and correct client behavior. Choosing the wrong method breaks these contracts and forces clients to guess at side effects.
> Idempotency keys are a safety contract — they allow clients to safely retry failed or ambiguous requests without risk of duplicate side effects, and the difference between at-least-once and exactly-once semantics is entirely determined by whether the server stores and enforces idempotency key uniqueness within the configured TTL window.
> Long-running operations require an explicit async contract — a 202 Accepted response with an operation resource that clients can poll or subscribe to is the difference between a synchronous bottleneck that times out under load and a scalable pattern where servers process work independently of client connection lifetime.
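The server-side half of that contract is a keyed result store with a TTL. A minimal in-memory sketch follows (a production system would use a shared database or cache; `IdempotencyStore` and `charge` are illustrative names):

```python
import time

class IdempotencyStore:
    """Replays the stored result for a repeated key inside the TTL window."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._seen: dict[str, tuple[float, object]] = {}

    def execute(self, key: str, operation):
        now = time.monotonic()
        entry = self._seen.get(key)
        if entry and now - entry[0] < self.ttl:
            return entry[1]              # replay: no second side effect
        result = operation()
        self._seen[key] = (now, result)
        return result

counter = 0
def charge():                            # stand-in for a non-idempotent operation
    global counter
    counter += 1
    return f"charge-{counter}"

store = IdempotencyStore(ttl_seconds=60)
first = store.execute("key-abc", charge)
second = store.execute("key-abc", charge)   # client retry with the same key
```

With the key enforced, the retry returns the original result and the charge runs exactly once.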
> Nested URLs express ownership hierarchies; flat URLs with query parameters express arbitrary membership or filtering. The decision affects URL stability, caching, access control, and client complexity.
> OAuth2 flow selection is determined by client type and deployment environment — Authorization Code + PKCE for user-facing apps, Client Credentials for machine-to-machine, Device Code for browserless clients — using the wrong flow introduces credential exposure vectors that the correct flow architecturally eliminates.
> Contract-first OpenAPI 3.1 design treats the specification as the single source of truth — schemas defined once in components and referenced everywhere, discriminators that make polymorphism explicit, and operation IDs that drive consistent code generation across every client language — so the contract is never an afterthought bolted onto a running server.
> Cursor-based pagination replaces numeric offsets with opaque position tokens — each cursor encodes exactly where the client left off, eliminating page drift when rows are inserted or deleted between requests and enabling consistent traversal of live datasets.
> Keyset pagination navigates large result sets by remembering the last seen row's sort key rather than counting skipped rows — each page query becomes an efficient index seek that performs identically whether you are on page 1 or page 1,000,000, making it the only pagination strategy that scales reliably beyond ten million rows.
> Offset/limit pagination is the simplest pagination model but carries hidden costs — COUNT(*) queries are expensive on large tables, concurrent inserts and deletes cause page drift that duplicates or skips rows, and high offsets force full index scans that make deep pages orders of magnitude slower than page one.
> RFC 9457 Problem Details is the IETF standard for HTTP API error responses — using application/problem+json with type, title, status, detail, and instance fields gives every error a machine-readable URI, a stable human-readable title, and a linkable documentation target without inventing a proprietary error envelope.
> Rate limit headers are the consumer's instrumentation — without X-RateLimit-Remaining and X-RateLimit-Reset, clients cannot implement proactive throttling and are forced into reactive retry loops that amplify load on already-stressed infrastructure precisely when the API needs relief most.
> Rate limit design is a consumer contract — quota tiers, burst allowances, and fair-use policies set expectations that clients depend on for SLA planning, and silent throttling without clear limits forces consumers to guess what behavior is safe, producing fragile integrations that fail unpredictably under load.
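The keyset pattern reduces to one WHERE clause: filter past the last seen sort key instead of skipping counted rows. A minimal sketch with an in-memory SQLite table (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(1, 101)])

def page_after(last_id: int, size: int):
    # Index seek: no OFFSET, so cost is the same on page 1 and page 1,000,000.
    return conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

page1 = page_after(0, 10)
page2 = page_after(page1[-1][0], 10)   # remember the last seen sort key
```

With a composite sort key the predicate becomes a row comparison, e.g. `WHERE (created_at, id) > (?, ?)`, which must match the index column order.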
> Resource granularity determines how much data a single API resource exposes. Fine-grained resources are flexible but chatty; coarse-grained resources reduce round trips but over-fetch. The right granularity matches your clients' actual access patterns.
> REST APIs are organized around resources — nouns that represent things, not verbs that represent actions. Well-modeled resources produce URLs that are predictable, cacheable, and easy to understand without documentation.
> The Richardson Maturity Model grades REST APIs on a four-level scale — from RPC-over-HTTP tunneling (Level 0) to full hypermedia controls (Level 3). Each level adds constraints that improve discoverability, cacheability, and client-server decoupling.
> Retry guidance signals clients when and how to retry failed requests — classifying errors as transient or permanent, emitting Retry-After headers, and requiring idempotency for safe retries prevents both thundering-herd amplification and unnecessary request abandonment under temporary load.
> An ergonomic SDK removes every decision a developer should not have to make — method names that read as sentences, pagination that iterates without manual cursor management, typed exceptions that distinguish retriable errors from permanent ones, and retry logic built in by default — so the developer's first working integration takes minutes and the SDK stays invisible in production.
> HTTP status codes are the response contract between server and client — correct code selection enables error handling, retry logic, and monitoring without parsing response bodies. Misusing status codes forces clients to treat 200 OK as an ambiguous signal that must be inspected for hidden failures.
> Field-level validation error design — returning all validation failures in a single response with JSON Pointer paths and per-field messages eliminates the one-error-at-a-time debugging loop and gives clients enough information to highlight every invalid field without a second request.
> Header versioning negotiates API version through HTTP headers rather than URI paths — it keeps URIs clean and resource-centric while enabling fine-grained behavioral versioning, vendor media types, and Content-Type-level differentiation without proliferating path prefixes.
> URL path versioning embeds the API version directly in the URI (/v1/, /v2/) — it trades clean URI semantics for maximum visibility and cacheability, making it the default choice for public APIs where developer experience and reverse-proxy routing simplicity outweigh strict REST purity.
> Webhooks are a push-based contract — registration, payload schema, delivery guarantees, and retry policies are all consumer-facing commitments that determine whether integrations remain reliable under partial failures, and designing these properties explicitly upfront prevents an ad-hoc system that fails silently at the worst possible moment.
> Webhook security is a receiver-side responsibility — signature verification, timestamp validation, and secret rotation are the three controls that prevent spoofed deliveries, replay attacks, and credential exposure, and omitting any one of them creates an exploitable gap even if the other two are correctly implemented.
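The collect-everything shape looks like this in miniature. A hypothetical `validate_user` accumulates every failure with a JSON Pointer path instead of raising on the first one (field names and messages are illustrative):

```python
def validate_user(payload: dict) -> list[dict]:
    """Collect all validation failures, each addressed by a JSON Pointer path."""
    errors = []
    email = payload.get("email", "")
    if not email or "@" not in email:
        errors.append({"pointer": "/email",
                       "message": "must be a valid email address"})
    age = payload.get("age")
    if not isinstance(age, int) or age < 0:
        errors.append({"pointer": "/age",
                       "message": "must be a non-negative integer"})
    return errors   # empty list means the payload is valid

errs = validate_user({"email": "not-an-email", "age": -3})
```

A client can map each pointer straight onto the matching form field and highlight both problems from one response.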
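Two of those three controls fit in a short sketch: an HMAC over a timestamped payload, verified with a constant-time compare and a staleness window. This follows the common timestamp-prefixed-HMAC scheme; the secret, tolerance, and function names are illustrative.

```python
import hmac, hashlib

TOLERANCE = 300  # seconds; reject stale timestamps to block replayed deliveries

def sign(secret: bytes, timestamp: int, body: bytes) -> str:
    msg = f"{timestamp}.".encode() + body   # bind signature to the timestamp
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret: bytes, timestamp: int, body: bytes,
           signature: str, now: int) -> bool:
    if abs(now - timestamp) > TOLERANCE:
        return False                              # timestamp validation
    expected = sign(secret, timestamp, body)
    return hmac.compare_digest(expected, signature)  # constant-time compare

secret = b"whsec_demo"
ts = 1_700_000_000
sig = sign(secret, ts, b'{"event":"ping"}')
ok = verify(secret, ts, b'{"event":"ping"}', sig, now=ts + 10)
stale = verify(secret, ts, b'{"event":"ping"}', sig, now=ts + 10_000)
```

Secret rotation is handled outside this path: accept signatures from both the current and previous secret during the rotation window.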
> The `.astro` file format — server-only frontmatter, HTML-like templates, and scoped CSS — is the foundation of every Astro project.
> Type-safe, schema-validated content management built into Astro — no CMS required for structured Markdown, MDX, and data files.
> Configure adapters, environment variables, and build output for deploying Astro to Vercel, Node.js, Cloudflare, and Netlify.
> Use `astro:assets` and the `<Image />` component to automatically optimize images at build time — enforcing `alt` text, generating WebP/AVIF, and preventing layout shift.
> Build or consume Astro integrations to extend the build pipeline — add renderers, inject routes, modify Vite config, and hook into build lifecycle events.
> Ship zero JavaScript by default — hydrate only the interactive components that need it, exactly when they need it.
> Run React, Vue, Svelte, Solid, and Preact side-by-side in one Astro project with full framework isolation and shared reactive state via nanostores.
> File-based routing maps your `src/pages/` directory structure to URLs — static, dynamic, and rest-parameter routes all follow the same filesystem convention.
> Build REST API routes, webhooks, and form handlers inside your Astro project using `.ts` endpoint files and the middleware API.
> Control rendering mode at the project level and per-page — combine static pre-rendering with server-side dynamic pages using `output: 'hybrid'` and adapter configuration.
> Animate page navigations with native browser View Transitions API, persist interactive islands across pages, and hook into the transition lifecycle.
> Run all mechanical constraint checks: linter rules, boundary schemas, and forbidden imports. These are automated, enforceable rules — if it can be checked by a machine, it must be.
> Entropy analysis and safe cleanup. Find unused exports, dead files, and pattern violations — then remove them without breaking anything.
> Create performant CSS animations with Tailwind transitions, keyframe utilities, and motion-safe considerations
> Build type-safe component variants with class-variance-authority for consistent, composable styling APIs
> Scope CSS to components with CSS Modules for collision-free class names and co-located styles
> Build reusable styled components with Tailwind, CVA variants, and polymorphic prop patterns
> Implement dark mode with Tailwind's dark variant, CSS custom properties, and user preference detection
> Define and manage design tokens for colors, spacing, typography, and effects in Tailwind CSS
> Style accessible headless components from Radix UI and Headless UI with Tailwind data-attribute selectors
> Build common layouts with Tailwind flexbox and grid utilities for dashboard, marketing, and app shells
> Optimize CSS performance with content-visibility, containment, efficient selectors, and Core Web Vitals-friendly patterns
> Build responsive layouts with Tailwind's mobile-first breakpoints, container queries, and fluid typography
> Resolve Tailwind class conflicts intelligently with tailwind-merge for safe className composition and overrides
> Apply Tailwind CSS utility-first patterns for consistent, maintainable component styling
> The mechanisms that make ACID guarantees real: Write-Ahead Logging ensures atomicity and durability, fsync ensures persistence to physical media, and crash recovery replays the WAL to restore a consistent state.
> ACID guarantees that database transactions are processed reliably: each transaction is all-or-nothing (Atomic), leaves the database in a valid state (Consistent), operates as if no other transactions are running (Isolated), and once committed, persists even through crashes (Durable).
> The simplest hierarchical model where each row stores a reference to its parent, traversed with recursive CTEs for subtree and ancestor queries.
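The subtree query that blurb refers to can be shown end to end with an in-memory SQLite table (schema and names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO nodes VALUES (?, ?, ?)", [
    (1, None, "root"),
    (2, 1, "child-a"),
    (3, 1, "child-b"),
    (4, 2, "grandchild"),
])

# Subtree of node 1: a recursive CTE walks parent references downward.
subtree = conn.execute("""
    WITH RECURSIVE sub(id, name) AS (
        SELECT id, name FROM nodes WHERE id = ?
        UNION ALL
        SELECT n.id, n.name FROM nodes n JOIN sub s ON n.parent_id = s.id
    )
    SELECT id FROM sub ORDER BY id
""", (1,)).fetchall()
```

Flipping the join condition (`s.parent_id = n.id`, anchored at the leaf) walks upward and yields the ancestor chain instead.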
> Recording who changed what, when, and why, using trigger-based or application-level change tracking with immutable append-only logs.
> The default index type in PostgreSQL and MySQL, B-tree indexes support equality and range queries on ordered data with O(log n) lookup performance.
> In a distributed system, when a network partition occurs, you must choose between consistency (every read returns the most recent write) and availability (every non-failing node returns a response) -- you cannot have both simultaneously.
> Storing all ancestor-descendant pairs in a separate table for O(1) subtree and ancestor lookups with manageable write costs.
> Multi-column indexes that accelerate queries filtering on column combinations, governed by the leftmost prefix rule and the ESR (Equality, Sort, Range) column ordering strategy.
> External connection poolers like PgBouncer sit between the application and database, multiplexing many application connections onto fewer database connections to prevent connection exhaustion.
> Tuning max_connections, understanding per-connection memory overhead, and right-sizing database connections for on-premise and serverless environments.
> Indexes that contain all columns needed by a query, enabling index-only scans that skip heap table access entirely.
> Deadlocks occur when two or more transactions hold locks and each waits for a lock the other holds; prevention through consistent lock ordering and detection through timeout-based abort resolves them.
> Intentionally introducing controlled redundancy into a normalized schema to eliminate expensive joins or aggregations, applied only after measured proof of a performance problem.
> Using JSONB columns to store semi-structured data alongside relational tables, with indexing strategies and guidelines for when to embed vs normalize.
> A schema pattern for storing dynamic, user-defined attributes as rows instead of columns -- usually avoided in favor of JSONB or polymorphic alternatives, but occasionally justified for genuinely unbounded attribute sets.
> If no new updates are made, all replicas will eventually converge to the same value -- a consistency model that trades immediate agreement for higher availability and lower latency.
> Add new structure, migrate data, remove old structure -- the three-phase pattern for safe column renames, type changes, and table restructuring.
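The three phases of that pattern, applied to a column rename, look like this in miniature (SQLite in memory; table and column names illustrative — the contract step is shown as a comment because it only runs once all readers have moved over):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada Lovelace')")

# Phase 1 — expand: add the new column alongside the old one.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Phase 2 — migrate: backfill the new column from the old
# (the application dual-writes both columns during this window).
conn.execute("UPDATE users SET display_name = fullname "
             "WHERE display_name IS NULL")

# Phase 3 — contract: once no reader references fullname, drop it:
#   ALTER TABLE users DROP COLUMN fullname

row = conn.execute(
    "SELECT display_name FROM users WHERE id = 1").fetchone()
```

Because each phase is separately deployable, the application keeps working between every pair of steps, which is what makes the rename safe.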
> How to read query execution plans to identify performance bottlenecks, row count misestimations, and missing indexes.
> Indexes on computed expressions and specialized index types (GIN, GiST) for non-scalar data like JSONB, arrays, and full-text search.
> Every column holds a single atomic value, no repeating groups exist, and every row is uniquely identifiable by a primary key.
> Modeling vertices and edges in SQL tables for social graphs, dependency networks, and recommendation systems with recursive queries -- and knowing when SQL stops being practical.
> Optimized for equality-only lookups with O(1) average access time, hash indexes are smaller than B-tree when range queries and ordering are never needed.
> Choosing between adjacency list, nested sets, closure table, and materialized path based on read/write ratio, query patterns, and tree depth.
> Distributing rows of a table across multiple database instances (shards) to scale beyond the capacity of a single server, with careful attention to shard key selection and cross-shard query complexity.
> The four SQL standard isolation levels control which concurrent transaction side-effects are visible, with PostgreSQL implementing them via MVCC snapshots rather than traditional locking.
> Selecting the right isolation level requires matching the workload's correctness requirements against the performance cost and retry complexity of stricter levels.
> Forward-only vs reversible migrations, data backfill safety, and blue-green schema patterns for confident schema evolution.
> MVCC allows readers and writers to operate concurrently without blocking each other by maintaining multiple versions of each row, with visibility determined by transaction snapshots.
> Encoding hierarchy position with left/right boundary numbers for O(1) subtree and ancestor queries at the cost of expensive writes.
> Optimistic locking assumes conflicts are rare, allows concurrent reads without locks, and detects conflicts at write time using version columns or conditional updates.
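The conditional-update variant fits in a few lines: bump the version only when the caller's expected version still matches, and treat zero affected rows as a conflict. A minimal SQLite sketch (names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO docs VALUES (1, 'draft', 1)")

def save(doc_id: int, new_body: str, expected_version: int) -> bool:
    # Succeeds only if no one else bumped the version in the meantime.
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version))
    return cur.rowcount == 1   # 0 rows touched means a concurrent write won

first = save(1, "edit by A", expected_version=1)   # wins, version becomes 2
second = save(1, "edit by B", expected_version=1)  # stale version: conflict
```

The losing writer re-reads the row, merges or reapplies its change, and retries with the fresh version, which is the retry loop optimistic locking trades for lock-free reads.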
> Indexes with a WHERE clause that index only a subset of rows, reducing size and improving performance for targeted query patterns.
> Pessimistic locking acquires locks before modifying data, guaranteeing exclusive access and preventing conflicts at the cost of reduced concurrency.
> Modeling inheritance hierarchies and type-varying relationships in relational databases using single-table inheritance (STI), class-table inheritance (CTI), or shared foreign key patterns.
> Structural query transformations that help the planner choose better execution plans without changing results.
> How the planner uses table statistics (pg_stats, histograms, most-common-values) to estimate row counts and choose execution plans.
> The SQL standard defines three read anomalies (dirty, non-repeatable, phantom) that isolation levels progressively prevent, plus PostgreSQL adds write skew as a fourth anomaly relevant to Serializable.
> Understanding when the planner chooses sequential scan, index scan, bitmap scan, or index-only scan and why each is optimal for different selectivity ranges.
> Every non-key column must depend on the entire composite primary key, not just part of it -- eliminating partial dependencies.
> Splitting a large table into smaller physical partitions by range, list, or hash to improve query performance, simplify maintenance, and enable efficient data lifecycle management.
> Modeling when facts are true (valid-time), when they were recorded (transaction-time), or both (bitemporal), enabling time-travel queries and regulatory audit.
> "Every non-key attribute must provide a fact about the key, the whole key, and nothing but the key." -- Codd's memorable definition of full normalization through 3NF.
> Designing append-heavy tables for metrics, events, and logs with time-based partitioning, retention policies, and efficient aggregation.
> Splitting a wide table into multiple narrower tables, separating hot columns from cold columns, and managing large objects with TOAST to reduce I/O and improve cache efficiency.
> Online schema changes that avoid table locks and keep the application serving traffic throughout the migration.
> Perceived actionability — signifiers, constraints, mappings (Don Norman), flat design's affordance problem, touch targets, hover states as affordance
> Visual order through edge alignment, center alignment, optical alignment, and the invisible structure that consistent alignment creates across a page
> Apple's design philosophy covering clarity/deference/depth, vibrancy and material effects, SF Symbols integration, semantic color system, safe area management, and platform-specific navigation patterns across iOS, iPadOS, macOS, watchOS, and visionOS.
> Composition methodology for building design systems using five distinct levels of abstraction: atoms, molecules, organisms, templates, and pages.
> Visual coherence across every touchpoint — mapping brand attributes to design decisions, voice-to-visual translation, consistency vs. monotony, brand flex zones, and multi-platform coherence
> Color independence — conveying information without relying on color alone, building colorblind-safe palettes, and ensuring perceptual uniformity across all vision types
> Color wheel relationships — complementary, analogous, triadic, split-complementary, tetradic schemes with usage guidance for building cohesive palettes
> Emotional and cultural associations of color — warmth/coolness, trust, urgency, industry conventions, and cultural variance for global product design
> Anatomy of reusable components covering slots, variants, states, sizes, composition vs configuration, compound components, and when to split vs merge.
> Internal vs. external consistency — maintaining coherent patterns within a product, adhering to platform conventions, and knowing when to break consistency deliberately
> Information density as a deliberate design variable — compact, comfortable, and spacious modes, matching density to user context, and the tradeoff between showing more and showing clearly
> Luminance contrast for readability and visual weight — WCAG ratios, contrast as a hierarchy tool, contrast beyond accessibility
> Color adaptation for dark themes — inverted hierarchy, reduced saturation, elevation through lightness, surface layering, and maintaining brand identity in dark contexts
> Data visualization principles — chart type selection, color encoding, annotation strategy, Tufte's data-ink ratio, accessible charts, avoiding chartjunk, and small multiples for comparison
> Evaluating existing design — heuristic evaluation (Nielsen's 10), consistency inventory, accessibility audit, competitive analysis, identifying and quantifying design debt
> Structured feedback — critique frameworks (like/wish/wonder, what/why/improve), separating subjective preference from objective assessment, avoiding "I don't like it"
> Documenting design decisions — design rationale, spec handoff, annotating designs, living documentation, decision logs, the DESIGN.md format
> Living system maintenance covering contribution models, deprecation processes, versioning strategies, adoption metrics, and documentation standards for treating a design system as a product
> Depth as information — shadow anatomy (offset, blur, spread, color), elevation scale, chromatic shadows, material metaphor, dark mode shadows
> Empty and error state design — empty states as onboarding, error states as recovery, 404 pages, zero-data states, degraded states, constructive error messages
> System response design — immediate vs delayed feedback, optimistic updates, progress indicators, confirmation patterns, undo vs confirm, toast/snackbar/banner
> Microsoft's cross-platform design system covering the five foundational elements (light, depth, motion, material, scale), Acrylic and Mica materials, reveal highlight interactions, connected animations, responsive container strategies, and the Fluent 2 token theming architecture
> Combining typefaces — contrast principles, superfamilies, serif+sans pairing rules, pairing by x-height and proportion, limiting to 2-3 families
> Form design beyond labels — progressive disclosure, inline validation timing, smart defaults, forgiving formats, single-column superiority, error recovery
> Pattern completion — the brain fills gaps in incomplete shapes (closure) and follows smooth paths over abrupt changes (continuity), with implications for icons, progress indicators, and visual flow
> Motion grouping — elements that move or change together are perceived as a unit, with implications for animation, loading states, and batch operations
> Depth perception — distinguishing foreground from background, ambiguous figure-ground as a design tool, z-axis ordering, overlay and modal perception
> Spatial grouping — elements near each other are perceived as related, controlling group membership through distance, with common region as a proximity amplifier
> Visual kinship — elements sharing color, size, shape, or texture are perceived as related, creating categories without explicit labels
> Grid theory — column, modular, baseline, and compound grids, gutter rhythm, and breaking the grid intentionally for emphasis
> Designing for internationalization — text expansion, RTL layout, icon cultural sensitivity, date/number/currency formatting, pseudolocalization testing
> Icon design principles — optical sizing, stroke consistency, pixel grid alignment, metaphor clarity, icon families, filled vs outlined states, and icon as a systematic visual language
> Illustration system design — style consistency, spot vs. hero illustrations, illustration as brand voice, abstract vs. representational choices, and illustration tokens for systematic production
> Image in design — art direction, aspect ratios, focal point composition, image treatments (duotone, overlay, blur), placeholder strategy, and the role of image as hero vs. supporting element
> Structuring information — card sorting, tree testing, mental models, labeling systems, organization schemes, findability
> Perceived performance — skeleton screens, progressive loading, optimistic rendering, shimmer effects, content-first loading, perceived vs actual speed
> Google's adaptive design language covering dynamic color extraction from wallpaper, HCT-based tonal palettes, elevation through tonal surface color rather than drop shadows, shape theming with corner families, and choreographed motion with shared-axis transitions
> Small moments that delight — trigger, rules, feedback, loops/modes (Dan Saffer's framework), when micro-interactions aid usability vs decoration
> Purposeful animation — Disney's 12 principles adapted for UI, easing curves, duration guidelines, choreography, motion as feedback vs decoration, reducing motion
> Design system nomenclature for semantic and descriptive names, color naming, size naming, and maintaining consistent vocabulary across design and code
> Wayfinding — navigation models (hub-spoke, hierarchy, flat, content-driven), persistent vs contextual nav, breadcrumbs, information scent
> Building functional palettes — primary/secondary/accent selection, neutral scales, semantic colors, and tint/shade generation for production design systems
> Scroll-driven depth — rate-differential parallax, scroll-triggered reveals, sticky sections, scroll narrative, performance constraints, motion sensitivity
> Optimizing for reading — line length (measure), line height (leading), paragraph spacing, text alignment, hyphenation, and reading patterns (F-pattern, Z-pattern)
> Responsive as a design decision — content-first breakpoints, progressive disclosure, and treating every viewport as a first-class design target
> Type across viewports — fluid typography with CSS clamp(), viewport-relative scaling, minimum readable sizes, and maintaining hierarchy across breakpoints
> UI state inventory — empty, loading, partial, error, success, offline, disabled, read-only, and how each state communicates system status
> Token taxonomy covering primitive, semantic, and component tokens with naming conventions, aliasing chains, theme switching, and the token-to-code pipeline
> Temporal design — enter/exit asymmetry, stagger patterns, easing functions (ease-out for enter, ease-in for exit), duration by element size, interruptibility
> Mathematical type scales — modular, major third, perfect fourth, golden ratio, custom scales, and when each is appropriate
> Establishing reading order through type — size, weight, color, spacing, case, and position as hierarchy signals
> Anatomy of type — x-height, ascenders, descenders, counters, serifs, stroke contrast, optical sizing, and how anatomy affects readability
> Directing attention through size, color, contrast, position, isolation, and motion — the system that tells the eye where to go first, second, and third
> Font loading strategy — performance vs. FOUT/FOIT, variable fonts, subsetting, system font stacks, and font-display options
> Space as a design element — macro vs. micro whitespace, breathing room, density control, and whitespace as a signal of quality and luxury
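Two of the entries above — mathematical type scales and WCAG luminance contrast — are concrete enough to compute. A minimal TypeScript sketch (function names are illustrative, not part of the harness; the luminance and contrast formulas are the standard WCAG 2.x definitions):

```typescript
// Modular type scale: each step multiplies the base size by a fixed ratio.
// ratio 1.25 = "major third", 1.333 = "perfect fourth", 1.618 = golden ratio.
function modularScale(base: number, ratio: number, step: number): number {
  return Math.round(base * Math.pow(ratio, step) * 100) / 100;
}

// WCAG 2.x relative luminance of an sRGB hex color like "#1a2b3c".
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1–21.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```

For reference against the WCAG ratios entry: AA requires at least 4.5:1 for body text and 3:1 for large text; black on white is the maximum, 21:1.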
> Detect documentation that has drifted from code. Find stale docs before they mislead developers and AI agents.
> Filter Drizzle queries with eq(), and(), or(), between(), sql template tag, and custom conditions
> Manage Drizzle schema evolution with drizzle-kit generate/push/migrate and introspect
> Optimize Drizzle queries with prepared statements, db.batch(), explain analysis, and join-based N+1 avoidance
> Compose type-safe SQL with Drizzle's fluent query builder for select, insert, update, and delete
> Execute raw SQL safely in Drizzle with the sql template tag, db.execute(), and placeholder()
> Define Drizzle relations with relations(), one(), many(), references(), and inferred types
> Define Drizzle ORM schemas with pgTable/mysqlTable/sqliteTable, column types, indexes, and constraints
> Execute atomic Drizzle operations with db.transaction(), nested transactions, and rollback semantics
> Integrate Drizzle with Next.js using Neon/Vercel Postgres, edge runtime, and connection pooling
> Validate architectural layer boundaries and detect dependency violations. No code may violate layer constraints — this is a hard gate, not a suggestion.
> Define and evolve event schemas using a schema registry with Avro, Protobuf, or JSON Schema.
> Run event storming workshops to discover domain events, commands, and bounded contexts.
> Handle duplicate message delivery safely using idempotency keys and deduplication stores.
> Produce and consume Kafka messages with partitioning, consumer groups, and offset management.
> Use message queues for reliable async delivery with competing consumers and dead letter queues.
> Reliably publish domain events using the transactional outbox and CDC polling approach.
> Implement publisher-subscriber communication with topic-based routing and fan-out delivery.
> Use Redis pub/sub channels and keyspace notifications for lightweight real-time messaging.
> Coordinate distributed workflows through event chains and compensation events without an orchestrator.
> Stream one-way server events to browsers using Server-Sent Events and EventSource.
> Implement reliable webhook delivery with retry backoff, signature verification, and queuing.
> Implement bidirectional real-time communication using WebSocket protocol and Socket.io.
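The idempotent-consumer entry above reduces to a small invariant: record processed message IDs in a dedup store, and make redelivery a no-op. A minimal sketch, assuming an in-memory `Set` stands in for what would be Redis or a database table with a TTL in production (all names here are illustrative):

```typescript
// Idempotent consumer sketch: a dedup store records message IDs that have
// already been processed, so duplicate deliveries become no-ops.
type Message = { id: string; payload: string };

class IdempotentConsumer {
  private seen = new Set<string>(); // stand-in for Redis / a dedup table
  constructor(private handle: (m: Message) => void) {}

  // Returns true if the message was processed, false if it was a duplicate.
  consume(m: Message): boolean {
    if (this.seen.has(m.id)) return false; // duplicate delivery: skip
    this.handle(m);       // process first...
    this.seen.add(m.id);  // ...then record, so a crash mid-handle still retries
    return true;
  }
}
```

Recording after processing gives at-least-once semantics (the handler must tolerate a retry after a crash); recording before processing flips this to at-most-once.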
> Create families of related objects through factory interfaces without coupling to concrete types.
> Wrap incompatible interfaces to make them work together without modifying source code.
> Separate abstraction from implementation to allow them to vary independently.
> Construct complex objects step-by-step using fluent builders and director classes.
> Pass requests along a handler chain with short-circuit and async chain support.
> Encapsulate operations as command objects to support undo, redo, and command queuing.
> Compose objects into tree structures and treat individual and composite objects uniformly.
> Attach additional behavior to objects at runtime by wrapping them in decorator objects.
> Provide a simplified interface to a complex subsystem to reduce coupling for clients.
> Define a factory interface that subclasses use to decide which object to instantiate.
> Share fine-grained objects to reduce memory usage by separating intrinsic and extrinsic state.
> Traverse collections with Symbol.iterator and generators for lazy, composable sequences.
> Decouple components by routing communication through a central mediator or event bus.
> Capture and restore object state using mementos for undo history and time-travel.
> Eliminate null checks by providing default no-op implementations of interfaces.
> Implement push-based notification between Subject and Observer with typed subscriptions.
> Clone objects using prototype registry and structured clone for deep copy scenarios.
> Control access to an object using virtual, protection, logging, and caching proxy patterns.
> Ensure a class has exactly one instance using module-level singletons and WeakRef patterns.
> Replace conditional logic with state objects that delegate behavior to the current state.
> Encapsulate interchangeable algorithms behind a common interface for runtime selection.
> Define an algorithm skeleton in a base class with abstract steps filled by subclasses.
> Add operations to object structures without modifying them using double dispatch.
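To make one of the behavioral patterns above concrete, here is a minimal Command sketch with an undo history, as the command entry describes — operations reified as objects so an invoker can reverse them in order (class names are illustrative):

```typescript
// Command pattern sketch: each operation carries its own execute/undo pair,
// and the invoker keeps a history stack to reverse them in LIFO order.
interface Command {
  execute(): void;
  undo(): void;
}

class AppendLine implements Command {
  constructor(private doc: string[], private line: string) {}
  execute() { this.doc.push(this.line); }
  undo() { this.doc.pop(); }
}

class Invoker {
  private history: Command[] = [];
  run(cmd: Command) { cmd.execute(); this.history.push(cmd); }
  undo() { this.history.pop()?.undo(); } // no-op when history is empty
}
```

The same shape supports redo (a second stack) and command queuing, since commands are plain values.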
> Configure and run Apollo Server with plugins, context, data sources, and framework integrations
> Implement authentication and authorization in GraphQL with context-based identity, directives, and field-level guards
> Structure GraphQL client code with fragments, cache normalization, and optimistic updates for responsive UIs
> Generate type-safe TypeScript code from GraphQL schemas and operations to eliminate manual type maintenance
> Batch and cache data fetches to eliminate N+1 queries in GraphQL resolvers
> Handle errors in GraphQL APIs with structured error types, result unions, and server-side error formatting
> Compose a unified GraphQL API from independently deployed subgraph services using Apollo Federation
> Implement cursor-based and offset pagination in GraphQL using the Relay connection specification
> Optimize GraphQL API performance with query complexity analysis, caching, persisted queries, and DataLoader
> Implement resolvers with clean separation between data fetching, business logic, and response shaping
> Design expressive, evolvable GraphQL schemas with clear type hierarchies and strong nullability contracts
> Implement real-time data streaming with GraphQL subscriptions over WebSocket connections
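The pagination entry above follows the Relay connection spec's shape: opaque cursors, edges, and `pageInfo`. A simplified in-memory sketch of that shape — a real resolver would translate the cursor into a WHERE clause rather than slice an array, and these helpers are illustrative, not a library API:

```typescript
// Relay-style cursor pagination over an in-memory list, with opaque
// base64 cursors encoding the item's position.
type Edge<T> = { node: T; cursor: string };
type Connection<T> = {
  edges: Edge<T>[];
  pageInfo: { hasNextPage: boolean; endCursor: string | null };
};

const encode = (i: number) => Buffer.from(`cursor:${i}`).toString("base64");
const decode = (c: string) =>
  Number(Buffer.from(c, "base64").toString().split(":")[1]);

function paginate<T>(items: T[], first: number, after?: string): Connection<T> {
  const start = after ? decode(after) + 1 : 0; // resume just past the cursor
  const slice = items.slice(start, start + first);
  const edges = slice.map((node, i) => ({ node, cursor: encode(start + i) }));
  return {
    edges,
    pageInfo: {
      hasNextPage: start + first < items.length,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}
```

Clients page forward by passing the previous page's `endCursor` as `after`; because cursors are opaque, the server is free to change the encoding later.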
> WCAG compliance verification and remediation. Scan components for accessibility violations, evaluate severity against design strictness, generate actionable reports, and apply automated fixes for mechanical issues.
> Advisory guide for REST, GraphQL, and gRPC API design. Produces OpenAPI specs, GraphQL schemas, or proto definitions with versioning strategies and consistency validation.
> Cognitive mode: **advisory-guide**. Ask questions, surface trade-offs, present options. Do NOT execute. The human decides; you inform the decision.
> OAuth2, JWT, RBAC/ABAC, session management, and MFA pattern analysis. Detects authentication and authorization mechanisms, evaluates security posture against OWASP guidelines, and recommends improvements for token lifecycle, permission models, and multi-factor authentication.
> Lightweight orchestrator — dispatches isolated phase-agents, tracks state, chains artifacts between phases. Delegates all planning/execution/verification/review to dedicated persona agents.
> Design exploration to spec to plan. No implementation before design approval. Think first, build second.
> Advisory guide for cache strategies, invalidation patterns, and distributed caching. Detects existing cache usage, analyzes access patterns, designs cache layers with proper invalidation, and validates consistency guarantees.
> Chaos engineering, fault injection, and resilience validation. Systematically introduces failures to verify that systems degrade gracefully, recover automatically, and maintain availability under real-world fault conditions.
> Multi-phase code review pipeline — mechanical checks, graph-scoped context, parallel review agents, cross-agent deduplication, and structured output with technical rigor over social performance.
> Orchestrate dead code removal and architecture violation fixes with a shared convergence loop. Catches cross-concern cascades that individual skills miss.
> SOC2, HIPAA, GDPR compliance checks, audit trails, and regulatory checklists. Scans codebases for compliance-relevant patterns, classifies data by sensitivity, audits implementation against framework-specific controls, and generates gap analysis reports with remediation plans.
> 5-phase post-mortem capture. When a problem is solved, distill it into a structured doc at `docs/solutions/<track>/<category>/<slug>.md` so the next person (or agent) finds the playbook before re-deriving it.
> Dockerfile review, Kubernetes manifest validation, and container optimization. Smaller images, safer containers, correct orchestration.
> Verify ETL/ELT pipeline quality, data contracts, idempotency, and test coverage. Analyzes DAG structure, transformation logic, and data quality checks across dbt, Airflow, Dagster, and Prefect pipelines.
> Meticulous verifier for schema validation, data contracts, and pipeline data quality. Detects validation libraries, audits trust boundaries for unvalidated inputs, enforces runtime validation schemas, and verifies type-runtime alignment.
> Advisory guide for schema design, migrations, ORM patterns, and migration safety. Detects your ORM, analyzes schema health, produces safe migrations, and validates backward compatibility.
> 4-phase systematic debugging with entropy analysis and persistent sessions. Phase 1 before ANY fix. "It's probably X" is not a diagnosis.
> Analyze structural health of the codebase and surface problems before they become incidents.
> CI/CD pipeline analysis, deployment strategy design, and environment management. From commit to production with confidence.
> Token-bound mobile component generation. Scaffold from design tokens and aesthetic intent, implement with React Native, SwiftUI, Flutter, or Compose patterns following platform-specific design rules, and verify every value references the token set with native convention compliance.
> Token-first design management. Discover existing design patterns, define intent through curated palettes and typography, generate W3C DTCG tokens, and validate every color pair for WCAG compliance.
> Token-bound web component generation. Scaffold from design tokens and aesthetic intent, implement with Tailwind/CSS and React/Vue/Svelte patterns, and verify every value references the token set — no hardcoded colors, fonts, or spacing.
> Aesthetic direction workflow. Capture design intent, generate DESIGN.md with anti-patterns and platform notes, review components against aesthetic guidelines, and enforce design constraints at configurable strictness levels.
> Cognitive mode: **diagnostic-investigator**. Classify errors into taxonomy categories and route to deterministic resolution strategies. Evidence first, classification second, action third.
> Orchestrator composing 4 documentation skills into a sequential pipeline with convergence-based remediation, producing a qualitative documentation health report.
> Audit developer experience artifacts — README quality, API documentation coverage, getting-started guides, and example code validation. Produces a structured DX scorecard with specific improvements and scaffolds missing documentation.
> End-to-end browser testing with Playwright, Cypress, or Selenium. Covers page object scaffolding, critical-path test implementation, and systematic flakiness remediation.
> Architectural guide for message queues, event sourcing, CQRS, and saga patterns. Maps event flows, designs topic topologies, validates delivery guarantees, and produces event catalog documentation.
> Execute a plan task by task with atomic commits, checkpoint protocol, and persistent knowledge capture. Stop on blockers. Do not guess.
> Flag lifecycle management, A/B testing infrastructure, and gradual rollout design. Ship features safely with controlled exposure and clean retirement.
> Worktree setup, dependency installation, baseline verification, and branch finishing. Clean isolation for every workstream.
> Identify modules that represent structural risk via co-change and churn analysis.
> Cognitive mode: **advisory-guide**. Inject i18n considerations into brainstorming, planning, and review workflows. Adapt enforcement based on project configuration — gentle prompts when unconfigured, gate-mode validation when enabled.
> Translation lifecycle management. Configure i18n settings, scaffold translation files, extract translatable strings, track coverage, generate pseudo-localization, and retrofit existing projects with internationalization support.
> Internationalization compliance verification. Detect hardcoded strings, missing translations, locale-sensitive formatting, RTL issues, and concatenation anti-patterns across web, mobile, and backend codebases.
> Graph-based impact analysis. Answers: "if I change X, what breaks?"
> Runbook generation, postmortem analysis, and SLO/SLA tracking. Diagnoses incidents by tracing symptoms through services, produces structured postmortems, and maintains error budget accounting.
> Terraform, CloudFormation, and Pulumi analysis. Module structure, state management, drift prevention, and security posture for infrastructure definitions.
> Service boundary testing, API contract verification, and consumer-driven contract validation. Ensures services communicate correctly without requiring full end-to-end infrastructure.
> Verify system wiring, materialize knowledge artifacts, and update project metadata. Integration is a gate, not a discovery phase — it confirms that planned integration tasks completed.
> Unified integrity gate — single invocation chains mechanical verification with AI-powered code review and produces a consolidated pass/fail report.
> Auto-generate always-current knowledge maps from graph topology. Never stale because it's computed, not authored.
> 4-phase knowledge extraction, reconciliation, drift detection, and remediation with convergence loop. Keeps the business knowledge graph current and identifies coverage gaps.
> Stress testing, capacity planning, and performance benchmarking with k6, Artillery, and Gatling. Detects existing load test infrastructure, designs test scenarios for critical paths, executes tests, and analyzes results against defined thresholds.
> Advise on ML pipeline management, experiment tracking hygiene, model serving patterns, and prompt evaluation frameworks. Audits reproducibility, model versioning, and deployment readiness across MLflow, Weights and Biases, SageMaker, and Vertex AI.
> Advise on mobile platform lifecycle management, permission handling, deep linking, push notifications, and app store submission compliance. Covers iOS, Android, React Native, and Flutter with platform-specific best practices.
> Test quality validation through mutation testing. Introduces deliberate code mutations to verify that the test suite catches real bugs, exposing weak assertions, missing edge cases, and dead test code.
> Structured logging, metrics, distributed tracing, and alerting strategy. The three pillars of observability, assessed and designed for production readiness.
> Navigate an existing harness-managed project and generate a structured orientation for new team members. Map the codebase, understand constraints, identify the adoption level, and produce a summary that gets someone productive fast.
> Dispatch independent tasks to concurrent agents, integrate results, and verify no conflicts. Only for truly independent problems.
> Red-Green-Refactor with performance assertions. Every feature gets a correctness test AND a benchmark. No optimization without measurement.
> Performance enforcement and benchmark management. Tier-based gates block commits and merges based on complexity, coupling, and runtime regression severity.
> Implementation planning with atomic tasks, goal-backward must-haves, and complete executable instructions. Every task fits in one context window.
> Lightweight pre-commit quality gate — mechanical checks first, AI review second. Fast feedback before code leaves your machine.
> Generate structured product specifications from feature requests, issues, or descriptions. Produces user stories with EARS acceptance criteria, Given-When-Then scenarios, and PRD documents with traceable requirements.
> Property-based and generative testing with fast-check, hypothesis, and automatic shrinking. Discovers edge cases that example-based tests miss by generating thousands of random inputs and verifying invariants hold for all of them.
> Single-page time-windowed product pulse. **Phase 3 ships the first-run interview only**: it converts vague intent into a concrete `pulse:` block in `harness.config.json`, refuses read-write DB credentials, and seeds from `STRATEGY.md` when present. The actual `harness pulse run` (Phases 2-4 of the runtime) is deferred to spec Phase 4.
> Safe refactoring with constraint verification at every step. Change structure without changing behavior, with harness checks as your safety net.
> Audit, fix, and track your project's path to a publishable release. No release without a passing report.
> Circuit breakers, rate limiting, bulkheads, retry patterns, and fault tolerance analysis. Detects missing resilience patterns, evaluates failure modes, and recommends concrete configurations for production-grade fault tolerance.
> AI-assisted selection of the next highest-impact unblocked roadmap item. Scores candidates, recommends one, assigns it, and transitions to the appropriate next skill.
> Create and manage a unified project roadmap from existing specs and plans. Interactive, human-confirmed, always valid.
> Natural language entry point to all harness skills. Classifies intent by scope/domain, confirms routing with reasoning, dispatches to the appropriate skill.
> Secret detection, credential hygiene, and vault integration. Find exposed secrets, classify risk, and enforce externalization before they reach production.
> Deep security audit combining mechanical scanning with AI-powered vulnerability analysis. OWASP baseline + stack-adaptive rules + optional threat modeling.
> Lightweight mechanical security scan. Fast triage, not deep review.
> Create and extend harness skills following the rich skill format. Define purpose, choose type, write skill.yaml and SKILL.md with all required sections, validate, and test.
> Deep soundness analysis of specs and plans. Auto-fixes inferrable issues, surfaces design decisions to you. Runs automatically before sign-off.
> Adversarial review of SQL queries for performance anti-patterns, missing indexes, N+1 queries, and unsafe operations. Analyzes raw SQL, ORM-generated queries, and migration scripts to produce optimization recommendations with estimated impact.
> Manage persistent state across agent sessions so that context, decisions, progress, and learnings survive context resets. Load state at session start, track position and decisions throughout, and save state for the next session.
> 6-factor dependency risk evaluation adapted from Trail of Bits security skill patterns. Surfaces dependency risk flags for human review — not automated verdicts.
> Red-Green-Refactor cycle integrated with harness validation. No production code exists without a failing test first.
> Graph-based test selection. Answers: "I changed these files — what tests should I run?"
> Test factories, fixtures, database seeding, and test data isolation. Establishes patterns for creating realistic, composable test data without coupling tests to specific database states.
> Audit microcopy, error messages, and UI strings for voice/tone consistency, clarity, and actionability. Produces a structured report with specific rewrites and a project voice guide when none exists.
> 3-level evidence-based verification. No completion claims without fresh evidence. "Should work" is not evidence.
> Binary pass/fail quick gate. Runs test, lint, typecheck — returns structured result. No judgment calls, no deep analysis. Pass or fail.
> Screenshot comparison, visual diff detection, and baseline management. Catches unintended CSS regressions, layout shifts, and rendering inconsistencies before they reach production.
> Scaffold a new harness-compliant project, migrate an existing project to the next adoption level, or bootstrap an existing project that just got the harness marketplace plugin installed (no `harness setup`). Assess current state, scaffold or migrate, configure, validate, instrument (baselines / telemetry / Tier-0 integrations), and finalize.
> Scaffold or migrate a test-suite project — API, E2E/UI, or shared test library. Owns test-suite-specific archetype selection, shared-library vs in-repo scaffolding decision, layer variants, tag taxonomy, reporter stack, and custom report. Cross-cutting concerns (adoption level, personas, AGENTS.md base template, i18n, knowledge graph, roadmap nudge, final commit) delegate to `initialize-harness-project`.
> Create families of related objects without specifying their concrete classes
> Convert the interface of a class into another interface that clients expect
> Decouple abstraction from implementation so both can vary independently
> Pass a request along a chain of handlers until one handles it
> Encapsulate operations as objects to support undo, queue, and logging
> Compose objects into tree structures and treat individual objects and composites uniformly
> Use constructor functions or ES6 classes to create and initialize objects
> Extend object behavior dynamically without modifying its source
> Load ES modules on demand with import() to reduce initial bundle size and enable code splitting
> Provide a simplified interface to a complex subsystem
> Create objects via a factory function without exposing instantiation logic to callers
> Share common state across many fine-grained objects to reduce memory usage
> Traverse a collection sequentially without exposing its internal structure
> Route component interactions through a central mediator to reduce direct coupling
> Add reusable behaviors to classes without deep inheritance chains
> Encapsulate private state and expose a public API using closures or ES modules
> Notify subscribers automatically when an observable object's state changes
> Share properties and methods across instances via the prototype chain
> Make shared data available to multiple consumers without prop-drilling
> Intercept and control object property access with ES6 Proxy
> Define all logic privately and selectively reveal only the public API
> Ensure a class has only one instance and provide a global access point
> Allow an object to alter its behavior when its internal state changes
> Use static import declarations to load ES modules at parse time for tree-shaking and static analysis
> Define a family of algorithms and make them interchangeable without altering the client
> Define the skeleton of an algorithm in a base class and let subclasses override specific steps
> Add new operations to object structures without modifying the objects
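Several of the JavaScript pattern entries above — module, revealing module, closures as private state — share one mechanism. A minimal sketch (the factory name is illustrative):

```typescript
// Module / revealing-module sketch: a closure holds private state, and the
// returned object reveals only the chosen public API.
function createCounter() {
  let count = 0; // private — reachable only through the API below
  const increment = () => ++count;
  const current = () => count;
  return { increment, current }; // reveal only these two functions
}
```

Usage: `const counter = createCounter(); counter.increment();` — there is no way to reach `count` directly, which is the whole point of the pattern.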
> Route, aggregate, and secure client requests through an API gateway or BFF pattern.
> Isolate failures with bulkheads using thread pools and semaphores to protect shared resources.
> Prevent cascading failures with circuit breaker, half-open state, and fallback logic.
> Centralize configuration, feature flags, and secrets management across services.
> Separate read and write models to optimize query and command performance independently.
> Design service boundaries using bounded contexts, DDD, and functional cohesion principles.
> Propagate trace context and emit spans across services using OpenTelemetry.
> Store state as an immutable sequence of events with projections, snapshots, and replay.
> Implement /health and /ready endpoints for liveness and readiness probes in containers.
> Guarantee at-least-once event delivery using a transactional outbox and polling publisher.
> Coordinate distributed transactions using choreography and orchestration sagas with compensation.
> Implement service registration and dynamic discovery with health checks in microservices.
> Inject cross-cutting concerns like observability and security via a sidecar proxy.
> Migrate monoliths incrementally using the strangler fig pattern with facade routing.
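The circuit breaker entry above names three states — closed, open, half-open. A minimal synchronous sketch of that state machine (real breakers wrap async calls and add metrics; the clock is injected here so the cooldown is testable, and all names are illustrative):

```typescript
// Circuit breaker sketch: closed → open after N consecutive failures,
// half-open after a cooldown, closed again on a successful probe.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(
    private threshold = 3,
    private cooldownMs = 1000,
    private now = () => Date.now(), // injectable clock
  ) {}

  exec<T>(fn: () => T): T {
    if (this.failures >= this.threshold) {
      if (this.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: failing fast"); // shed load
      }
      // cooldown elapsed: half-open, let one probe call through
    }
    try {
      const result = fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = this.now();
      throw err;
    }
  }
}
```

Fallback logic is a thin layer on top: catch the fail-fast error and return a cached or degraded response instead of propagating it.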
> Create fluid 60fps animations with React Native Reanimated using shared values, worklets, and layout animations
> Deploy React Native apps with EAS Build, EAS Submit, OTA updates, and automated CI/CD pipelines
> Set up and configure Expo projects with managed workflow, EAS Build, development builds, and config plugins
> Build performant scrollable lists with FlatList, SectionList, and FlashList for large data sets
> Implement touch gestures with React Native Gesture Handler for swipe, pan, pinch, and long press interactions
> Bridge native platform APIs into React Native with Expo Modules API and Turbo Modules
> Implement stack, tab, and drawer navigation in React Native with type-safe routing and deep linking
> Handle network requests, offline support, caching, and connectivity monitoring in React Native
> Optimize React Native app performance with profiling, memoization, lazy loading, and native thread management
> Implement push notifications with Expo Notifications, Firebase Cloud Messaging, and Apple Push Notification Service
> Persist data on mobile with AsyncStorage, SecureStore, MMKV, and SQLite for different use cases
> Test React Native apps with Jest, React Native Testing Library, and Detox for unit, integration, and E2E coverage
> Manage environment config with ConfigModule.forRoot, ConfigService, and Joi schema validation
> Define HTTP route handlers with @Controller, method decorators, params, and versioning
> Master the NestJS DI container with tokens and useClass/useValue/useFactory providers
> Validate request payloads with class-validator, class-transformer, and DTO patterns
> Build event-driven systems with EventEmitter2, CQRS module, CommandBus, and QueryBus
> Handle errors globally with @Catch, ExceptionFilter, and custom exception hierarchies
> Build GraphQL APIs with GraphQLModule, @Resolver, @Query/@Mutation, @ObjectType, and DataLoader
> Protect routes with @UseGuards, CanActivate, JWT guards, and role-based access control
> Transform responses and add cross-cutting behavior with NestInterceptor and CallHandler
> Connect services with ClientsModule, @MessagePattern, @EventPattern, and TCP/Redis transport
> Apply NestMiddleware and functional middleware with consumer.forRoutes binding
> Organize NestJS applications with cohesive feature modules, controlled exports, and composable dynamic configurations
> Validate and transform request data with PipeTransform, ValidationPipe, and custom pipes
> Encapsulate business logic in @Injectable services with repository pattern separation
> Document APIs with @ApiProperty, @ApiOperation, @ApiTags, and DocumentBuilder
> Test NestJS apps with Test.createTestingModule, jest mocks, supertest e2e, and overrideProvider
> Structure Next.js 13+ applications using the App Router's file-system conventions for layouts, nested routes, and route segments
> Implement authentication with session handling, middleware guards, and Auth.js integration
> Control Next.js's four cache layers to balance freshness, performance, and cost
> Fetch server-side data efficiently without waterfalls using async Server Components
> Reduce bundle size, split code strategically, and optimize runtime performance for production
> Manage environment variables, runtime config, and server-only module boundaries safely
> Handle runtime errors and missing routes gracefully with error.tsx and not-found.tsx
> Serve optimized, correctly sized images automatically with next/image
> Define SEO metadata, Open Graph tags, and dynamic OG images with the Metadata API
> Run code at the edge before a request completes — redirect, rewrite, or modify responses
> Configure Next.js inside a monorepo with shared packages and Turborepo task orchestration
> Render multiple pages simultaneously in one layout and intercept routes for modal patterns
> Create HTTP endpoints in the App Router using route.ts with typed method exports
> Mutate server-side data directly from components using async functions — no API route required
> Keep data fetching and heavy logic on the server; push interactivity to the client only where needed
> Pre-render pages at build time or on a schedule using SSG, generateStaticParams, and ISR
> Stream server-rendered HTML progressively using Suspense boundaries and loading.tsx
> Test App Router components, Server Actions, and Route Handlers with Jest, Vitest, and MSW
> Handle binary data, encodings, and conversions with Node.js Buffer and TextEncoder
> Spawn and manage child processes with exec, spawn, fork, and IPC communication
> Implement hashing, HMAC, signing, encryption, and key derivation with Node.js crypto
> Manage environment configuration with process.env, dotenv, and validation for 12-factor apps
> Handle uncaught exceptions, promise rejections, and errors across async Node.js code
> Write Node.js ES modules correctly using import.meta.url, package.json type, and CJS interop
> Use Node.js EventEmitter for typed pub-sub communication with memory leak prevention
> Structure Express applications with middleware chains, routers, and proper error handling
> Build performant APIs with Fastify using schema validation, plugins, decorators, and hooks
> Build low-level HTTP servers with Node.js http module and middleware pattern
> Perform file system operations correctly using fs.promises, path utilities, and file watching
> Profile Node.js applications using --prof, clinic.js, memory snapshots, and event loop lag
> Process large data efficiently using Node.js Readable, Writable, and Transform streams
> Test Node.js APIs and modules using supertest, nock, and test containers
> Offload CPU-intensive work to worker threads using MessageChannel and shared buffers
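As a concrete instance of the Node.js `crypto` items above, here is a small HMAC-SHA256 sign/verify pair using only built-in APIs. The function names `sign` and `verify` are illustrative, not part of any library.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// HMAC-SHA256 message authentication with Node's built-in crypto module.
function sign(secret: string, message: string): string {
  return createHmac("sha256", secret).update(message).digest("hex");
}

// Verify in constant time so the comparison does not leak how many
// bytes of the signature matched.
function verify(secret: string, message: string, signature: string): boolean {
  const expected = Buffer.from(sign(secret, message), "hex");
  const given = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```

`timingSafeEqual` matters here: a naive `===` on hex strings short-circuits on the first differing character, which an attacker can measure.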
> Use composables, components, and utilities in Nuxt without writing any import statements
> Fetch data and coordinate async state across server and client using Nuxt's built-in composables
> Target Node.js servers, edge runtimes, static hosting, or hybrid modes using Nitro presets and route rules
> Structure applications with file-based routing, named layouts, and per-page configuration via definePageMeta
> Protect routes, redirect unauthenticated users, and transform navigation using Nuxt's layered middleware system
> Extend Nuxt at build time with defineNuxtModule — add components, imports, plugins, and server routes programmatically
> Extend the Nuxt application instance, register global services, and provide values to composables using defineNuxtPlugin
> Set page titles, Open Graph tags, canonical URLs, and structured data with useSeoMeta and useHead
> Build fully-typed server API endpoints using Nitro's event-handler model and H3 utilities
> Share reactive state across components with SSR-safe useState and Pinia store hydration
> Test Nuxt components, composables, and pages using @nuxt/test-utils with full Nuxt context including auto-imports
> Propagate trace context across service boundaries using W3C TraceContext headers and baggage
> Add custom spans, attributes, and events to business-critical code paths beyond auto-instrumentation
> Track and correlate errors across services with span exceptions, status codes, and error events
> Configure OTLP exporters to send traces, metrics, and logs to observability backends and collectors
> Correlate structured logs with distributed traces using OpenTelemetry context for unified observability
> Record application metrics with OpenTelemetry counters, histograms, and gauges for monitoring and alerting
> Integrate OpenTelemetry with NestJS using interceptors, decorators, and module-based configuration
> Add OpenTelemetry tracing to Next.js with the instrumentation hook for Server Components, API routes, and middleware
> Identify performance bottlenecks using trace analysis, histogram metrics, and span timing patterns
> Control trace volume and costs with head sampling, tail sampling, and priority-based strategies
> Initialize the OpenTelemetry Node.js SDK with resource attributes, exporters, and auto-instrumentation
> Instrument distributed traces with OpenTelemetry spans to visualize request flow across services
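Trace-context propagation reduces to reading and writing one header. The sketch below parses and builds a version-00 W3C `traceparent` header without the OpenTelemetry SDK; `extract` and `inject` are illustrative names mirroring the propagator vocabulary.

```typescript
// W3C TraceContext `traceparent`, version 00:
//   version-traceId-spanId-flags, all lowercase hex.

interface SpanContext {
  traceId: string; // 16 bytes, hex
  spanId: string;  // 8 bytes, hex
  sampled: boolean;
}

const TRACEPARENT = /^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/;

function extract(header: string): SpanContext | null {
  const m = TRACEPARENT.exec(header.trim());
  if (!m) return null;
  const [, traceId, spanId, flags] = m;
  // All-zero trace or span ids are invalid per the spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(spanId)) return null;
  return { traceId, spanId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}

function inject(ctx: SpanContext): string {
  return `00-${ctx.traceId}-${ctx.spanId}-${ctx.sampled ? "01" : "00"}`;
}
```

In practice the SDK's propagator does this for you on every outgoing request; seeing the format makes it obvious why a missing header breaks the trace in two.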
> Implement authentication that resists credential stuffing, session hijacking, and token theft
> Apply cryptographic best practices for password hashing, data encryption, digital signing, and key management
> Prevent cross-site request forgery by validating request origin and requiring unpredictable tokens for state-changing operations
> Manage third-party dependency risks with auditing, lockfiles, automated scanning, and supply chain hardening
> Secure file upload endpoints against malicious files, path traversal, resource exhaustion, and code execution
> Enforce object-level authorization so users can only access resources they own or are permitted to access
> Eliminate SQL, NoSQL, and command injection by never concatenating untrusted input into queries or commands
> Implement security logging and monitoring to detect attacks, support incident response, and maintain audit trails
> Protect APIs with rate limiting, throttling, and abuse prevention to mitigate brute force, scraping, and denial of service
> Keep credentials out of code, logs, and version control by using environment variables, secrets managers, and strict access controls
> Configure HTTP security headers to protect against XSS, clickjacking, MIME sniffing, and information leakage
> Block script injection by encoding output, sanitizing HTML, and enforcing Content Security Policy
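Several of the items above (CSRF tokens, secure randomness) share one mechanical core: generate an unpredictable value with a CSPRNG and compare it in constant time. A minimal sketch, with hypothetical function names — where the token is stored and transported (cookie vs hidden form field) is left to the application:

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// 32 bytes from the CSPRNG, URL-safe encoded: unpredictable by construction.
function issueCsrfToken(): string {
  return randomBytes(32).toString("base64url");
}

// Constant-time comparison of the session's token against the submitted one.
function isValidCsrfToken(sessionToken: string, submitted: string): boolean {
  const a = Buffer.from(sessionToken);
  const b = Buffer.from(submitted);
  return a.length === b.length && timingSafeEqual(a, b);
}
```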
> Master HTTP browser caching — Cache-Control directives, ETag and Last-Modified validation, immutable assets with content-hashed filenames, stale-while-revalidate patterns, and cache partitioning in modern browsers for optimal repeat-visit performance.
> Master bundle analysis techniques — visualization tools for chunk composition, size budget enforcement in CI, dependency cost evaluation, source map exploration, and continuous size tracking to prevent bundle bloat.
> Solve the hardest problem in computer science — cache invalidation strategies including TTL-based expiry, event-driven invalidation, versioned cache keys, cache stampede prevention with locking and probabilistic early expiration, and fan-out invalidation for denormalized data.
> Master CDN-specific caching mechanics — cache key composition, Vary header impact, Surrogate-Control directives, tag-based instant purging, edge TTL management with s-maxage, and edge-side includes for granular partial caching.
> Design and optimize Content Delivery Network architecture — tiered caching, origin shielding, edge compute patterns, cache hit ratio optimization, multi-CDN strategies, and geographic routing for globally distributed applications.
> Master client-side rendering performance — SPA rendering optimization, reducing unnecessary re-renders, skeleton screen patterns, progressive rendering strategies, virtual DOM efficiency, React performance profiling, and concurrent rendering features for responsive user interfaces.
> Master code splitting strategies — route-based splitting, component-based splitting, vendor chunk optimization, and dynamic imports to reduce initial bundle size and improve Time to Interactive across single-page applications and server-rendered frameworks.
> Master content compression for web delivery — Brotli, gzip, and Zstandard algorithms, compression level selection, content-encoding negotiation, static pre-compression vs dynamic compression, and format-specific optimization strategies.
> Understand and minimize the cumulative cost of DNS resolution, TCP handshake, and TLS negotiation — the invisible overhead that adds 100-500ms to every new connection before a single byte of application data transfers.
> Master database connection pooling — pool sizing formulas, connection lifecycle overhead, PgBouncer transaction-mode pooling, serverless connection management, pool monitoring and diagnostics, and configuration for PostgreSQL, MySQL, and managed database services.
> Understand and optimize the browser's 5-stage pixel pipeline — Parse HTML to DOM, Parse CSS to CSSOM, Render Tree construction, Layout, Paint and Composite — to minimize time-to-first-paint and eliminate render-blocking bottlenecks.
> Measure and prevent unexpected layout shifts — elements visually moving after being rendered — by reserving space for dynamic content, handling font loading, setting explicit dimensions, and understanding the CLS scoring formula.
> Understand the HTML5 parsing algorithm — tokenization, tree construction, speculative parsing, and the preload scanner — to minimize parser-blocking delays and accelerate DOM construction.
> Master edge rendering — deploying server-side rendering to edge locations for minimal latency, understanding edge runtime constraints, regional deployment strategies, edge middleware patterns, data locality considerations, and platform-specific optimization for Cloudflare Workers, Vercel Edge, and Deno Deploy.
> Understand the browser and Node.js event loop processing model — task queues, microtask queue, rendering steps, and task prioritization — to write code that cooperates with the rendering pipeline instead of blocking it.
> Master web font loading optimization — font-display strategies for FOIT/FOUT control, Unicode range subsetting, variable fonts for multi-weight reduction, WOFF2 compression, preloading critical fonts, self-hosting versus CDN delivery, and the Font Loading API for programmatic control.
> Understand V8's generational garbage collector — young generation Scavenge, old generation Mark-Sweep-Compact, incremental and concurrent marking — to minimize GC pauses and reduce allocation pressure in performance-critical code.
> Master heap snapshot analysis — Summary, Comparison, Containment, and Dominator views — to precisely identify what objects consume memory, why they are retained, and how to reclaim leaked memory using the 3-snapshot technique and allocation tracking.
> Master HTTP/2's binary framing layer, stream multiplexing over a single TCP connection, stream priorities, HPACK header compression, server push mechanics, and head-of-line blocking mitigation to eliminate redundant connections and accelerate page delivery.
> Understand HTTP/3's QUIC transport layer — 0-RTT connection establishment, UDP-based stream multiplexing without head-of-line blocking, connection migration across network changes, and built-in TLS 1.3 encryption for faster, more resilient web delivery.
> Master modern image format selection — WebP, AVIF, and JPEG XL encoding characteristics, quality-to-size trade-offs, automated conversion pipelines, content negotiation with the picture element, and format-specific optimization for photographic, illustrative, and UI image types.
> Master database indexing — B-tree index mechanics and column ordering, composite indexes for multi-column queries, partial indexes for filtered subsets, covering indexes for index-only scans, GIN indexes for JSONB and full-text search, GiST for geometric and range data, and index maintenance strategies.
> Measure and optimize INP — the worst-case interaction latency across the entire page session — by decomposing each interaction into input delay, processing time, and presentation delay, then targeting each phase with yielding, scheduling, and rendering strategies.
> Measure and optimize LCP — the time until the largest visible content element renders — by decomposing it into 4 sub-parts (TTFB, resource load delay, resource load time, element render delay) and targeting each with specific strategies.
> Understand what triggers layout computation, how forced synchronous layouts and layout thrashing destroy frame budgets, and how to use containment and batching strategies to keep layout under 4ms per frame.
> Master media lazy loading strategies — native browser lazy loading for images and iframes, video poster optimization, low-quality image placeholders (LQIP), BlurHash encoding, progressive rendering techniques, and prioritization of above-the-fold media.
> Master lazy loading strategies — Intersection Observer-based visibility triggers, route-based lazy loading, component-level deferral, progressive hydration, and virtual scrolling to minimize initial payload and prioritize above-the-fold content.
> Detect, break up, and eliminate long tasks (>50ms on the main thread) using time-slicing, scheduler APIs, Web Workers, and cooperative yielding to keep the UI responsive and meet the 50ms responsiveness budget.
> Identify, diagnose, and prevent the 5 classic memory leak patterns in JavaScript — detached DOM trees, forgotten event listeners, closures over large scopes, forgotten timers, and global variable accumulation — using WeakRef, WeakMap, and systematic heap analysis.
> Master module federation for micro-frontend architectures — runtime module sharing across independently deployed applications, shared dependency negotiation, version compatibility strategies, and performance optimization for federated systems.
> Master N+1 query elimination — detecting N+1 patterns in ORMs and GraphQL resolvers, eager loading strategies, DataLoader batching, query count monitoring, and ORM-specific solutions for Prisma, Drizzle, Sequelize, and TypeORM.
> Understand the browser's paint and compositing pipeline — how content is rasterized into layers, which properties trigger expensive repaints, how GPU compositing enables 60fps animations, and how to manage layer promotion without exhausting GPU memory.
> Master the browser Performance API — PerformanceObserver, Navigation Timing, Resource Timing, User Timing, Server Timing, and Element Timing — to build custom performance measurement, monitoring, and alerting into any web application.
> Apply a systematic, measurement-first profiling workflow — define metric, establish baseline, identify bottleneck, implement fix, verify improvement with statistical significance — to avoid wasted optimization effort and ensure every change demonstrably improves performance.
> Master database query optimization — EXPLAIN and EXPLAIN ANALYZE for query plan analysis, identifying sequential scans and inefficient joins, rewriting queries for index utilization, understanding the query optimizer's cost model, and diagnosing slow queries in PostgreSQL and MySQL.
> Use preload, prefetch, preconnect, dns-prefetch, modulepreload, fetchpriority, and 103 Early Hints to inform the browser about resources needed soon — eliminating discovery latency and accelerating critical resource delivery.
> Master responsive image delivery — srcset for resolution switching, sizes for viewport-aware selection, the picture element for art direction, automated responsive image generation, and strategies to serve the smallest image that looks sharp on every device.
> Design and implement server-side caching strategies — cache-aside, read-through, write-through, and write-behind patterns with Redis and Memcached, multi-tier caching architectures, serialization optimization, and distributed cache consistency.
> Master server-side rendering performance — SSR versus CSR trade-off analysis, hydration cost and mitigation, streaming SSR with React 18, selective hydration for interactive islands, React Server Components, and SSR caching strategies for optimal TTFB and TTI.
> Master Service Worker caching — lifecycle management (install, activate, fetch), caching strategies (cache-first, network-first, stale-while-revalidate), offline support, precaching critical assets, runtime caching with Workbox, background sync for offline writes, and cache versioning for safe updates.
> Master static site generation — build-time rendering for instant page loads, incremental static regeneration for dynamic content, on-demand revalidation, hybrid rendering strategies mixing static and dynamic pages, and framework-specific patterns for Next.js, Astro, and Gatsby.
> Master streaming rendering — React Suspense-based streaming SSR, chunked transfer encoding, out-of-order HTML delivery, shell-first rendering, progressive page assembly, and error handling for streamed content to achieve the fastest possible Time to First Byte and progressive content delivery.
> Understand CSS selector matching, style invalidation, and recalculation costs — how browsers resolve computed styles for every visible element, why some selectors are orders of magnitude more expensive than others, and how to keep style recalculation under 4ms per frame.
> Master SVG optimization — automated minification with SVGO, inline versus external delivery trade-offs, SVG sprite sheet systems, accessibility patterns, rendering performance for complex vector graphics, and icon system architecture.
> Master tree shaking and dead code elimination — ESM static analysis requirements, sideEffects configuration, barrel file pitfalls, library authoring for tree-shakability, and debugging why unused code survives bundling.
> Master Web Workers for off-main-thread computation — dedicated workers for CPU-intensive tasks, Comlink for ergonomic worker communication, SharedArrayBuffer for zero-copy data sharing, worker pooling for throughput, and integration patterns with React and bundlers.
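The TTL-based expiry named in the cache-invalidation item above is the simplest of those strategies. A minimal sketch with an injectable clock so the expiry behavior is deterministic and testable (`TtlCache` is an illustrative name, not a library):

```typescript
// TTL cache with lazy eviction: an entry expires ttlMs after it is set,
// and is removed the first time it is read after expiry.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  // Injecting the clock makes expiry testable without real waiting.
  constructor(private now: () => number = Date.now) {}

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

The other strategies in that item layer on top of this: event-driven invalidation deletes keys eagerly, versioned keys sidestep deletion entirely, and stampede prevention guards the miss path so only one caller recomputes.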
> Query data with Prisma Client findUnique/findMany, create/update/delete, upsert, select, include
> Filter and sort Prisma queries with where, AND/OR/NOT, orderBy, and cursor/offset pagination
> Manage database schema evolution with prisma migrate dev/deploy/reset and migration history
> Optimize Prisma queries with select, findUnique index hits, batching, and avoiding N+1
> Execute type-safe raw SQL with $queryRaw, $executeRaw, and Prisma.sql template tag
> Model one-to-one, one-to-many, many-to-many, and self-relations with @relation in Prisma
> Design Prisma schemas with datasource, generator, models, field types, and field attributes
> Seed databases idempotently with prisma/seed.ts, --seed flag, and environment branching
> Implement soft deletes in Prisma with middleware or $extends query extensions and deletedAt pattern
> Execute atomic operations with Prisma $transaction, interactive transactions, and nested writes
> Use generated Prisma types like XxxCreateInput, XxxWhereInput, $Enums, and validator utilities
> Modern React patterns for 2025-2026, including React 19, the React Compiler, and AI-integrated UI
> Render React entirely in the browser for highly interactive single-page applications
> Build multi-part components that share state implicitly via context
> Build responsive UIs using React 18 concurrent features and transitions
> Separate data-fetching containers from stateless presentational components
> Share state across the component tree without prop drilling using React Context
> Load modules on demand to reduce initial bundle size and improve startup performance
> Extend component behavior by wrapping in a higher-order component
> Reuse stateful logic across components via custom hooks
> Hydrate only interactive UI islands, leaving static content as HTML
> Prevent expensive re-renders and recomputations with React memoization APIs
> Delay hydration of below-fold or non-critical components to improve TTI
> Make data available to any component in the tree without prop drilling
> Share stateful logic by passing a render function as a prop
> Run components on the server to eliminate client JavaScript and enable direct data access
> Pre-render React components on the server for improved SEO and initial load performance
> Choose the right state management approach for your React application scale
> Bundle all dependencies at build time for predictable loading performance
> Declaratively handle async loading states with React Suspense boundaries
> Normalize entity collections with createEntityAdapter for O(1) lookups and pre-built CRUD reducers
> React to dispatched actions and state changes with createListenerMiddleware for structured side effects
> Persist and rehydrate Redux state across browser sessions with redux-persist or manual localStorage strategies
> Apply optimistic and pessimistic cache updates with onQueryStarted for instant UI feedback with automatic rollback
> Define query and mutation endpoints with cache tag invalidation, response transformation, and auto-generated hooks
> Configure RTK Query with createApi and fetchBaseQuery for automatic caching, deduplication, and loading state management
> Derive and memoize computed state with createSelector to avoid redundant calculations and unnecessary re-renders
> Organize Redux state into self-contained slices using createSlice for co-located reducers, actions, and selectors
> Configure the Redux store with configureStore, typed hooks, middleware, and Provider wiring
> Test Redux slices, thunks, selectors, and connected components with focused, maintainable test strategies
> Handle async operations with createAsyncThunk for structured pending/fulfilled/rejected lifecycle management
> Type Redux state, actions, thunks, and hooks with full inference and minimal manual annotation
> Isolate failures by partitioning resources so one failing component cannot exhaust capacity for others
> Validate resilience by injecting controlled failures to verify that fallbacks, retries, and circuit breakers work under real conditions
> Protect services from cascading failures by stopping requests to unhealthy dependencies until they recover
> Handle permanently failing messages with dead letter queues for safe inspection, alerting, and reprocessing
> Provide degraded but functional responses when primary operations fail, ensuring users always get a result
> Implement health check endpoints for service readiness, liveness, and dependency monitoring
> Ensure safe retries by making operations produce the same result regardless of how many times they execute
> Control request throughput with token bucket, sliding window, and fixed window algorithms to protect services from overload
> Handle transient failures with configurable retry strategies, exponential backoff, and jitter
> Prevent resource exhaustion and hung requests with timeouts, AbortController, and deadline propagation
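Of the three rate-limiting algorithms named above, the token bucket is the most commonly implemented. A minimal sketch with an injected clock (the class and parameter names are illustrative): the bucket refills continuously at a fixed rate and a request succeeds only if a whole token is available.

```typescript
// Token bucket: capacity bounds bursts, refillPerSecond bounds sustained rate.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    private now: () => number = Date.now // injectable for deterministic tests
  ) {
    this.tokens = capacity;
    this.lastRefill = now();
  }

  tryRemove(): boolean {
    const t = this.now();
    const elapsedSeconds = (t - this.lastRefill) / 1000;
    // Refill continuously, never above capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request admitted
    }
    return false; // over budget: caller should reject with 429 or queue
  }
}
```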
> Evaluate access decisions using attributes of the subject, resource, action, and environment -- eliminating role explosion by expressing authorization as policy rules over contextual data
> Public-key cryptography for key exchange, digital signatures, and identity verification
> Model multi-step adversary strategies as goal-oriented tree decompositions -- revealing which attack paths are cheapest and which defenses yield the highest leverage
> Log the who, what, when, where, and outcome of every security-relevant event in a structured, tamper-evident audit trail
> Login, registration, password reset, magic links, and SSO -- each flow has distinct attack surfaces and each must be hardened independently
> Replace ambient authority ("who are you?") with explicit capabilities ("what token do you hold?") -- eliminating confused deputy attacks by making every permission a transferable, revocable, unforgeable object
> X.509 certificates are the backbone of internet trust -- manage them correctly or accept outages from silent expiry and trust failures from misissuance
> Run SAST, DAST, SCA, and secrets scanning on every commit -- automated security gates that catch vulnerabilities before they ship
> Sign every artifact you produce and verify every artifact you consume -- because an unsigned artifact is an unverifiable one
> Regulatory frameworks mandate specific logging requirements -- SOC2, GDPR, HIPAA, and PCI DSS each dictate what must be logged, how long it is retained, and who may read it
> Argon2id for new systems, bcrypt for broad compatibility -- always salt, consider peppering, tune cost parameters to hardware, and plan hash upgrade paths
> Every session token, encryption key, nonce, and CSRF token depends on unpredictable randomness -- use a CSPRNG or accept that attackers will predict your secrets
> Your application is 90% third-party code -- scan it for known vulnerabilities, lock it with pinned versions, and keep it patched
> Deserialization reconstructs objects from byte streams -- and in most languages, that reconstruction can be coerced into running attacker-controlled code
> Environment variables are visible in process listings, inherited by child processes, and prone to leaking into logs -- treat them as a weak boundary for secrets
> Digital forensics is the discipline of collecting, preserving, and analyzing evidence from compromised systems without contaminating it
> One-way functions for integrity verification, content addressing, and commitment schemes
> HMAC proves a message was created by someone with the shared secret; digital signatures prove it was created by a specific private key holder -- choose based on whether you need symmetric verification or non-repudiation
> Tell the browser "never connect to this domain over HTTP, ever" -- and make it permanent
> Authentication at login is necessary but insufficient -- continuously evaluate identity, device posture, and behavior for the lifetime of the session
> The first 60 minutes of a security incident determine whether the organization loses days or months -- rehearse the response before you need it
> Every injection vulnerability has the same root cause: untrusted data is interpreted as code -- keep data and instructions separate at every boundary
> Correlate events across multiple log sources to detect attacks that are invisible in any single log
> Memory corruption vulnerabilities account for 70% of critical CVEs in C/C++ codebases -- understand overflows, use-after-free, and the mitigations that contain them
> Something you know, something you have, something you are -- combining authentication factors so that compromising one factor alone is insufficient to gain access
> Isolate every workload behind its own perimeter -- so compromising the web server does not hand the attacker the database
> Both sides prove their identity with certificates -- the server authenticates to the client and the client authenticates to the server
> Hire skilled attackers to find the vulnerabilities your automated tools and internal reviews miss
> Organizations that conduct blameless post-incident reviews after every significant security event learn faster than those that assign blame
> When security depends on the order of operations but the system does not enforce that order, attackers win the race -- find and close TOCTOU windows
> Assign permissions to roles, assign roles to users -- simple, auditable, and sufficient for most applications when combined with resource-level checks
> Model authorization as a graph of relationships -- "User X is an editor of Document Y which belongs to Folder Z owned by Team W" -- enabling inherited permissions that follow resource hierarchies
> Know exactly what is in your software (SBOM) and prove how it was built (provenance) -- because you cannot secure a supply chain you cannot see
> Secrets are born (generated), distributed (delivered to consumers), rotated (replaced on schedule or on compromise), and eventually revoked -- manage the entire lifecycle deliberately
> Scale security knowledge across the engineering organization by embedding trained security champions within each product team
> Session tokens are bearer credentials -- generate with CSPRNG, bind to client context, enforce idle and absolute timeouts, and regenerate on privilege changes
> Find security flaws in the design document, not in the penetration test report -- because a flaw caught at design time costs a meeting, not a rewrite
> AES-256-GCM for most use cases, ChaCha20-Poly1305 when hardware AES is unavailable -- always use authenticated encryption, and never reuse a nonce
> End-to-end threat modeling from system decomposition through threat enumeration, risk rating, and mitigation tracking -- the operational backbone of proactive security design
> Systematic threat identification using the six STRIDE categories -- Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
> TLS 1.3 with ECDHE key exchange, AES-256-GCM or ChaCha20-Poly1305 ciphers, and valid certificates from a trusted CA -- with legacy protocol versions and weak cipher suites disabled
> Every security control exists because data crosses from a trusted zone to a less-trusted one -- identify the boundaries first, then concentrate defenses there
> Centralize secrets in a vault, issue dynamic short-lived credentials, encrypt data in transit and at rest, and audit every access
> A vulnerability without a disclosure process is a vulnerability that gets sold to exploit brokers instead of reported to you
> No implicit trust based on network position, VPN status, or previous authentication -- every request is authenticated, authorized, and encrypted regardless of origin
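The password-hashing guidance above names Argon2id and bcrypt; neither ships with Node, so as a hedged stand-in the sketch below uses the stdlib's scrypt, which follows the same salted, tunable-cost, constant-time-verify discipline. Function names are illustrative.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Salted scrypt hash, stored as "saltHex:hashHex". Cost parameters
// (N, r, p) can be tuned via scryptSync's options argument; defaults shown.
function hashPassword(password: string): string {
  const salt = randomBytes(16); // unique salt per password
  const hash = scryptSync(password, salt, 32);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 32);
  // Constant-time comparison of equal-length digests.
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```

The per-password salt is what makes the third assertion below hold: hashing the same password twice must yield different stored values, defeating precomputed rainbow tables.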
> Manage shared state with React Context and useReducer for prop-drilling avoidance and scoped state
> Build bottom-up atomic state with Jotai for granular, composable React state management
> Select and derive state efficiently to minimize component re-renders across any state management library
> Separate server state from client state and synchronize them with TanStack Query and local stores
> Debug Zustand stores with Redux DevTools integration for time-travel debugging and action inspection
> Write mutable-style state updates in Zustand stores with the Immer middleware for cleaner nested mutations
> Persist Zustand store to localStorage or custom storage with automatic rehydration and migration support
> Optimize Zustand re-renders with selectors, shallow comparison, useShallow, and transient subscriptions
> Split large Zustand stores into composable slice functions for modular, maintainable state management
> Create lightweight global stores with Zustand's create function for minimal-boilerplate state management
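The core of Zustand's `create` is small enough to sketch. This is a simplified re-implementation of the idea for illustration, not Zustand's API: state and actions live in one object, `set` merges partial updates (or an updater function) and notifies subscribers.

```typescript
type Listener = () => void;

// Minimal store: init receives `set` and returns the initial state,
// which may include action functions that close over `set`.
function createStore<T extends object>(
  init: (set: (partial: Partial<T> | ((s: T) => Partial<T>)) => void) => T
) {
  let state!: T;
  const listeners = new Set<Listener>();
  const set = (partial: Partial<T> | ((s: T) => Partial<T>)) => {
    const next = typeof partial === "function" ? (partial as (s: T) => Partial<T>)(state) : partial;
    state = { ...state, ...next }; // shallow merge, like Zustand's set
    listeners.forEach((l) => l());
  };
  state = init(set);
  return {
    getState: () => state,
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe
    },
  };
}
```

Usage mirrors the real library: define state and actions together, then read with `getState` and react via `subscribe` (the React hook layer is just a subscription plus a selector).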
> Deploy SvelteKit to any platform by selecting and configuring the correct adapter in svelte.config.js
> Build flexible components in Svelte 5 using snippets, {@render}, typed children props, and named content areas
> Handle 404s, auth failures, and unexpected crashes in SvelteKit with +error.svelte, the error() helper, and handleError hooks
> Process HTML form submissions server-side using SvelteKit actions with progressive enhancement via use:enhance
> Fetch route data before rendering using SvelteKit's load functions — server-only, universal, streaming, and invalidation patterns
> Minimize bundle size, reduce perceived latency, and handle large datasets efficiently in SvelteKit applications
> Build SvelteKit routes using the file-system convention with +page.svelte, +layout.svelte, route groups, and dynamic segments
> Declare reactive state, derived values, and side effects in Svelte 5 using the runes API ($state, $derived, $effect, $props, $bindable)
> Intercept every request, populate locals, modify responses, and handle errors using SvelteKit's hooks.server.ts
> Choose the right state scope in SvelteKit: component-local runes, context API for subtree isolation, and module-level state for true singletons
> Share reactive state across any component tree using writable, readable, and derived stores with the Svelte store contract
> Test Svelte 5 components with Vitest and @testing-library/svelte — render, user events, store mocking, and async tick flushing
> Add enter/exit animations, list reordering motion, and spring physics to Svelte elements using built-in and custom transitions
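The Svelte store contract mentioned above is small enough to hand-roll: a store is any object whose `subscribe` synchronously calls the subscriber with the current value and returns an unsubscribe function. The sketch below is illustrative only; `svelte/store`'s real `writable` and `derived` differ in detail.

```typescript
// Hand-rolled sketch of the Svelte store contract: subscribe(run) must emit
// the current value synchronously and return an unsubscriber. Illustrative
// only -- svelte/store's real implementation is lazier and richer.
type Subscriber<T> = (value: T) => void;
type Unsubscriber = () => void;

interface Readable<T> {
  subscribe(run: Subscriber<T>): Unsubscriber;
}

function writable<T>(value: T) {
  const subs = new Set<Subscriber<T>>();
  return {
    subscribe(run: Subscriber<T>): Unsubscriber {
      run(value); // contract: synchronously emit the current value
      subs.add(run);
      return () => { subs.delete(run); };
    },
    set(next: T) {
      value = next;
      subs.forEach((run) => run(value));
    },
    update(fn: (v: T) => T) {
      this.set(fn(value));
    },
  };
}

// derived: re-run the mapping whenever the source store changes
function derived<S, T>(source: Readable<S>, fn: (v: S) => T): Readable<T> {
  return {
    subscribe(run: Subscriber<T>): Unsubscriber {
      return source.subscribe((v) => run(fn(v)));
    },
  };
}
```

Anything satisfying this contract can be consumed with Svelte's `$store` auto-subscription syntax, which is why custom stores compose so cleanly.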
> Directly read, write, and remove cache entries with QueryClient methods
> Chain queries that depend on each other's results using the enabled flag and useQueries
> Inspect cache state, query status, and network activity using the React Query DevTools panel
> Implement cursor-based pagination and "load more" UX with useInfiniteQuery
> Execute server-side mutations with useMutation, lifecycle callbacks, and retry configuration
> Update the UI immediately on mutation and roll back automatically if the server request fails
> Hydrate the client cache from server-fetched data using dehydrate/hydrate and HydrationBoundary
> Control cache freshness with invalidateQueries, staleTime, gcTime, and refetch strategies
> Structure query keys as type-safe factories to enable precise cache invalidation and scoped refetching
> Use useSuspenseQuery to integrate React's Suspense and error boundaries with TanStack Query
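The optimistic-update-with-rollback entry above follows a pattern worth seeing in isolation. This dependency-free sketch mimics what TanStack Query's `onMutate`/`onError`/`onSuccess` callbacks do around a cache: snapshot, apply the optimistic value immediately, then confirm or roll back. The `cache` map and `optimisticMutate` helper are illustrative stand-ins, not the library API.

```typescript
// Sketch of the optimistic-update pattern behind useMutation's onMutate /
// onError callbacks: snapshot the cached value, write the optimistic value
// immediately, and restore the snapshot if the request fails.
const cache = new Map<string, unknown>();

async function optimisticMutate<T>(
  key: string,
  optimistic: T,
  request: () => Promise<T>,
): Promise<T> {
  const snapshot = cache.get(key);  // onMutate: remember the previous value
  cache.set(key, optimistic);       // UI reads the new value right away
  try {
    const confirmed = await request();
    cache.set(key, confirmed);      // onSuccess: replace with server truth
    return confirmed;
  } catch (err) {
    cache.set(key, snapshot);       // onError: roll back to the snapshot
    throw err;
  }
}
```

In the real library the snapshot comes from `queryClient.getQueryData` and the rollback runs in `onError`, usually followed by `invalidateQueries` in `onSettled`.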
> Automate WCAG accessibility checks using axe-core with Playwright and jest-axe
> Test React components with Testing Library using user-centric queries and async utilities
> Test Svelte components with Testing Library using render, fireEvent, and waitFor
> Verify service compatibility using Pact consumer-provider contract tests
> Configure and interpret test coverage thresholds for meaningful quality signals
> Choose the right test layer (unit/integration/E2E) and prevent flaky tests in CI
> Build maintainable test data using factory functions, builders, and faker.js
> Write integration tests that exercise real dependencies using test databases and containers
> Mock modules, functions, and timers in Vitest and Jest to isolate units under test
> Intercept HTTP requests in tests using Mock Service Worker handlers at the network level
> Measure and assert on code performance using vitest bench and timing budgets
> Write maintainable Playwright tests using page objects, fixtures, and parallel execution
> Configure Playwright test runner with fixtures, reporters, and browser contexts
> Generate exhaustive test cases automatically using fast-check property-based testing
> Use snapshot testing selectively for stable outputs, knowing when to avoid it
> Drive design through tests using red-green-refactor cycle and test-first discipline
> Write focused, isolated unit tests using AAA pattern with describe/it/expect
> Configure Vitest with workspaces, environments, coverage, and TypeScript integration
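The factory-function entry above deserves a concrete shape. This sketch uses static defaults in place of faker.js calls so it stays dependency-free; the `User` type and `buildUser` helper are hypothetical examples, and the commented test shows the AAA pattern from the unit-testing entry.

```typescript
// Factory-with-overrides pattern for test data: every call yields a fresh,
// valid object, and a test overrides only the fields it asserts on.
// (faker.js calls are replaced with static defaults to keep this self-contained.)
interface User {
  id: number;
  name: string;
  email: string;
  role: "admin" | "member";
}

let nextId = 1;

function buildUser(overrides: Partial<User> = {}): User {
  return {
    id: nextId++,                  // unique per call, so tests don't collide
    name: "Test User",
    email: "user@example.com",
    role: "member",
    ...overrides,                  // the test's intent, stated explicitly
  };
}

// Arrange-Act-Assert usage inside a hypothetical describe/it block:
// it("promotes a member to admin", () => {
//   const user = buildUser();                    // Arrange
//   const promoted = { ...user, role: "admin" }; // Act
//   expect(promoted.role).toBe("admin");         // Assert
// });
```

The payoff is that adding a required field to `User` later means editing one factory, not every test file.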
> Inject database clients, sessions, and request data into every procedure via `createTRPCContext`
> Throw typed TRPCErrors in procedures and format them consistently for client consumption
> Define type-safe inputs and outputs with Zod schemas for end-to-end type inference
> Add cross-cutting logic (auth checks, logging, rate limiting) to procedures via t.middleware
> Integrate tRPC with Next.js App Router using the fetch adapter, server-side callers, and React Server Components
> End-to-end type-safe data fetching with `api.xxx.useQuery`, `useMutation`, and cache invalidation via TanStack Query
> Organize type-safe RPC procedures into nested routers that merge into a single appRouter
> Stream real-time events to clients over WebSocket using tRPC subscriptions and observables
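The middleware entry above composes cross-cutting logic around procedure resolvers. The sketch below shows the underlying idea with a hand-rolled `procedure` helper and a `requireAuth` middleware; these names are illustrative, not the real tRPC API, where the equivalent is `t.middleware` plus a protected procedure throwing `TRPCError`.

```typescript
// Dependency-free sketch of middleware composition: each middleware gets the
// context and a next() continuation, and may reject or enrich the context
// before the resolver runs. Not the real tRPC API.
interface Ctx {
  userId?: string;
}

type Middleware = (ctx: Ctx, next: (ctx: Ctx) => unknown) => unknown;

const requireAuth: Middleware = (ctx, next) => {
  if (!ctx.userId) throw new Error("UNAUTHORIZED"); // akin to a TRPCError
  return next(ctx);
};

// Compose middlewares around a resolver, outermost first.
function procedure<R>(middlewares: Middleware[], resolver: (ctx: Ctx) => R) {
  return (ctx: Ctx): R =>
    middlewares.reduceRight<(c: Ctx) => unknown>(
      (next, mw) => (c) => mw(c, next),
      (c) => resolver(c),
    )(ctx) as R;
}

const whoami = procedure([requireAuth], (ctx) => `user:${ctx.userId}`);
```

Stacking `[requireAuth, logTiming, rateLimit]` in that array is the same mental model as chaining `.use()` calls on a tRPC procedure builder.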
> Type async/await, Promise chains, and concurrent patterns correctly in TypeScript
> Prevent mixing semantically distinct primitives using branded opaque types
> Use abstract classes, private fields, access modifiers, and implements vs extends correctly
> Use conditional types, infer, and distributive logic to derive types programmatically
> Configure tsconfig with extends, project references, composite builds, and incremental compilation
> Extend existing types, modules, and namespaces via declaration merging and augmentation
> Implement class, method, and property decorators with reflect-metadata in TypeScript
> Model mutually exclusive states with discriminated unions and exhaustive narrowing
> Model and type errors explicitly using Result types, discriminated unions, and typed throws
> Write reusable, type-safe functions and interfaces using TypeScript generics
> Transform object types by iterating over their keys with mapped type syntax
> Organize TypeScript code with ES modules, barrel exports, path aliases, and declaration files
> Reduce TypeScript compilation time and type complexity with targeted optimizations
> Validate objects against a type without widening using the satisfies keyword
> Enable and satisfy strict TypeScript checks including strictNullChecks and exactOptionalPropertyTypes
> Construct precise string types using template literal syntax and string manipulation types
> Test TypeScript types at compile time using expect-type, tsd, and vitest type matchers
> Narrow union types safely using type guards, assertion functions, and control flow
> Apply built-in TypeScript utility types to transform and compose types without redundancy
> Use Zod schemas as the single source of truth for runtime validation and TypeScript types
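Several of the TypeScript entries above meet in one idiom: a discriminated union with exhaustive narrowing. In this example (names are illustrative) the `status` literal is the discriminant, and assigning to `never` in the default branch turns a forgotten variant into a compile error.

```typescript
// Discriminated union: the `status` literal tells the compiler which fields
// exist in each branch; the `never` assignment makes the switch exhaustive.
type RequestState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; message: string };

function describeState(state: RequestState): string {
  switch (state.status) {
    case "idle":
      return "waiting";
    case "loading":
      return "spinner";
    case "success":
      return state.data;                   // narrowed: data exists only here
    case "error":
      return `failed: ${state.message}`;   // narrowed: message exists only here
    default: {
      const unreachable: never = state;    // compile error if a case is missed
      return unreachable;
    }
  }
}
```

Adding a fifth variant to `RequestState` now fails to compile until `describeState` handles it, which is the "model mutually exclusive states" guarantee in practice.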
> Active voice in UI writing — active vs passive voice, when passive is acceptable, verb-first patterns for buttons and actions
> Button and CTA copy — verb-noun pattern, specificity over vagueness, context-sensitive labels, and writing buttons that tell users exactly what will happen
> Confirmation dialogs — destructive action writing, consequence clarity, and specific button labels that make irreversibility unmistakable
> Content hierarchy in UI — heading structure, progressive disclosure in text, inverted pyramid for interface writing
> Data table copy — column headers, empty cells, truncation patterns, filter and sort labels, bulk action copy
> Destructive action copy — irreversibility warnings, undo availability, double-confirmation patterns, cooldown messaging
> Empty states — first-use, user-cleared, and no-results patterns that motivate action, set expectations, and turn blank screens into onramps
> Error messages — what went wrong, why it matters, how to fix it, the three-part error pattern for clear, actionable error communication
> Error severity communication — calibrating error tone to severity, from field validation to system failure to data loss
> Form labels and helper text — label clarity, placeholder anti-patterns, required-field indication, and writing forms that users complete without confusion
> Inclusive language in UI — gender-neutral, ability-neutral, culture-aware writing, avoiding idioms that exclude
> Writing for internationalization — source strings that survive translation, concatenation traps, pluralization, date and number references
> Loading state copy — progress transparency, expectation setting, and writing text that reduces perceived wait time and keeps users from abandoning the task

> Microcopy principles — clarity, brevity, human voice, active voice, and the core rules all UI text follows
> Navigation label writing — menu item naming, breadcrumb clarity, tab labels, and sidebar organization that users scan without reading
> Notification and alert copy — urgency calibration, actionability, toast vs banner vs modal selection, and writing messages that inform without overwhelming
> Onboarding copy — progressive disclosure, value-first framing, reducing anxiety, and welcome flows that convert sign-ups into active users
> Permission and access copy — role-based messaging, upgrade prompts, gating copy, "you don't have access" patterns
> Plain language for UI — reading level targeting, jargon elimination, sentence structure for scanning
> Search copy — placeholder text, zero-results messaging, autocomplete hints, search scope indicators, saved search patterns
> Settings and preferences copy — toggle descriptions, preference explanations, consequence previews, settings organization
> Success feedback copy — confirmation messages, celebration calibration, and next-step prompts that close the action loop and guide users forward
> Tooltip and contextual help writing — when to use tooltips, what to put in them, and progressive disclosure patterns that educate without interrupting
> Voice and tone in UI writing — defining voice (constant) vs tone (contextual), formality calibration, and emotional register
> Writing for scanning — F-pattern, front-loading keywords, chunking, bullet vs prose decisions for UI text
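The pluralization trap from the internationalization entry above has a standard fix: never concatenate "item(s)", keep one complete string per plural category and pick with the built-in `Intl.PluralRules` API (the `cartMessage` helper and its strings are illustrative).

```typescript
// One full translatable string per plural category, selected with the
// standard Intl.PluralRules API; some languages need up to six categories,
// which concatenated "item(s)" strings can never express.
const categories: Record<string, string> = {
  one: "You have 1 item in your cart",
  other: "You have {count} items in your cart",
};

const plural = new Intl.PluralRules("en");

function cartMessage(count: number): string {
  const form = categories[plural.select(count)] ?? categories.other;
  return form.replace("{count}", String(count));
}
```

Because each category is a complete sentence, translators can reorder words and add categories without touching code.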
> Validate AGENTS.md quality and evolve it as the codebase changes. Good context engineering means AI agents always have accurate, current knowledge about the project.
> Load Vue components lazily to reduce initial bundle size using defineAsyncComponent
> Communicate from child to parent components using emits and defineEmits
> Extract and reuse stateful logic across components using Vue composables
> Create custom Vue directives for low-level DOM manipulation and reusable DOM behavior
> Manage shared application state with Pinia stores in the Options or Setup style
> Share data across a component tree without prop-drilling using provide/inject
> Create and manage reactive primitive values and objects using ref and reactive
> Extract behavior into components that render nothing, delegating all rendering to the consumer via slots
> Use named, scoped, and dynamic slots to build flexible, composable component APIs
> Render a component's HTML at a different location in the DOM using Vue's Teleport
> React to data changes with watch and watchEffect for side effects and async operations
> Spawn and manage child actors for independent, concurrent state machines that communicate via message passing
> Control transition eligibility with guards and execute side effects with entry, exit, and transition actions
> Remember and restore previous state configurations with shallow and deep history pseudo-states
> Invoke promises, callbacks, observables, and child machines as services tied to state node lifecycles
> Define statecharts with createMachine for explicit states, transitions, context, and events
> Model concurrent, independent state regions that are active simultaneously within a single machine
> Connect XState machines to React components with useMachine, useActor, and useSelector hooks
> Test XState machines with direct state transition assertions and model-based testing for path coverage
> Generate full type safety for XState machines with typegen (v4) and the setup pattern (v5)
> Visualize and inspect XState machines at design time and runtime with Stately Studio, Inspector, and VS Code extension
> Validate arrays, tuples, records, maps, and sets with Zod's collection primitives
> Run async Zod validation with parseAsync, safeParseAsync, async refinements, and external checks
> Handle Zod validation failures with safeParse, ZodError, error.format, error.flatten, and custom error maps
> Derive TypeScript types from Zod schemas with z.infer, input vs output types, and ZodTypeAny
> Validate Next.js server actions, API routes, and form data with Zod schemas
> Shape and compose Zod objects with pick, omit, partial, required, extend, merge, strict, and passthrough
> Define runtime-validated TypeScript schemas with z.object, primitives, enums, literals, and schema composition
> Validate and transform strings with Zod's min, max, email, url, regex, trim, and custom error messages
> Transform and validate data with Zod's transform, refine, superRefine, and preprocess APIs
> Model variant types with z.union, z.discriminatedUnion, z.intersection, and type narrowing
Executes bash commands — hook triggers when the Bash tool is used
Modifies files — hook triggers on file write and edit operations
Uses power tools — uses the Bash, Write, or Edit tools