By ririnto
Agent-first repository design with progressive disclosure, architecture enforcement, and entropy management.
npx claudepluginhub ririnto/sinon --plugin harness-engineering

Use this agent when a repository needs mechanical architecture enforcement, layer-dependency auditing, structural-test validation, or taste-invariant checks. Examples:

<example>
Context: A change may have crossed forbidden layer boundaries
user: "Check whether any imports violate the layer model after this refactor"
assistant: "I'll use the architecture-guard agent to run a mechanical architecture audit and report concrete violations."
<commentary>
Dependency-direction enforcement is a core trigger for this agent.
</commentary>
</example>

<example>
Context: Structural enforcement needs verification in CI or local review
user: "Run the structural tests and show which domains are non-compliant"
assistant: "I'll use the architecture-guard agent to execute the structural checks and summarize the failing rules with file evidence."
<commentary>
Structural-test validation is part of the ordinary path.
</commentary>
</example>

<example>
Context: Golden-principle drift needs a deterministic audit
user: "Scan for unstructured logging, naming drift, and oversized files"
assistant: "I'll use the architecture-guard agent to check the mechanical rules and report each violation with remediation guidance."
<commentary>
Taste invariants belong here when they are enforced as rules rather than subjective review notes.
</commentary>
</example>
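The layer-dependency audit described above can be made fully mechanical. A minimal sketch, assuming a hypothetical three-layer model (`domain` → `application` → `interface`) and Python-style imports; the actual harness-engineering layer names and file layout are not specified here:

```python
import re
from pathlib import Path

# Hypothetical layer order: a layer may only import from layers at or below
# its own rank, so an import pointing "upward" is a violation.
LAYERS = ["domain", "application", "interface"]
RANK = {name: i for i, name in enumerate(LAYERS)}

def audit(root: str) -> list[str]:
    """Return human-readable violations of the dependency-direction rule."""
    violations = []
    for path in Path(root).rglob("*.py"):
        parts = path.relative_to(root).parts
        if not parts or parts[0] not in RANK:
            continue  # file is outside the layered tree
        src_rank = RANK[parts[0]]
        for line in path.read_text().splitlines():
            m = re.match(r"\s*(?:from|import)\s+(\w+)", line)
            if m and m.group(1) in RANK and RANK[m.group(1)] > src_rank:
                violations.append(f"{path}: '{parts[0]}' imports '{m.group(1)}'")
    return violations
```

Because the check is a deterministic scan rather than a judgment call, its output can gate CI and be re-run verbatim after a refactor.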
Use this agent when a pull request or local change set needs a deterministic, evidence-backed review against the harness-engineering layer model, golden principles, and taste invariants, and when an agent-to-agent review loop needs a dissenting reviewer that must be satisfied before merge. Examples:

<example>
Context: A pull request needs agent review before merge
user: "Review PR #214 against the repository's layer model and golden principles"
assistant: "I'll use the code-reviewer agent to audit the diff and return a structured review with required changes, optional suggestions, and a merge verdict."
<commentary>
Pre-merge review against mechanical rules is the primary trigger.
</commentary>
</example>

<example>
Context: An agent-to-agent review loop needs iteration
user: "Iterate with the code-reviewer until it has no blocking comments left"
assistant: "I'll use the code-reviewer agent to re-review after each update and report when all blocking comments are cleared."
<commentary>
The Ralph Wiggum Loop requires a reviewer whose satisfaction gates merge.
</commentary>
</example>

<example>
Context: A local change set needs review before opening a pull request
user: "Review my working-tree change before I open the PR"
assistant: "I'll use the code-reviewer agent to assess the local diff and flag anything that would block merge once the PR opens."
<commentary>
Pre-PR review shortens the iteration loop and reduces reviewer churn.
</commentary>
</example>
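The review-until-satisfied loop can be sketched as a simple control structure. A minimal illustration with stand-in callables; `review` and `apply_fixes` are hypothetical placeholders for the agent calls, not an API this plugin defines:

```python
def review_loop(review, apply_fixes, max_rounds: int = 5) -> bool:
    """Iterate review -> fix until the reviewer has no blocking comments left.

    review() returns the list of blocking comments; apply_fixes(comments)
    updates the change set. Both are stand-ins for agent invocations.
    """
    for _ in range(max_rounds):
        blocking = review()
        if not blocking:
            return True  # reviewer satisfied: merge may proceed
        apply_fixes(blocking)
    return False  # round budget exhausted; escalate to a human
```

The bounded round count matters: an agent-to-agent loop with no exit condition can oscillate indefinitely between two plausible-looking states.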
Use this agent when a repository needs report-only entropy cleanup analysis: documentation drift detection, stale cross-link checks, quality-grade review, or execution-plan freshness auditing. Examples:

<example>
Context: The repository needs a documentation health report
user: "Run doc gardening on this repo and tell me what drifted"
assistant: "I'll use the doc-gardener agent to scan for documentation entropy and return a structured report of the cleanup work needed."
<commentary>
This is the main report-only gardening workflow.
</commentary>
</example>

<example>
Context: Cross-links may have broken after a refactor
user: "Check whether all links in CLAUDE.md still resolve"
assistant: "I'll use the doc-gardener agent to validate the cross-links and report any broken or stale references."
<commentary>
Link validation is a standard entropy check for this role.
</commentary>
</example>

<example>
Context: Quality grades may no longer match actual code health
user: "Review whether QUALITY_SCORE.md still matches the current repository"
assistant: "I'll use the doc-gardener agent to compare the recorded grades with the codebase and report where updates are needed."
<commentary>
The agent audits quality grades but does not claim to edit them in its ordinary path.
</commentary>
</example>
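The stale cross-link check above is the most mechanical of these audits. A minimal sketch of resolving relative Markdown links against the filesystem; the regex and skip list are simplifications, and real gardening would also verify heading anchors and external URLs:

```python
import re
from pathlib import Path

# Captures the link target up to the first '#' (anchors are ignored here).
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)[^)]*\)")

def broken_links(doc_path: str) -> list[str]:
    """Return relative link targets in a Markdown file that do not exist on disk."""
    doc = Path(doc_path)
    broken = []
    for target in LINK_RE.findall(doc.read_text()):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links would need a network check, skipped here
        if not (doc.parent / target).exists():
            broken.append(target)
    return broken
```

Because the result is a plain list of unresolved targets, it slots directly into the report-only output this agent produces, with no edits made.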
Use this agent when a task needs autonomous end-to-end execution in an agent-first repository: reproduce a bug, implement a fix or feature, validate behavior through a running application, and back the result with observable evidence. Examples:

<example>
Context: A reported bug needs reproduction, repair, and proof
user: "Reproduce bug #142, fix it, and show that the flow now works"
assistant: "I'll use the e2e-driver agent to reproduce the failure in an isolated worktree, implement the fix, and validate the result with runtime evidence."
<commentary>
This is the core autonomous end-to-end workflow.
</commentary>
</example>

<example>
Context: A new feature needs runtime validation rather than code-only review
user: "Implement the connector timeout feature and prove it works in the app"
assistant: "I'll use the e2e-driver agent to deliver the feature and validate it against a running isolated instance."
<commentary>
The agent is the right fit when implementation must be validated through the actual application.
</commentary>
</example>

<example>
Context: A performance or reliability budget must be checked after a change
user: "Ensure checkout startup stays under 800 ms after this refactor"
assistant: "I'll use the e2e-driver agent to exercise the journey and verify the budget with observability data."
<commentary>
Observability-backed validation is a defining capability of this agent.
</commentary>
</example>
Use this agent when an execution plan needs authoring, lifecycle management, progress updates, or completion handling in `docs/exec-plans/`. Examples:

<example>
Context: Complex work needs a durable execution plan before implementation
user: "Write an execution plan for the new connector module"
assistant: "I'll use the spec-writer agent to create the execution plan in docs/exec-plans/active/."
<commentary>
Authoring a first-class execution plan is the primary trigger.
</commentary>
</example>

<example>
Context: Active work needs progress and decision tracking
user: "Update the auth refactor plan with today's progress and the retry decision"
assistant: "I'll use the spec-writer agent to append the progress entry and decision log without rewriting plan history."
<commentary>
Ongoing execution-plan lifecycle management is part of the ordinary path.
</commentary>
</example>

<example>
Context: Finished work needs proper completion handling
user: "Mark the billing plan complete and move it out of active/"
assistant: "I'll use the spec-writer agent to close the plan, move it to completed/, and record any technical-debt follow-up."
<commentary>
Completion and archival are durable responsibilities of this agent.
</commentary>
</example>
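The completion-and-archival step above can be sketched mechanically. A minimal illustration assuming the `docs/exec-plans/active/` and `docs/exec-plans/completed/` layout named in the description; the completion-entry format is a hypothetical example, not a format this plugin prescribes:

```python
from datetime import date
from pathlib import Path
import shutil

def complete_plan(repo_root: str, plan_name: str, debt_note: str = "") -> Path:
    """Move a plan from active/ to completed/, appending a completion entry."""
    active = Path(repo_root) / "docs" / "exec-plans" / "active" / plan_name
    completed_dir = Path(repo_root) / "docs" / "exec-plans" / "completed"
    completed_dir.mkdir(parents=True, exist_ok=True)

    entry = f"\n## Completed {date.today().isoformat()}\n"
    if debt_note:
        entry += f"\nTechnical-debt follow-up: {debt_note}\n"
    with active.open("a") as f:
        f.write(entry)  # append-only: plan history is never rewritten

    target = completed_dir / plan_name
    shutil.move(str(active), str(target))
    return target
```

Appending rather than rewriting mirrors the agent's lifecycle rule: progress entries and decisions accumulate, and history stays intact through archival.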
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Uses power tools
Uses Bash, Write, or Edit tools
Comprehensive C4 architecture documentation workflow with bottom-up code analysis, component synthesis, container mapping, and context diagram generation
AI-powered wiki generator for code repositories. Generates comprehensive, Mermaid-rich documentation with dark-mode VitePress sites, onboarding guides, deep research, and source citations. Inspired by OpenDeepWiki and deepwiki-open.
Claude + Obsidian knowledge companion. Sets up a persistent, compounding wiki vault. Covers memory management, session notetaking, knowledge organization, and agent context across projects. Based on Andrej Karpathy's LLM Wiki pattern. Optional DragonScale Memory extension adds hierarchical log folds, deterministic page addresses, embedding-based semantic tiling lint, and boundary-first autoresearch topic selection.
v9.33.0 — Routing config now feeds review/parallel/debate; develop dispatch, quota watcher cleanup, probe output-dir, and docs paths fixed. Run /octo:setup.
20 modular skills for idiomatic Go — each under 225 lines, backed by 48 reference files, 8 automation scripts (all with --json, --limit, --force), and 4 asset templates. Covers error handling, naming, testing, concurrency, interfaces, generics, documentation, logging, performance, and more. Activates automatically with progressive disclosure and conditional cross-references.