Lightweight spec-driven development. Learn, plan, build, check, ship.
npx claudepluginhub guygrigsby/claude-plugins --plugin sno

Execute the plan in parallel waves. Independent tasks run as concurrent agents; dependent tasks wait.
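The wave execution described above can be sketched as a topological grouping over the task dependency graph: each wave contains every task whose dependencies are already complete, so tasks within a wave can run concurrently. This is a minimal illustration of the idea, not the plugin's actual scheduler, and the task names are hypothetical:

```python
# Hypothetical sketch: group dependency-tracked tasks into parallel waves.
# Tasks in the same wave have no unmet dependencies, so they can run as
# concurrent agents; a dependent task waits for an earlier wave to finish.

def schedule_waves(tasks):
    """tasks: dict mapping task name -> set of dependency names."""
    remaining = {name: set(deps) for name, deps in tasks.items()}
    waves = []
    while remaining:
        # Every task whose dependencies are all satisfied joins this wave.
        ready = [name for name, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))
        for name in ready:
            del remaining[name]
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves

plan = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"api"},
    "docs": set(),
}
print(schedule_waves(plan))
# → [['docs', 'schema'], ['api'], ['ui']]
```

Here "schema" and "docs" share no dependencies, so they run in the first wave; "api" waits for "schema", and "ui" waits for "api".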
Verify the work matches the spec. Run tests, review changes, check acceptance criteria.
Quick mode for small tasks. Skip the full ceremony — describe what you want, get it done.
Understand the problem. Launch parallel research agents (Opus) for domain analysis, data modeling, and codebase scouting. Then interview for gaps. Produce a spec in .sno/spec.md.
Start a new sno cycle. Pulls latest, creates a branch, and initializes .sno/ state.
Break the spec into concrete tasks with dependency tracking. Produce a plan in .sno/plan.md.
Commit the work, open a PR to main, and close out the cycle.
Guide me through the next step of spec-driven development. Routes to the current phase: learn → plan → build → check → ship.
Add, list, or manage parking lot items for later. Usage: /sno:todo [item to add]
Use this agent during sno:plan to identify antipatterns, gotchas, and common mistakes for the specific tech stack and domain. Spawned by the plan command to run in parallel with the planner. <example> Context: User runs the plan command after learning phase is complete user: "/sno:plan" assistant: "I'll spawn parallel plan agents including the antipattern detector." <commentary> The plan phase benefits from proactive identification of antipatterns and gotchas specific to the tech stack and domain so the plan avoids known pitfalls. </commentary> </example>
Use this agent during sno:learn to explore the existing codebase for patterns, conventions, dependencies, and relevant existing code. Spawned by the learn command to run in parallel with other research agents. <example> Context: User is starting a new sno learn cycle in an existing project user: "/sno:learn" assistant: "I'll spawn parallel research agents including the codebase scout to understand what exists." <commentary> The learn phase needs to understand the existing codebase before writing a spec. </commentary> </example> <example> Context: User wants to add a feature to an existing codebase user: "Add webhook support to the API" assistant: "Let me scout the codebase to understand the current API structure and patterns." <commentary> Adding to existing code requires understanding what's already there. </commentary> </example>
Use this agent during sno:plan AFTER the draft plan is assembled to perform a critical review — checking for gaps, inconsistencies, missed risks, and spec drift. Runs after all other plan agents complete. <example> Context: Draft plan has been assembled from planner and other agent outputs user: (internal — spawned by plan command after draft is ready) assistant: "Running critical review on the draft plan before presenting to the user." <commentary> The critical reviewer is the final gate before the plan is shown to the user. It catches what the individual agents missed. </commentary> </example>
Use this agent during sno:learn to analyze data structures, relationships, and normalization. Designs toward 5NF. Spawned by the learn command to run in parallel with other research agents. <example> Context: User is starting a new sno learn cycle user: "/sno:learn" assistant: "I'll spawn parallel research agents including the data modeler to analyze data structures and relationships." <commentary> The learn phase needs data modeling to understand storage requirements before writing a spec. </commentary> </example> <example> Context: User describes entities with relationships user: "Users can have multiple organizations and each org has projects" assistant: "Let me model the data relationships and normalize to 5NF." <commentary> Multi-entity relationships require proper normalization analysis. </commentary> </example>
Use this agent during sno:learn to research the problem domain, identify bounded contexts, aggregates, and ubiquitous language using Domain-Driven Design principles. Spawned by the learn command to run in parallel with other research agents. <example> Context: User is starting a new sno learn cycle for a feature user: "/sno:learn" assistant: "I'll spawn parallel research agents including the domain researcher to analyze the problem space." <commentary> The learn phase needs deep domain understanding before writing a spec. This agent handles the DDD analysis. </commentary> </example> <example> Context: User describes a new system to build user: "We need a billing system that handles subscriptions and usage-based pricing" assistant: "Let me research the billing domain — bounded contexts, aggregates, and entities." <commentary> Complex domain requires DDD analysis to identify boundaries and language before planning. </commentary> </example>
Use this agent during sno:plan to analyze a spec and produce a dependency-tracked task plan optimized for parallel execution. Spawned by the plan command. <example> Context: User runs the plan command after learning phase is complete user: "/sno:plan" assistant: "I'll spawn the planner agent to break the spec into parallelizable tasks." <commentary> The plan phase needs deep analysis of the spec, domain model, and codebase to produce well-scoped tasks with accurate dependencies. </commentary> </example> <example> Context: User wants to re-plan after spec changes user: "The spec changed, re-plan this" assistant: "I'll spawn the planner to re-analyze the spec and rebuild the task graph." <commentary> Spec changes invalidate the existing plan. The planner re-analyzes from scratch. </commentary> </example>
Use this agent during sno:check to perform a full code review of the diff against the base branch. Reviews code quality, security, performance, consistency, and maintainability — the same things a senior engineer would check in a PR review. <example> Context: Build phase is complete, check phase is reviewing the work user: (internal — spawned by check command) assistant: "Running PR review on the diff against main." <commentary> The PR reviewer looks at the actual code changes, not just whether acceptance criteria are met. It catches issues that criterion-based checking misses: style drift, security holes, performance regressions, unclear naming, missing error handling at boundaries. </commentary> </example>
Use this agent during sno:learn to research how similar problems are solved in practice — prior art, industry patterns, domain-specific gotchas, and established architectural approaches. Spawned by the learn command to run in parallel with other research agents. <example> Context: User is starting a new sno learn cycle for a billing system user: "/sno:learn" assistant: "I'll spawn parallel research agents including the prior art researcher to understand how billing systems are typically built." <commentary> Before applying DDD or designing data models, we need to understand what the problem domain actually looks like in practice — what patterns are standard, what pitfalls exist, what others have learned. </commentary> </example> <example> Context: User wants to build a job scheduling system user: "We need a distributed task scheduler with retries and dead-letter queues" assistant: "Let me research how scheduling systems are typically built — established patterns, known edge cases, prior art." <commentary> Scheduling is a well-studied domain with known patterns (cron, delay queues, sagas) and known gotchas (timezone handling, at-least-once delivery, clock skew). Research before design. </commentary> </example>
Use this agent during sno:learn to generate specific, targeted questions about gaps and ambiguities found by the other research agents. Synthesizes research into questions for the user. Spawned by the learn command after parallel research completes. <example> Context: Research agents have completed their analysis and found open questions user: "/sno:learn" assistant: "Research is done. Now I'll use the requirements interviewer to ask you targeted questions about the gaps we found." <commentary> After parallel research, this agent synthesizes open questions into a focused interview. </commentary> </example>
Use this agent during sno:learn to analyze service layer design — API boundaries, orchestration, transaction scoping, and cross-cutting concerns. Spawned by the learn command to run in parallel with other research agents. <example> Context: User is starting a new sno learn cycle user: "/sno:learn" assistant: "I'll spawn parallel research agents including the service layer analyst." <commentary> The learn phase needs service layer analysis to discover API boundaries, transaction patterns, and cross-cutting concerns before the spec is written. </commentary> </example>
Use this agent during sno:plan to review user experience considerations — UI flows, CLI ergonomics, error messages, and interaction patterns. Spawned by the plan command to run in parallel with the planner. <example> Context: User runs the plan command after learning phase is complete user: "/sno:plan" assistant: "I'll spawn parallel plan agents including the UX reviewer." <commentary> The plan phase benefits from dedicated UX analysis to ensure the implementation plan accounts for how users actually interact with the system. </commentary> </example>
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams.
Context-Driven Development plugin that transforms Claude Code into a project management tool with a structured workflow: Context → Spec & Plan → Implement.
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.