Spec-driven development workflow with structured requirements, design, and task phases
npx claudepluginhub habib0x0/spec-driven-plugin --plugin spec-driven

Run user acceptance testing against spec requirements for formal sign-off
Brainstorm a feature idea through conversation until it's ready for /spec
Run full post-completion pipeline (accept, docs, release, retro)
Generate user-facing documentation from spec and implementation
Execute one spec task by running Claude in autonomous mode
Loop spec execution until all tasks are complete
Refine requirements or design for an existing spec
Generate release notes, changelog, and deployment checklist
Run a retrospective on a completed spec to capture lessons learned
Show status and progress of current spec
Regenerate tasks from updated spec requirements and design
Validate spec completeness and consistency
Run post-deployment smoke tests against a live environment
Start a new spec-driven development workflow for a feature
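Taken together, the commands above imply a typical lifecycle. The ordering below is inferred from the descriptions, not prescribed by the plugin, and the loop command's exact name is not given in this listing, so it is shown as a placeholder:

```text
/spec <feature>       # start: requirements and design phases
/spec-tasks           # break the completed design into implementation tasks
<loop command>        # execute spec tasks until all are complete
/spec-accept          # user acceptance testing against spec requirements
/spec-docs            # generate user-facing documentation
```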
Performs user acceptance testing by mapping completed tasks back to requirements and verifying traceability, non-functional requirements, and overall completeness. Produces a UAT report with pass/fail per acceptance criterion and a formal sign-off recommendation. Does NOT re-run functional tests (the spec-tester already handles that). Instead, reads tester results from tasks.md and focuses on what the tester does not cover: requirement traceability, non-functional verification, and formal acceptance. <example> Context: All tasks are complete and user wants to verify the right thing was built. user: "/spec-accept" assistant: "I'll run user acceptance testing against the spec requirements." <commentary> The acceptor reads requirements.md for EARS acceptance criteria, checks tasks.md for tester verification status, then evaluates traceability and non-functional requirements. </commentary> </example> <example> Context: User wants to verify a specific subset of requirements before full sign-off. user: "Can you verify just the authentication requirements?" assistant: "I'll run acceptance testing on the auth-related acceptance criteria." <commentary> The acceptor can filter by requirement ID or user story to test a subset. </commentary> </example>
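The traceability check described above can be sketched in a few lines. This is a hypothetical illustration only: the real acceptor parses requirements.md and tasks.md, and the in-memory data shapes here (`covers`, `verified`) are assumptions, not the plugin's actual format.

```python
def check_traceability(requirements, tasks):
    """Map requirement IDs to verified task coverage.

    requirements: dict of requirement ID -> acceptance criterion text
    tasks: list of dicts with "id", "covers" (requirement IDs), "verified" (bool)
    Returns (covered, uncovered) sets of requirement IDs.
    """
    covered = set()
    for task in tasks:
        if task["verified"]:  # only count tester-verified tasks
            covered.update(task["covers"])
    return covered & set(requirements), set(requirements) - covered

requirements = {
    "REQ-1": "When a user submits valid credentials, the system shall issue a session token.",
    "REQ-2": "If authentication fails three times, the system shall lock the account.",
}
tasks = [
    {"id": "T1", "covers": ["REQ-1"], "verified": True},
    {"id": "T2", "covers": ["REQ-2"], "verified": False},
]

covered, uncovered = check_traceability(requirements, tasks)
print(sorted(covered))    # REQ-1 is covered by a verified task
print(sorted(uncovered))  # REQ-2 has no verified coverage yet
```

A pass/fail UAT report would then flag every uncovered requirement as blocking sign-off.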
Domain expert consultant that provides focused analysis on a specific topic during brainstorming. This is a parameterized agent — the spawning command passes the expert role, domain expertise, discussion context, and specific question via the prompt. Returns structured analysis to the Lead. <example> Context: During /spec-brainstorm with experts enabled, the Lead needs security input on an authentication design. assistant: Spawns spec-consultant with Security Expert persona and specific question about token storage. <commentary> The consultant receives its persona and context from the spawn prompt. It reads relevant codebase files, then returns a structured analysis with concerns, recommendations, and design constraints. </commentary> </example> <example> Context: During /spec-brainstorm with experts enabled, the Lead needs architecture input on a multi-service integration. assistant: Spawns spec-consultant with Software Architect persona and question about service boundaries. <commentary> The same agent definition is reused with a different persona. The architect consultant analyzes the codebase structure and returns recommendations about component boundaries and data flow. </commentary> </example>
Fixes issues when the Tester or Reviewer rejects an implementation. Brings a fresh perspective to problems the Implementer couldn't solve.
Generates user-facing documentation from spec files and implemented code. Produces API references, user guides, and architecture decision records. <example> Context: Feature implementation is complete and user needs documentation. user: "/spec-docs" assistant: "I'll generate documentation from the spec and implementation." <commentary> The documenter reads requirements.md, design.md, and the actual code to produce comprehensive documentation targeted at end users and developers. </commentary> </example>
Implements code for a single task from the spec. Focuses only on writing code, not testing or reviewing.
Use this agent for the Requirements and Design phases of spec-driven development. This agent runs on Opus for deep reasoning about edge cases, security implications, and architectural tradeoffs. Examples: <example> Context: User has started a new spec and needs to create requirements and design. user: "I need to create a spec for user authentication" assistant: "I'll use the spec-planner agent to thoroughly analyze requirements and design the architecture." <commentary> User is starting spec creation. The planner agent uses Opus to deeply reason about requirements, identify edge cases, and design robust architecture. </commentary> </example> <example> Context: User is running /spec command and entering the requirements phase. user: "/spec payment-processing" assistant: "I'll use the spec-planner agent to carefully work through requirements and design for payment processing." <commentary> Payment processing is security-sensitive. The Opus model will catch edge cases and security considerations that faster models might miss. </commentary> </example> <example> Context: User wants to redesign part of their spec. user: "I need to rethink the architecture for our real-time notifications feature" assistant: "I'll use the spec-planner agent to analyze the design with deep reasoning." <commentary> Architectural redesign benefits from Opus's superior reasoning about tradeoffs and system design. </commentary> </example>
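For illustration, the EARS-style acceptance criteria the planner writes into requirements.md might look like the following. The requirement content is hypothetical; only the use of EARS (Easy Approach to Requirements Syntax) templates is stated in this listing:

```text
REQ-3 (event-driven):
  When a payment request times out, the system shall retry at most twice
  and then surface a retryable error to the caller.

REQ-4 (unwanted behavior):
  If a card token has expired, then the system shall reject the charge
  without contacting the payment gateway.
```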
Reviews code quality, security, and architectural alignment after Tester verifies functionality. Uses Opus for deep reasoning about subtle issues.
Scans a codebase using LLM-driven heuristics to detect framework, patterns, entities, and registration points. Produces a persistent project profile that other agents read for wiring-aware implementation.
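The persistent project profile might resemble the following sketch. The field names and values here are illustrative assumptions, not the plugin's actual schema:

```json
{
  "framework": "express",
  "language": "typescript",
  "patterns": ["repository", "dependency-injection"],
  "entities": ["User", "Session"],
  "registration_points": {
    "routes": "src/routes/index.ts",
    "migrations": "db/migrations/"
  }
}
```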
Use this agent to break down a completed spec design into implementation tasks. This agent runs on Sonnet for fast, structured task generation. Examples: <example> Context: User has completed requirements and design, now needs tasks. user: "The requirements and design are done, now break it down into tasks" assistant: "I'll use the spec-tasker agent to create structured implementation tasks from your spec." <commentary> Design is complete. Task breakdown is structured work that Sonnet handles efficiently -- read the spec, generate ordered tasks with dependencies. </commentary> </example> <example> Context: User runs /spec-tasks to regenerate tasks after spec changes. user: "/spec-tasks" assistant: "I'll use the spec-tasker agent to regenerate tasks from the updated spec." <commentary> Task regeneration follows a clear pattern. Sonnet is fast and accurate for this structured decomposition. </commentary> </example> <example> Context: Requirements and design phases just completed within /spec workflow. user: "Let's create the implementation tasks now" assistant: "I'll use the spec-tasker agent to break this down into trackable tasks." <commentary> Transitioning from design to tasks. The tasker agent picks up where the planner left off, using the structured spec to generate tasks. </commentary> </example>
Verifies that implemented tasks actually work. Uses Playwright for UI testing, runs test suites, and only marks Verified: yes after real verification.
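A tasks.md entry, as implied by the tasker and tester descriptions, might look like this. The exact field layout is an assumption; only the existence of ordered tasks with dependencies and the Verified flag is stated above:

```markdown
## Task 3: Wire session middleware into the router
Depends on: Task 1, Task 2
Status: complete
Verified: yes   <!-- set by the tester only after real verification -->
```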
Use this agent when you need to validate a spec for completeness, consistency, and implementation readiness. Examples: <example> Context: User has finished creating a spec and wants to verify it's ready for implementation. user: "I've finished the spec for user-authentication. Can you validate it?" assistant: "I'll use the spec-validator agent to check the spec for completeness and consistency." <commentary> User explicitly requests validation of a completed spec. The agent will check all three files for completeness and cross-reference consistency. </commentary> </example> <example> Context: User is about to start implementation and wants to ensure the spec is solid. user: "Before I start coding, can you check if the spec is complete?" assistant: "Let me validate the spec to ensure it's ready for implementation." <commentary> User wants pre-implementation validation. The agent should check requirements coverage, design completeness, and task traceability. </commentary> </example> <example> Context: User has made changes to requirements and wants to verify consistency. user: "I updated the requirements. Are they still consistent with the design?" assistant: "I'll validate the spec to check for any consistency issues between requirements and design." <commentary> After spec changes, validation ensures documents remain aligned and no gaps were introduced. </commentary> </example>