Execute specification-driven development workflows: specify features to generate validated specs, technical requirements, API contracts, and TDD task lists via multi-agent orchestration; implement code in strict red-green-refactor cycles with brownfield integration; and audit artifacts against the project constitution for quality gates.
npx claudepluginhub deepeshbodh/human-in-loop --plugin humaninloop
Comprehensive artifact analysis with reviewer-friendly output mode. Consolidates checklist and analyze functionality.
Execute the implementation plan using DAG-based workflow execution
Execute the multi-agent implementation planning workflow with specialized agents and validation loops
Create or update the project constitution using the Principal Architect agent.
Create a feature specification using DAG-based workflow execution
Execute the multi-agent task generation workflow with specialized agents and validation loops
[DEPRECATED] Use /humaninloop:plan instead. The techspec workflow has been merged into the unified plan command.
Adversarial reviewer who stress-tests specifications, planning artifacts, and task artifacts by finding gaps, challenging assumptions, and identifying edge cases. Asks the hard "what if" questions that prevent costly surprises during implementation. <example> Context: User has a feature spec they want reviewed before planning user: "Can you review this spec for gaps before we start planning?" assistant: "I'll use the devils-advocate to stress-test the specification and surface any missing requirements, ambiguities, or edge cases." <commentary> Spec review request triggers adversarial review of requirements completeness. </commentary> </example> <example> Context: Planning artifacts (research, data model, contracts) need validation user: "We finished the data model — is it ready for the next phase?" assistant: "I'll use the devils-advocate to review the data model for design gaps, cross-artifact consistency, and completeness." <commentary> Artifact readiness question triggers structured review with verdict. </commentary> </example> <example> Context: Task artifacts need validation before implementation begins user: "Review the task breakdown to make sure nothing is missing" assistant: "I'll use the devils-advocate to validate the task artifacts for vertical slice integrity, TDD structure, and traceability." <commentary> Task review request triggers adversarial validation of implementation plan. </commentary> </example>
Senior technical leader who brings governance judgment. Evaluates whether standards are enforceable, testable, and justified. Rejects vague aspirations in favor of actionable constraints. <example> Context: User is starting a new project and needs governance principles established user: "We need a constitution for this project. Set up the governance standards." assistant: "I'll use the principal-architect to establish enforceable governance principles with the Three-Part Rule: every standard gets enforcement, testability, and rationale." <commentary> Greenfield governance establishment is the principal-architect's core responsibility. </commentary> </example> <example> Context: User has technical artifacts and wants to verify they can actually be built together user: "We have requirements, constraints, and NFRs defined. Can this system actually be built as specified?" assistant: "I'll use the principal-architect to run a feasibility intersection review — checking for contradictions across the artifacts." <commentary> Cross-artifact feasibility review catches impossible combinations that no single artifact reveals. </commentary> </example> <example> Context: User has an existing codebase and wants to codify its patterns into governance user: "We need to formalize the standards for this legacy codebase without breaking what already works." assistant: "I'll use the principal-architect to analyze existing patterns and create a brownfield constitution that codifies what exists and requires what's missing." <commentary> Brownfield governance requires understanding existing patterns before imposing new standards. </commentary> </example>
Senior QA engineer who treats verification as an engineering discipline. Executes structured verification tasks, captures evidence, and gates cycle completion on human approval. <example> Context: Implementation is complete and needs verification before approval user: "Implementation is done. Run the verification tasks." assistant: "I'll use the qa-engineer to execute TEST: tasks, run quality gates, and present a checkpoint with evidence." <commentary> Post-implementation verification is the qa-engineer's core responsibility. </commentary> </example> <example> Context: User wants to verify a specific feature works against real infrastructure user: "Can you verify that the API endpoint returns the right response?" assistant: "I'll use the qa-engineer to execute the verification against real infrastructure and capture evidence." <commentary> Real infrastructure testing with evidence capture — not mocks, not assumptions. </commentary> </example> <example> Context: Quality gates need to be run as part of cycle verification user: "Run lint, build, and tests before we close this cycle." assistant: "I'll use the qa-engineer to execute quality gates and include results in the verification report." <commentary> Quality gate execution is deterministic verification work owned by the qa-engineer. </commentary> </example>
Senior analyst who transforms vague feature requests into precise, implementable specifications. Excels at eliciting requirements through structured discovery, identifying assumptions, and producing clear user stories with measurable acceptance criteria.
Staff Software Engineer who implements code via TDD discipline. Executes cycle task lists with red/green/refactor rigor, handles brownfield integration, and produces honest cycle reports. <example> Context: Supervisor dispatches Staff Engineer for a normal cycle execution user: "Read your instructions from: specs/001-feature/.workflow/context.md" assistant: "I'll read the context, parse the cycle task list, and execute each task through red/green/refactor — writing failing tests first, implementing to pass, then marking tasks complete and producing the cycle report." <commentary> Normal cycle execution: parse tasks, TDD each one, produce cycle-report.md. </commentary> </example> <example> Context: Supervisor dispatches Staff Engineer in fix mode after final-validation failure user: "Read your instructions from: specs/001-feature/.workflow/context.md" assistant: "I'll read the final-validation report, trace each failure to the responsible code, fix the specific issues without cycle boundary constraints, and produce a fix-pass cycle report." <commentary> Fix mode: unconstrained by cycle boundaries, scoped to specific failures from the validation report. </commentary> </example> <example> Context: Supervisor dispatches Staff Engineer for retry after checkpoint failure user: "Read your instructions from: specs/001-feature/.workflow/context.md" assistant: "I'll read the checkpoint report, identify which tasks failed, re-open only those tasks, fix them through TDD, and produce an updated cycle report with incremented attempt number." <commentary> Retry: targeted rework of failed tasks only, not full re-implementation. </commentary> </example>
Produces decision-ready briefings, manages DAG graph mechanics, and advances workflow state. Combines strategic analysis (briefings, report parsing, recommendations) with graph operations (node assembly, pass freezing, status updates). The Supervisor delegates all DAG operations and state analysis to this single agent. <example> Context: Supervisor needs a briefing and first node assembled at start of pass user: '{"action": "brief-and-assemble", "workflow": "specify", "pass_number": 2, "dag_path": "...", "catalog_path": "...", "feature_dir": "..."}' assistant: "I'll read the DAG, catalog, and strategy skills to produce a briefing, auto-select the top recommendation, assemble the node via hil-dag, construct the domain agent prompt, and return the combined result." <commentary> Start-of-pass action: briefing + auto-assembly in one call. </commentary> </example> <example> Context: Domain agent finished; Supervisor needs results parsed and next node assembled user: '{"action": "parse-and-advance", "node_id": "analyst-review", "pass_number": 1, "dag_path": "...", "catalog_path": "...", "feature_dir": "..."}' assistant: "I'll read the analyst's report, extract structured data, record via hil-dag, determine the next action, assemble the next node, and return the dispatch prompt." <commentary> Post-agent action: parse + record + auto-advance in one call. </commentary> </example> <example> Context: Supervisor collected user input for a decision node user: '{"action": "update-and-advance", "node_id": "human-clarification", "status": "decided", "answers": {...}, "dag_path": "...", "catalog_path": "...", "feature_dir": "..."}' assistant: "I'll write the answers to the artifact path, update the node status via hil-dag, and assemble the next recommended node." <commentary> Supervisor-owned node completion + auto-advance in one call. </commentary> </example>
Senior architect who transforms planning artifacts into implementation tasks through vertical slicing and TDD discipline. Produces task mappings and cycle-based task lists that enable incremental, testable delivery.
Senior systems engineer who bridges the gap between business specifications and technical implementation through requirements analysis AND concrete design decisions. Decomposes business intent into precise, traceable technical requirements, then transforms those requirements into entity models, API contracts, and technology decisions. <example> Context: User has a completed business specification and needs technical analysis and design user: "We have the spec for user authentication. We need to break it down technically and design the system." assistant: "I'll use the technical-analyst to translate the business specification into technical requirements, constraints, decisions, NFRs, data model, and API contracts." <commentary> Business spec needs full analysis-to-design translation in a unified workflow. </commentary> </example> <example> Context: A feature spec mentions "the system should be fast" without measurable targets user: "The spec says users expect fast responses but doesn't define what fast means technically." assistant: "I'll use the technical-analyst to define measurable non-functional requirements from the business expectations." <commentary> Vague business expectations need translation into measurable technical targets. </commentary> </example> <example> Context: A specification references external services without documenting integration details user: "The spec mentions Stripe for payments and SendGrid for email but doesn't cover failure scenarios." assistant: "I'll use the technical-analyst to map system integrations with protocols, failure modes, and fallback strategies." <commentary> External dependencies need systematic cataloguing with failure mode analysis before design. </commentary> </example> <example> Context: Technical requirements are complete and design artifacts are needed user: "Requirements and constraints are locked. Now we need the data model and API contracts." assistant: "I'll use the technical-analyst to produce the data model with sensitivity annotations and API contracts with integration boundaries." <commentary> Design work builds on analysis artifacts — same agent maintains full context. </commentary> </example>
Senior interface designer who analyzes visual inspiration from existing apps to extract design patterns, build actionable design systems, and craft screen layouts and interaction flows for projects. <example> Context: User has screenshots from apps they admire and wants to build a design system user: "I have screenshots from Stripe's dashboard and Linear's sidebar. Can you extract a design system from these?" assistant: "I'll use the ui-designer to analyze those screenshots and extract a cohesive design system with tokens, components, and layout patterns." <commentary> User has concrete inspiration screenshots and needs structured design extraction — core ui-designer territory. </commentary> </example> <example> Context: User wants to design a multi-screen flow based on inspiration from a mobile app user: "I love how Notion handles its page creation flow on mobile. Can you help me design something similar for my app?" assistant: "I'll use the ui-designer to analyze Notion's flow and create an interaction flow map with screen layouts adapted to your project." <commentary> Flow design from reference app requires both screenshot analysis and flow mapping expertise. </commentary> </example> <example> Context: User wants to understand the design patterns behind a web application user: "Can you break down this screenshot of Figma's toolbar? I want to understand the spacing, typography, and component patterns." assistant: "I'll use the ui-designer to produce a detailed component inventory and design token extraction from that screenshot." <commentary> Detailed pattern extraction from a single screenshot — the agent's foundational capability. </commentary> </example>
This skill MUST be invoked when the user says "analyze codebase", "scan project", "detect tech stack", "codebase analysis", "collision risk", or "brownfield". SHOULD also invoke when user mentions "existing code" or "project context".
This skill MUST be invoked when the user says "brainstorm", "deep analysis", "let's think through", "analyze this with me", or "help me think through". SHOULD also invoke when feature descriptions lack Who/Problem/Value clarity during specification enrichment.
This skill MUST be invoked when the user says "analyze screenshot", "extract design tokens", "pull colors from screenshot", "component inventory", "break down this UI", or "design extraction". SHOULD also invoke when user mentions "screenshot", "color palette", "typography", "spacing", or "component catalog".
This skill MUST be invoked when the user says "review spec", "find gaps", "what's missing", or "clarify requirements". SHOULD also invoke when reviewing spec.md for completeness. Focuses on product decisions and generates clarifying questions with concrete options.
This skill MUST be invoked when the user says "write principles", "define governance", "create constitution", or "write a constitution". SHOULD also invoke when user mentions "governance", "principles", "enforcement", or "amendment process". Core skill for greenfield projects.
This skill MUST be invoked when the user says "assemble design system", "build design system", "merge extractions", "consolidate tokens", "unify design tokens", or "synthesize design system". SHOULD also invoke when user mentions "design system", "token consolidation", "component normalization", or "multi-screenshot synthesis".
This skill MUST be invoked when the user says "write requirements", "define success criteria", "identify edge cases", or "functional requirements". SHOULD also invoke when user mentions "FR-", "SC-", "RFC 2119", "MUST SHOULD MAY", or "edge cases". Produces technology-agnostic requirements in FR-XXX format with measurable success criteria.
This skill MUST be invoked when the user says "roadmap", "gap analysis", "evolution plan", "brownfield gaps", or "improvement priorities". Use for creating evolution roadmaps and identifying improvement priorities for brownfield projects.
This skill MUST be invoked when the user says "write technical requirements", "define constraints", "define NFRs", "map integrations", "classify data sensitivity", or "infrastructure requirements". SHOULD also invoke when user mentions "TR-", "C-", "NFR-", "IP-", "INT-", "DS-", "non-functional", "system integration", "infrastructure provisioning", or "data classification". Produces three traceable analysis artifacts from business specifications.
This skill MUST be invoked when the user says "write user stories", "define acceptance criteria", "prioritize features", "user story", "acceptance scenario", or "Given When Then". SHOULD also invoke when user mentions "priority", "P1", "P2", "P3", or "backlog". Produces prioritized user stories with independently testable acceptance scenarios.
This skill MUST be invoked when the user says "create constitution for existing codebase", "codify existing patterns", "brownfield constitution", "essential floor", or "emergent ceiling". SHOULD also invoke when user mentions "brownfield", "evolution roadmap", or "legacy project". Extends authoring-constitution.
This skill MUST be invoked when the user says "brownfield integration", "extend existing code", or "modify existing file". SHOULD also invoke when encountering tasks with `[EXTEND]` or `[MODIFY]` markers, implementing against existing codebases, or integrating with established interfaces.
This skill MUST be invoked when the user says "execute cycle", "implement tasks", "TDD cycle", or "red green refactor". SHOULD also invoke when implementing cycle task lists, encountering `[EXTEND]`/`[MODIFY]` markers, generating cycle reports, or handling retry after checkpoint failure.
This skill MUST be invoked when the user says "design API", "map endpoints", "define schemas", "API contract", "REST API design", or "OpenAPI spec". SHOULD also invoke when user mentions "endpoint", "schema", "contract", or "HTTP".
This skill MUST be invoked when the user says "extract entities", "define data model", "model relationships", "entity modeling", or "domain model". SHOULD also invoke when user mentions "relationship", "cardinality", "state machine", or "data attributes".
This skill MUST be invoked when the user says "map the flow", "connect these screens", "build user journey", "navigation mapping", "interaction flow", or "flow diagram". SHOULD also invoke when user mentions "screen transitions", "user flow", "navigation architecture", "entry points", "dead ends", or "orphaned screens". Provides structured procedure for connecting analyzed screenshots into coherent interaction flows with navigation logic and journey definitions.
This skill MUST be invoked when the user says "interface design", "UI design", "component design", "visual design", "styling", "dark mode", "spacing", "typography hierarchy", or "surface elevation". SHOULD also invoke when user mentions "frontend aesthetics" or "UI components".
This skill MUST be invoked when the user says "evaluate alternatives", "make technology choice", "document decision", "technology choice", "trade-offs", "decision record", "rationale", or "why we chose". SHOULD also invoke when user mentions "alternatives" or "NEEDS CLARIFICATION".
This skill MUST be invoked when the user says "create task mapping", "structure implementation", "define cycles", "vertical slice", "TDD", "test first", "cycle structure", or "testable increment". SHOULD also invoke when user mentions "red green refactor" or "implementation tasks".
Universal workflow patterns (validation, gap classification, pass evolution, halt escalation) consumed by the State Analyst to inform Supervisor briefings.
This skill MUST be invoked when the user says "implementation strategy", "cycle sequencing", or "implement workflow patterns". SHOULD also invoke when user mentions "execute-then-verify", "targeted retry", "fix pass", or "implementation escalation". Provides implementation-workflow patterns consumed alongside strategy-core for targeted briefings.
Specification-workflow patterns (input assessment, produce-then-validate, gap-informed revision) consumed by the State Analyst alongside strategy-core for targeted Supervisor briefings.
This skill MUST be invoked when the user says "sync CLAUDE.md", "update agent instructions", "propagate constitution changes", "CLAUDE.md sync", or "constitution alignment". SHOULD also invoke when user mentions "agent instructions" updates.
This skill MUST be invoked when the user says "TEST:", "TEST:VERIFY", "TEST:CONTRACT", "execute verification", "run test task", or "verification test". SHOULD also invoke when the user says "Setup/Action/Assert", "verify against infrastructure", or "capture evidence".
This skill MUST be invoked when the user says "create worktree", "isolated workspace", "parallel branch work", "git worktree", "feature isolation", or "branch workspace". SHOULD also invoke when starting feature work that needs isolation from current workspace.
This skill MUST be invoked when the user says "report a bug", "create issue", "log issue", "file a bug", "raise an issue", "create bug", or "feature request". Use for GitHub issue creation, lifecycle management, triage, and structured issue tracking.
This skill MUST be invoked when the user says "review constitution", "validate principles", "check quality", "constitution review", "quality check", "version bump", "anti-patterns", or "constitution audit".
This skill MUST be invoked when the user says "review research", "review data model", "review contracts", "plan quality", "phase review", or "design gaps". SHOULD also invoke when user mentions "artifact review" or "planning validation".
This skill MUST be invoked when the user says "review task mapping", "review tasks", "validate cycles", or "check TDD structure". SHOULD also invoke when user mentions "task quality", "cycle review", "vertical slice", or "task artifact".
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile frameworks, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.