Provides specification workflow patterns like input assessment, produce-then-validate, and gap-informed revision for validated feature specs via State Analyst and Supervisor.
Workflow-specific patterns for the specification workflow. Consumed by the State Analyst alongside strategy-core to produce targeted Supervisor briefings.
Goal: produce a validated feature specification whose advocate verdict is ready.
Success criteria: spec.md exists with user stories and functional requirements; advocate-report.md verdict: ready.

Input assessment: Sparse input (missing Who/Problem/Value) typically benefits from enrichment before analysis. Detailed input with clear problem framing can go directly to the analyst. Consider the domain context: a sparse input in a well-understood domain may not need enrichment if the analyst can infer context from existing artifacts (constitution, codebase analysis).
Rationale: Enrichment adds a full interaction round with the user. When the input already contains sufficient signal, skipping enrichment saves time without sacrificing quality. The analyst is capable of working with moderately detailed input.
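The assessment heuristic above can be sketched in a few lines. This is an illustrative sketch, not part of the skill: the field names (who, problem, value) mirror the Who/Problem/Value framing, and the well-understood-domain flag is an assumed stand-in for whatever signal the analyst gets from the constitution or codebase analysis.

```python
from dataclasses import dataclass


@dataclass
class FeatureInput:
    # Who/Problem/Value framing fields; names are illustrative assumptions
    who: str = ""
    problem: str = ""
    value: str = ""
    well_understood_domain: bool = False  # constitution/codebase cover the context


def needs_enrichment(req: FeatureInput) -> bool:
    """True when the input is too sparse for the analyst to start directly.

    Sparse means any of Who/Problem/Value is missing. A sparse input in a
    well-understood domain still skips enrichment, since the analyst can
    infer the missing context from existing artifacts.
    """
    sparse = not (req.who and req.problem and req.value)
    return sparse and not req.well_understood_domain
```

The rule errs toward skipping the extra interaction round whenever the analyst plausibly has enough signal, matching the rationale above.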
Produce-then-validate is the primary pattern: an agent produces an artifact, then a different agent (or gate) validates it. For specification, this means the analyst produces spec.md and the advocate reviews it. Resist the urge to skip validation even when the analyst report looks comprehensive.
Rationale: Self-review is unreliable. The analyst's blind spots are systematic — a separate reviewer with adversarial framing catches what the producer cannot see in their own work.
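The produce-then-validate loop can be sketched as follows. The verdict strings and the three-round cap are assumptions for illustration; the point is that validation runs on every pass and its gaps feed the next revision.

```python
def produce_then_validate(produce, validate, revise, max_rounds=3):
    """Producer/validator loop: one agent produces, a separate gate reviews.

    The validate step is never skipped, however comprehensive the first
    draft looks; flagged gaps feed the next (targeted) revision pass.
    """
    spec = produce()
    for _ in range(max_rounds):
        verdict, gaps = validate(spec)  # e.g. ("ready", []) or ("needs-revision", [...])
        if verdict == "ready":
            return spec
        spec = revise(spec, gaps)  # revision informed by the specific gaps
    raise RuntimeError("advocate verdict still not ready after max_rounds")
```

Keeping produce, validate, and revise as separate callables mirrors the separate-agent framing: the producer never grades its own work.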
Gap-informed revision: when the advocate identifies gaps, the next analyst pass should be informed by those specific gaps. The analyst should see what was flagged and focus revision effort there rather than rewriting the entire spec. Use informed-by edges to carry gap context forward.
Rationale: Unfocused revision wastes effort on sections that were already adequate, and may introduce new issues in previously-clean areas. Targeted revision converges faster.
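One way to carry gap context forward is a briefing that partitions the spec into flagged and clean sections. The gap shape (a dict naming its section) is an assumption; the skill does not prescribe a data format.

```python
def build_revision_briefing(spec_sections: dict, gaps: list) -> dict:
    """Scope the next analyst pass to the sections the advocate flagged.

    Clean sections are carried forward untouched, and the gap context
    travels with the briefing (an informed-by edge) so revision effort
    lands only where the advocate found problems.
    """
    flagged = {gap["section"] for gap in gaps}
    return {
        "revise": {name: text for name, text in spec_sections.items() if name in flagged},
        "keep": [name for name in spec_sections if name not in flagged],
        "gap_context": gaps,
    }
```

Listing the untouched sections explicitly in "keep" tells the analyst what not to rewrite, which is half the point of targeted revision.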
Research before asking: when gaps are knowledge-based (factual unknowns about the codebase, existing protocols, or technical constraints), targeted research often resolves them faster than asking the user. Try research first; escalate to the user only if research is inconclusive or the question is genuinely about preference.
Rationale: Users should not be asked questions that the codebase can answer. Research is faster and more precise for factual questions. Reserve user interaction for genuine decisions.
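The triage rule above can be sketched as a small router. The "kind" field and the convention that an inconclusive research step returns None are assumptions made for the sketch.

```python
def resolve_gap(gap: dict, research) -> tuple:
    """Route one advocate gap: research first, user only as fallback.

    Preference questions go straight to the user, since only they can
    answer them. Knowledge gaps go to targeted research, escalating to
    the user only when research comes back inconclusive (modeled here
    as research() returning None).
    """
    if gap["kind"] == "preference":
        return ("ask-user", None)
    answer = research(gap["question"])
    if answer is None:
        return ("ask-user", None)  # research inconclusive; escalate
    return ("research", answer)
```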