Provides universal workflow patterns—validation, gap classification, pass evolution, halt escalation—for State Analyst to inform Supervisor briefings in agentic coding workflows.
Universal workflow patterns consumed by the State Analyst to inform Supervisor briefings. These are advisory patterns with rationale — not prescriptive rules. The Supervisor uses judgment informed by these patterns and may deviate when context warrants it.
Every agent output should pass through a gate before it is consumed downstream or counted toward a milestone. Even when confident in quality, validation catches blind spots. This is the most consistently valuable pattern across all workflows.
Rationale: Skipping validation is the single most common source of wasted iterations. A "perfect-looking" analyst output that bypasses the advocate gate frequently contains gaps that surface later — costing a full extra pass to fix.
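As a sketch, the gate pattern amounts to a loop that refuses to release an artifact downstream until a validator approves it. All names here (`GateVerdict`, `gated`, `produce`, `validate`) are illustrative assumptions, not part of any real API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateVerdict:
    approved: bool
    gaps: list[str]  # issues the validator surfaced on this pass

def gated(produce: Callable[[], str],
          validate: Callable[[str], GateVerdict],
          max_passes: int = 3) -> str:
    """Re-run produce/validate until the gate approves or passes run out."""
    for _ in range(max_passes):
        artifact = produce()
        verdict = validate(artifact)
        if verdict.approved:
            return artifact  # only now is it safe for downstream consumption
    # the gate never approved: surface the situation rather than force through
    raise RuntimeError("gate never approved the artifact; escalate to the user")
```

The key design point is that the "perfect-looking" happy path still goes through `validate` — confidence in the producer is never a reason to bypass the gate.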
Advocate rejections and gate verdicts carry different gap types that suggest different responses:
Knowledge gaps (factual unknowns, investigable questions) are often resolvable through targeted research without involving the user. Examples: "What auth protocol does the existing system use?", "What's the current database schema?"
Preference gaps (subjective choices, business decisions) require user input — research cannot resolve what only humans can decide. Examples: "Should notifications be opt-in or opt-out?", "What's the acceptable latency threshold?"
Scope gaps (too broad, multiple concerns, feasibility questions) may warrant halting or splitting rather than iterating within the current workflow. Examples: "This feature spans three bounded contexts", "The requirement conflicts with an architectural constraint."
Rationale: Treating all gaps identically leads to wasted effort — researching preferences yields nothing, asking users about codebase facts wastes their time, and iterating on scope issues without splitting never converges.
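The three-way classification above is essentially a routing table. A minimal sketch, with all names (`GapType`, `next_action`) assumed for illustration:

```python
from enum import Enum, auto

class GapType(Enum):
    KNOWLEDGE = auto()   # factual unknown, investigable
    PREFERENCE = auto()  # subjective or business decision
    SCOPE = auto()       # too broad, multiple concerns, infeasible

def next_action(gap: GapType) -> str:
    """Route each gap type to the response the patterns above suggest."""
    return {
        GapType.KNOWLEDGE: "research",       # resolvable without the user
        GapType.PREFERENCE: "ask_user",      # only a human can decide this
        GapType.SCOPE: "halt_or_split",      # iterating in place won't converge
    }[gap]
```

Misrouting is what the rationale warns about: sending a `PREFERENCE` gap to `"research"` yields nothing, and sending a `KNOWLEDGE` gap to `"ask_user"` wastes the user's time.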
Early passes establish structure. Later passes refine based on feedback. By pass 3+, assess whether gaps are converging (good — the artifact is improving) or recurring (bad — something structural is wrong). Diminishing returns are a signal to surface the situation to the user.
Rationale: Recurring gaps indicate a mismatch between what the workflow can produce and what the user needs. More iterations won't fix a structural problem — only user intervention or scope adjustment will.
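One way to make "converging vs. recurring" concrete is to track the gap set per pass and look for gaps that survive every recent pass. This is an illustrative sketch, not a prescribed mechanism; `recurring_gaps` and the three-pass window are assumptions:

```python
def recurring_gaps(passes: list[set[str]]) -> set[str]:
    """Gaps present in every one of the last three passes: a structural signal."""
    if len(passes) < 3:
        return set()  # too early to judge; the pattern applies from pass 3+
    recent = passes[-3:]
    return set.intersection(*recent)

# Example: "auth" survives three consecutive passes, so more iteration is
# unlikely to fix it — surface the situation to the user instead.
history = [{"auth", "schema"}, {"auth", "latency"}, {"auth"}]
# recurring_gaps(history) == {"auth"}
```

A shrinking gap set across passes suggests convergence; a stable intersection suggests the structural mismatch the rationale describes.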
Never force-exit without user consent. When halting conditions arise, present the situation with clear options (continue iterating, accept current state, stop and review manually). The user always has final say on workflow termination.
Rationale: Autonomous halting destroys user trust. The workflow exists to serve the user's intent, and only the user can judge when "good enough" has been reached.
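The escalation itself can be sketched as a prompt with a fixed option set and no default exit path — the workflow never decides for the user. `HALT_OPTIONS`, `resolve_halt`, and `ask_user` are hypothetical names for illustration:

```python
from typing import Callable

HALT_OPTIONS = (
    "continue",  # keep iterating within the current workflow
    "accept",    # accept the current state as-is
    "review",    # stop and hand off for manual review
)

def resolve_halt(ask_user: Callable[[str], str]) -> str:
    """Present the halting situation; the user always has final say."""
    choice = ask_user(
        "A halting condition arose. How should the workflow proceed? "
        f"Options: {', '.join(HALT_OPTIONS)}"
    )
    if choice not in HALT_OPTIONS:
        raise ValueError(f"unknown choice: {choice!r}")
    return choice  # termination happens only on an explicit user decision
```

Note there is no fallback branch that exits on its own: an unrecognized answer is an error to re-ask, never an implicit termination.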