From groundwork
Runs 3-agent verification loop on complete task lists to ensure PRD coverage, architecture adherence, and design system compliance before implementation.
npx claudepluginhub etr/groundwork

This skill uses the workspace's default tool permissions.
Autonomous verification loop that runs 3 specialized agents to validate task list completeness and alignment before implementation begins.
Your current effort level is {{effort_level}}.
Skip this step silently if effort is high or higher AND you are Sonnet or Opus.
If effort is below high, you MUST show the recommendation prompt — regardless of model.
If you are not Sonnet or Opus, you MUST show the recommendation prompt, regardless of effort level.
When showing the prompt, use AskUserQuestion:
{
"questions": [{
"question": "Do you want to switch? Managing the fix loop across 3 validation agents benefits from consistent reasoning.\n\nTo switch: cancel, run `/effort high` (and `/model sonnet` if on Haiku), then re-invoke this skill.",
"header": "Recommended: Sonnet or Opus at high effort",
"options": [
{ "label": "Continue" },
{ "label": "Cancel — I'll switch first" }
],
"multiSelect": false
}]
}
If the user selects "Cancel — I'll switch first": output the switching commands above and stop. Do not proceed with the skill.
Before loading specs, ensure project context is resolved:
- Does .groundwork.yml exist at the repo root?
- Is {{project_name}} non-empty?

If either check fails, invoke Skill(skill="groundwork:project-selector") to select a project, then restart this skill. Once resolved, announce the active project: {{project_name}}, with specs at {{specs_dir}}/ (configured in .groundwork.yml).

If the current working directory belongs to a different project than the selected one, use AskUserQuestion:
"You're working from <cwd> (inside [cwd-project]), but the selected Groundwork project is [selected-project] ([selected-project-path]/). What would you like to do?"
- "Switch to [cwd-project]"
- "Stay with [selected-project]"

If the user switches, invoke Skill(skill="groundwork:project-selector"). Afterwards, {{specs_dir}}/ paths will resolve to the correct location.
Collect inputs for the agents:
- task_list ← Read {{specs_dir}}/tasks.md (or {{specs_dir}}/tasks/ directory)
- product_specs ← Read {{specs_dir}}/product_specs.md (or {{specs_dir}}/product_specs/ directory)
- architecture ← Read {{specs_dir}}/architecture.md (or {{specs_dir}}/architecture/ directory)
- design_system ← Read {{specs_dir}}/design_system.md (if exists, optional)
Detection: Check for file first (takes precedence), then directory. When reading a directory, aggregate all .md files recursively.
| Agent | Skip when |
|---|---|
| design-task-alignment-checker | No design_system found AND no UI/frontend tasks in task list |
prd-task-alignment-checker and architecture-task-alignment-checker always run (their inputs are prerequisites).
Record skipped agents with verdict skipped.
Use the Agent tool to launch all 3 agents in parallel:
| Agent (subagent_type) | Context to Provide |
|---|---|
| groundwork:prd-task-alignment-checker | task_list, product_specs |
| groundwork:architecture-task-alignment-checker | task_list, architecture |
| groundwork:design-task-alignment-checker | task_list, design_system |
Each returns JSON:
{
"summary": "One-sentence assessment",
"score": 0-100,
"findings": [{"severity": "critical|major|minor", "category": "...", "task_reference": "TASK-NNN", "finding": "...", "recommendation": "..."}],
"verdict": "approve|request-changes"
}
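Each agent's JSON result maps onto one row of the report table below. A sketch of that mapping (the function name and dict-row shape are assumptions for illustration):

```python
from collections import Counter

def report_row(agent_name: str, result: dict) -> dict:
    """Turn one agent's JSON result into a validation-report row,
    counting findings by severity."""
    counts = Counter(f["severity"] for f in result["findings"])
    return {
        "agent": agent_name,
        "score": result["score"],
        "verdict": result["verdict"],
        "critical": counts["critical"],  # Counter returns 0 for absent keys
        "major": counts["major"],
        "minor": counts["minor"],
    }
```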
Present results in table format:
## Task List Validation Report
| Agent | Score | Verdict | Critical | Major | Minor |
|-------|-------|---------|----------|-------|-------|
| PRD Alignment | 92 | approve | 0 | 1 | 2 |
| Architecture Alignment | 88 | approve | 0 | 1 | 1 |
| Design Alignment | 85 | approve | 0 | 2 | 1 |
**Overall:** PASS / NEEDS FIXES
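The overall verdict reduces to a single check over the per-agent verdicts. A sketch, assuming agents that were not run carry the recorded verdict "skipped":

```python
def overall_status(results: list[dict]) -> str:
    """PASS only when every agent approves; a 'skipped' verdict
    (recorded for agents that were not run) does not block."""
    ok = all(r["verdict"] in ("approve", "skipped") for r in results)
    return "PASS" if ok else "NEEDS FIXES"
```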
Rule: Continue this loop until ALL agents return approve.
On any request-changes verdict:
Log Iteration
## Validation Iteration [N]
| Agent | Verdict | Findings |
|-------|---------|----------|
| ... | ... | ... |
Fixing [X] issues...
Fix Each Finding - Apply each critical/major recommendation
Fix types:
Re-run Agent Validation — Re-launch ONLY agents that returned request-changes. Agents that approved retain their verdict unless the fix changed content in their domain:
For agents NOT re-run, carry forward their previous approve verdict and score.
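The selective re-run and carry-forward logic can be sketched as two helpers. The `changed_domains` parameter is a hypothetical stand-in for "the fix touched content in this agent's domain":

```python
def agents_to_rerun(results: dict[str, dict], changed_domains: set[str]) -> list[str]:
    """Re-launch agents that requested changes, plus any approved agent
    whose input domain was modified by the fixes."""
    return [name for name, r in results.items()
            if r["verdict"] == "request-changes" or name in changed_domains]

def merge_results(previous: dict[str, dict], rerun: dict[str, dict]) -> dict[str, dict]:
    """Agents not re-run carry forward their prior verdict and score;
    re-run agents override their previous entry."""
    return {**previous, **rerun}
```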
Check Results
Track findings by key: [Agent]-[Category]-[TaskRef]
If same finding appears 3 times:
## Stuck - Need User Input
Issue persists after 3 attempts:
**[Agent] Finding description**
- Task: TASK-NNN
- Category: [category]
- Attempts:
1. [what was tried]
2. [what was tried]
3. [what was tried]
I need clarification: [specific question]
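The stuck detection above (keying findings by Agent-Category-TaskRef and escalating on the third repeat) can be sketched as a small counter. Class and method names are illustrative:

```python
from collections import defaultdict

class StuckDetector:
    """Count findings by (agent, category, task_ref); the third
    occurrence of the same key signals that user escalation is needed."""
    def __init__(self, limit: int = 3):
        self.counts: dict[tuple, int] = defaultdict(int)
        self.limit = limit

    def record(self, agent: str, category: str, task_ref: str) -> bool:
        key = (agent, category, task_ref)
        self.counts[key] += 1
        return self.counts[key] >= self.limit  # True means: stop and ask the user
```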
Also escalate when:
On PASS:
## Task List Validation PASSED
All 3 agents approved after [N] iteration(s).
| Agent | Score | Verdict | Summary |
|-------|-------|---------|---------|
| PRD Alignment | 95 | APPROVE | All requirements covered |
| Architecture Alignment | 92 | APPROVE | Tasks follow architecture |
| Design Alignment | 90 | APPROVE | UI tasks include a11y |
Issues fixed:
- [Iteration N] Agent: Description
Coverage Summary:
- PRD Requirements: [X]% covered
- Architecture Components: All referenced
- Design System: [Applied/N/A]
Minor suggestions (optional):
- ...
Return control to calling skill (tasks skill).
| Level | Action |
|---|---|
| critical | Must fix, loop continues |
| major | Must fix, loop continues |
| minor | Optional, does not block |
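The severity table reduces to a simple blocking filter; a sketch of how the loop might select which findings must be fixed:

```python
# Per the severity table: critical and major block, minor is advisory.
BLOCKING = {"critical": True, "major": True, "minor": False}

def blocking_findings(findings: list[dict]) -> list[dict]:
    """Findings that keep the loop running; minor findings never block."""
    return [f for f in findings if BLOCKING[f["severity"]]]
```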
Edge cases:
- No design system:
- Agent returns error:
- All agents approve immediately: