This skill should be used when executing work plans efficiently while maintaining quality and finishing features.
From soleur: `npx claudepluginhub jikig-ai/soleur --plugin soleur`

This skill uses the workspace's default tool permissions.
References:
- references/work-agent-teams.md
- references/work-lifecycle-parallel.md
- references/work-subagent-fanout.md

Execute a work plan efficiently while maintaining quality and finishing features.
This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on shipping complete features by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
If $ARGUMENTS contains --headless, set HEADLESS_MODE=true. Strip --headless from $ARGUMENTS before processing the remainder as a plan path. Pipeline mode (file path detection) already covers all prompt bypasses for work's own prompts — --headless is only needed for forwarding to child skills in Phase 4.
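The flag handling above can be sketched in shell. This is illustrative only; the function name is an assumption, and `HEADLESS_MODE` is the variable named in this document:

```shell
# Illustrative sketch: split --headless out of the arguments;
# the remainder is treated as the plan path / description.
parse_args() {
  HEADLESS_MODE=false
  REST=""
  for arg in "$@"; do
    if [ "$arg" = "--headless" ]; then
      HEADLESS_MODE=true
    else
      REST="$REST $arg"
    fi
  done
  REST=${REST# }  # drop the leading space
}
```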
<input_document> #$ARGUMENTS </input_document>
Load project conventions:
# Load project conventions
if [[ -f "CLAUDE.md" ]]; then
cat CLAUDE.md
fi
Clean up merged worktrees (silent, runs in background):
Navigate to the repository root, then run bash ./plugins/soleur/skills/git-worktree/scripts/worktree-manager.sh cleanup-merged. Report cleanup results: how many worktrees were cleaned up, which branches remain active.
Check for knowledge-base directory and load context:
Check if knowledge-base/ directory exists. If it does:
- Read CLAUDE.md if it exists and apply project conventions during implementation.
- If a "# Project Constitution" heading is NOT already in context, read knowledge-base/project/constitution.md and apply its principles during implementation. Skip if already loaded (e.g., from a preceding /soleur:plan).
- Run `git branch --show-current` to get the current branch name. If the branch follows the feat-<name> pattern, read knowledge-base/project/specs/feat-<name>/tasks.md if it exists and use it as a work checklist alongside TodoWrite.

If knowledge-base/ does NOT exist: proceed without this additional context.
Run these checks before proceeding to Phase 1. A FAIL blocks execution with a remediation message. A WARN displays and continues. If all checks pass, proceed silently.
Environment checks:
1. Run `git branch --show-current`. If the result is empty (detached HEAD), FAIL: "Detached HEAD state -- checkout a feature branch or create a worktree." If the result is the default branch (main or master), FAIL: "On default branch -- create a worktree before starting work. Run: bash ./plugins/soleur/skills/git-worktree/scripts/worktree-manager.sh feature <name>"
2. Run `pwd`. If the path does NOT contain .worktrees/, WARN: "Not in a worktree directory. You can create one via git-worktree skill in Phase 1."
3. Run `git status --short`. If output is non-empty, WARN: "Uncommitted changes detected. Consider committing or stashing before starting new work."
4. Run `git stash list`. If output is non-empty, WARN: "Stashed changes found. Review stash list to avoid forgotten work."

Scope checks:
5. If the input looks like a file path (ends in .md or starts with a path-like pattern), verify it exists and is readable. If not, FAIL: "Plan file not found at the specified path." If the input appears to be a text description rather than a file path, WARN: "Input appears to be a description, not a file path. Scope validation limited."
6. Run `git diff --name-only HEAD...origin/main` to identify files that diverged between this branch and main. If output is non-empty, WARN: "Branch has diverged from main in [N] files: [file list]. Consider merging main before starting." If the git command fails (e.g., offline, no remote), skip this check silently.
7. Check the plan for a ## Domain Review or ## UX Review heading (both are accepted for backward compatibility). If NEITHER heading is found: scan the plan content for UI file patterns (page.tsx, layout.tsx, template.tsx, .jsx, .vue, .svelte, .astro, +page.svelte, app/, pages/, components/, layouts/, routes/). If UI patterns are found, WARN: "Plan references UI files but has no Domain Review section. Consider running /soleur:plan to add domain review before implementing." If either heading IS present: pass silently.

Design artifact checks:
8. Run `git ls-files '*.pen' '*.fig' '*.sketch' | grep -i "<feature-name>"` and check knowledge-base/product/design/ for related files. If design artifacts exist AND the current tasks include UI/page implementation (patterns: .njk, .html, .tsx, .jsx, .vue, .svelte, pages/, components/, layouts/): store the artifact paths as DESIGN_ARTIFACTS for use in Phase 2.

Specialist review checks:
9. If a plan file was provided (check 5 passed) and a ## Domain Review section exists with a ### Product/UX Gate subsection: check whether domain leader assessments recommended specialists (copywriter, ux-design-lead, conversion-optimizer) that are NEITHER listed in **Agents invoked:** NOR in **Skipped specialists:**.
   - If the **Decision:** field says reviewed (partial), WARN: "Domain review was partial — some specialist agents failed. Review the Domain Review section before proceeding."
   - If any recommended specialist is missing from both fields:
     - Interactive mode: FAIL with a message listing the missing specialists and options: (a) "Run <specialist> now" — invoke the specialist agent directly, update the plan file's **Agents invoked:** field, then continue; (b) "Skip with justification" — prompt for a reason, add it to the plan file's **Skipped specialists:** field, then continue.
     - Pipeline mode (headless/one-shot): auto-invoke each missing specialist agent. If the agent succeeds, add it to **Agents invoked:**. If it fails, add it to **Skipped specialists:** with the note (auto-skipped — agent unavailable in pipeline) and WARN. Do not FAIL in pipeline mode.
   - If all recommended specialists are accounted for (in **Agents invoked:** or **Skipped specialists:**): pass silently.
UX artifact commit checkpoint (after each specialist in check 9): After each specialist agent completes successfully (interactive "Run specialist now" or pipeline auto-invoke), commit the output:
1. Run `git status --short` to discover new/modified files from the specialist.
2. `git add <discovered files>`
3. `git commit -m "wip: <specialist-name> artifacts for <feature-name>"`

Each specialist gets its own commit so partial progress is preserved if a later specialist fails. Do not commit on specialist failure.
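A minimal sketch of this checkpoint; the specialist and feature names are placeholders, and the function name is illustrative:

```shell
# Illustrative sketch: commit a specialist's output as its own WIP commit.
commit_specialist() {  # $1 = specialist name, $2 = feature name
  files=$(git status --short | awk '{print $NF}')
  [ -z "$files" ] && return 0   # specialist produced nothing; skip the commit
  git add $files                # unquoted on purpose: one path per word
  git commit -q -m "wip: $1 artifacts for $2"
}
```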
On FAIL: Display the failure message with remediation steps and stop. Do not proceed to Phase 1.
On WARN only: Display all warnings together and proceed to Phase 1.
On all pass: Proceed silently to Phase 1.
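The environment checks above can be sketched as a shell pre-flight. This is illustrative (messages abbreviated), not the canonical implementation:

```shell
# Illustrative pre-flight sketch: FAIL stops execution, WARN just prints.
preflight() {
  branch=$(git branch --show-current)
  if [ -z "$branch" ]; then
    echo "FAIL: Detached HEAD state -- checkout a feature branch or create a worktree."
    return 1
  fi
  case "$branch" in
    main|master)
      echo "FAIL: On default branch -- create a worktree before starting work."
      return 1 ;;
  esac
  case "$(pwd)" in
    *'.worktrees/'*) : ;;
    *) echo "WARN: Not in a worktree directory." ;;
  esac
  [ -n "$(git status --short)" ] && echo "WARN: Uncommitted changes detected."
  [ -n "$(git stash list)" ] && echo "WARN: Stashed changes found."
  return 0
}
```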
Pipeline detection: If $ARGUMENTS contains a file path (ends in .md or matches a path-like pattern), this skill is running in pipeline mode (invoked by one-shot or another orchestrator). In pipeline mode, skip all interactive approval gates and proceed directly. If $ARGUMENTS is empty or a plain text description, this is interactive mode — keep the approval gates below.
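The detection heuristic can be sketched as follows (function name illustrative):

```shell
# Illustrative sketch: classify the input as pipeline (file path) or interactive.
detect_mode() {
  case "$1" in
    "")   echo interactive ;;   # no input at all
    *.md) echo pipeline ;;      # markdown plan file
    */*)  echo pipeline ;;      # path-like pattern
    *)    echo interactive ;;   # plain-text description
  esac
}
```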
Read Plan and Clarify
Setup Environment
First, check the current branch by running git branch --show-current. Then determine the default branch by running git symbolic-ref refs/remotes/origin/HEAD and extracting the branch name. If that fails, check whether origin/main exists (fallback to master).
If already on a feature branch (not the default branch):
Ask: "Continue working on [current_branch], or create a new branch?"

If on the default branch, you MUST create a worktree before proceeding. Never edit files on the default branch -- parallel agents cause silent merge conflicts, and this repo uses core.bare=true where git pull and git checkout are unavailable.
Create a worktree for the new feature:
bash ./plugins/soleur/skills/git-worktree/scripts/worktree-manager.sh --yes create feature-branch-name
Then cd into the worktree path printed by the script. The worktree manager handles bare-repo detection, branch creation from latest origin/main, .env copying, and dependency installation.
Use a meaningful name based on the work (e.g., feat-user-authentication, fix-email-validation).
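The default-branch lookup described above, as a sketch (function name illustrative):

```shell
# Illustrative sketch: resolve the default branch, falling back when origin/HEAD is unset.
default_branch() {
  if ref=$(git symbolic-ref --quiet refs/remotes/origin/HEAD 2>/dev/null); then
    printf '%s\n' "${ref##*/}"   # e.g. refs/remotes/origin/main -> main
  elif git show-ref --verify --quiet refs/remotes/origin/main 2>/dev/null; then
    echo main
  else
    echo master
  fi
}
```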
Create Todo List (TDD-First Structure)
Structure tasks as RED/GREEN/REFACTOR units, not as "implement everything, then test":
- Give each GREEN task a blockedBy dependency (GREEN blocked by RED).

Anti-pattern to avoid: Creating a task list like [implement A, implement B, implement C, ..., write tests, lint]. This structure guarantees TDD violation because the agent executes tasks in order. The correct structure is [RED: test A, GREEN: implement A, RED: test B, GREEN: implement B, ..., lint].
Execution Mode Selection (HARD GATE — must complete before executing ANY task)
Do NOT execute any task before completing this analysis. Analyze independence first, select the execution tier, then begin. Starting sequential execution "because the first tasks feel simple" is a workflow violation — it forfeits parallelization savings on the remaining tasks.
Before starting the sequential task loop, check for parallelization opportunities:
Step 0: Tier 0 pre-check (Lifecycle Parallelism)
Read the plan. Apply a single judgment: "Does this plan have distinct code and test workstreams that can be assigned to separate agents with non-overlapping file scopes?"
Read plugins/soleur/skills/work/references/work-lifecycle-parallel.md now for the full Tier 0 protocol (offer/auto-select, generate contract, spawn 2 agents, collect/commit, test-fix-loop, docs). If Tier 0 executes, proceed directly to Phase 3 after completing Step 06 of the protocol. If declined, fall through to Step 1.
Step 1: Analyze independence
Read the TaskList. Identify tasks that have no blockedBy dependencies and reference different files or modules (no obvious file overlap). Count the independent tasks.
If fewer than 3 independent tasks exist, skip to Tier C: Sequential below.
If 3+ independent tasks exist, proceed through the tiers in order (A, then B, then C). Each tier either executes or falls through to the next.
Pipeline mode override: If running in pipeline mode (plan file argument detected in Phase 1), auto-select Tier 0 if eligible (Step 0 above). If Tier 0 is ineligible, skip Tier A entirely and auto-accept Tier B without prompting. Do not present "Run as Agent Team?" or "Run in parallel?" questions -- proceed directly to Step B2 of the Subagent Fan-Out protocol if 3+ independent tasks exist, otherwise fall through to Tier C.
Tier A: Agent Teams (highest capability, ~7x token cost)
Read plugins/soleur/skills/work/references/work-agent-teams.md now for the full Agent Teams protocol (offer, activate, spawn teammates, monitor/commit/shutdown). If declined or failed, fall through to Tier B.
Tier B: Subagent Fan-Out (fire-and-gather, moderate cost)
Read plugins/soleur/skills/work/references/work-subagent-fanout.md now for the full Subagent Fan-Out protocol (offer, group/spawn, collect/integrate). If declined, fall through to Tier C.
Tier C: Sequential (default)
Proceed to the task execution loop below.
Task Execution Loop
Design Artifact Gate (before first UI task): If DESIGN_ARTIFACTS was set in Phase 0.5, spawn the ux-design-lead agent with the artifact paths and ask it to produce an implementation brief (see ux-design-lead "Wireframe-to-Implementation Handoff" workflow). The brief is a structured description of every section, its content, and its layout — this becomes the binding input for all UI tasks. Do not write any markup until the brief is received.
UX artifact commit checkpoint (after Design Artifact Gate): After the implementation brief is received, commit before proceeding to UI tasks:
1. Run `git status --short` to discover the implementation brief and any generated design files.
2. `git add <discovered files>`
3. `git commit -m "wip: UX implementation brief for <feature-name>"`

This checkpoint ensures the implementation brief survives session crashes.
For each task in priority order:
while (tasks remain):
- Mark task as in_progress in TodoWrite
- Read any referenced files from the plan
- If task creates UI/pages: verify implementation brief exists (HARD GATE)
- TDD GATE: (see below)
- Look for similar patterns in codebase
- RED: Write failing test(s) for this task's acceptance criteria
- GREEN: Write minimum code to make the test(s) pass
- REFACTOR: Improve code while keeping tests green
- Run full test suite after changes
- Mark task as completed in TodoWrite
- Mark off the corresponding checkbox in the plan file ([ ] → [x])
- Evaluate for incremental commit (see below)
TDD Gate (HARD GATE): Before writing ANY implementation code for a task, determine if the task has testable behavior:
Skipping this gate — writing implementation before tests — is a workflow violation equivalent to committing directly to main. The rationalization "this is simple enough to not need test-first" is exactly the reasoning TDD is designed to prevent.
Test environment setup: If the project's test runner cannot run the type of test needed (e.g., React component tests require jsdom but vitest is configured for node), set up the test environment BEFORE starting the task. This is part of RED — the test infrastructure must exist for the test to fail properly.
IMPORTANT: Always update the original plan document by checking off completed items. Use the Edit tool to change - [ ] to - [x] for each task you finish. This keeps the plan as a living document showing progress and ensures no checkboxes are left unchecked.
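In practice the Edit tool performs this change; as a plain-shell equivalent (sketch only; `-i` as used here assumes GNU sed, and the task text must contain no regex metacharacters):

```shell
# Illustrative sketch: flip one checkbox in the plan file from open to done.
mark_done() {  # $1 = plan file, $2 = exact task text
  sed -i "s/- \[ \] $2/- [x] $2/" "$1"
}
```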
Incremental Commits
After completing each task, evaluate whether to create an incremental commit:
| Commit when... | Don't commit when... |
|---|---|
| Logical unit complete (model, service, component) | Small part of a larger unit |
| Tests pass + meaningful progress | Tests failing |
| About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
| About to attempt risky/uncertain changes | Would need a "WIP" commit message (exception: UX artifacts use wip: prefix) |
| UX specialist produces artifacts (wireframes, copy, brief) | Specialist is still generating (mid-output) |
| Domain leader review cycle completes (feedback applied) | Review feedback not yet incorporated |
| Brand guide alignment pass completes | Alignment still in progress |
Heuristic: "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."
UX artifact heuristic: "Did a specialist just produce or revise artifacts? If yes, commit with wip: UX <description> for feat-X. UX artifacts are high-effort and low-recoverability -- err on the side of committing too often rather than too rarely."
The wip: prefix is intentional -- UX artifacts are valuable at every revision stage, and WIP commits are squashed on merge with no impact on final git history. Do not run compound before UX WIP commits -- compound runs once in Phase 4.
Commit workflow:
# 1. Verify tests pass (use project's test command)
# Examples: bin/rails test, npm test, pytest, go test, etc.
# 2. Stage only files related to this logical unit (not `git add .`)
git add <files related to this logical unit>
# 3. Commit with conventional message
git commit -m "feat(scope): description of this unit"
Handling merge conflicts: If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.
Note: Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.
Follow Existing Patterns
Test Continuously
See the /atdd-developer skill for detailed TDD guidance.

Infrastructure Validation
When any task modifies files in apps/*/infra/, run these checks after each change (in addition to or instead of the app test suite):
cloud-init schema: For each modified cloud-init.yml:
cloud-init schema -c <file> -- validates YAML syntax AND cloud-init schema in one step. Warnings about missing datasource are expected; only non-zero exit codes are failures. If cloud-init is not installed locally, warn and continue.
Terraform format: For each infra directory with modified .tf files:
terraform fmt -check <dir> -- exit 0 means formatted; exit 3 means violations. Fix with terraform fmt <dir>.
Terraform validate: For each infra directory with modified .tf files:
terraform init -backend=false then terraform validate -- catches HCL syntax errors and undefined references without requiring provider credentials.
These checks replace the "tests may be skipped" exemption for infra files. If any check fails, fix before proceeding to the next task.
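The file-selection part of these checks can be sketched as follows; the command invocations themselves are as specified above, and the function name is illustrative:

```shell
# Illustrative sketch: from a list of changed files, pick the unique infra
# directories with modified .tf files (each then gets fmt + validate).
infra_tf_dirs() {
  printf '%s\n' "$@" | grep 'infra/.*\.tf$' | xargs -n1 dirname | sort -u
}
# For each resulting dir: terraform fmt -check "$dir",
# then terraform init -backend=false && terraform validate inside it.
```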
Track Progress
Trigger: This phase runs when the plan's deliverables are knowledge-base research artifacts (findings, analysis, audits, research briefs) that produce recommendations targeting other existing documents. Skip for code-only plans.
Detection: After Phase 2 completes, scan the outputs for recommendation patterns — "should rewrite," "needs updating," "add to," "change X in Y.md," or any finding that names a specific target file. If found, enter the loop.
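The pattern scan can be sketched as follows (patterns abridged from the list above; function name illustrative):

```shell
# Illustrative sketch: does a research artifact contain actionable recommendations?
has_recommendations() {
  grep -Eiq 'should rewrite|needs updating|add to|change .+ in .+\.md' "$1"
}
```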
The loop:
while (recommendations exist that haven't been applied):
1. CASCADE: Apply all recommendations to their target artifacts
- Rewrite questions in interview guides
- Update framings in brand guide
- Add alternatives to pricing strategy
- Any finding that names a file → edit that file
2. VALIDATE: Re-run the same research methodology against updated artifacts
- Use the same personas/parameters as the original run
- Produce a before/after comparison (original → current)
3. CHECK: Did the validation surface NEW weak spots or recommendations?
- If yes → apply fixes, loop back to step 2
- If no (at synthetic ceiling) → exit loop
4. UPDATE BRIEF: Update the research brief with final validated results
- Executive summary reflects current state, not original findings
- Recommendations marked as "Applied" with results
- Add Cascade Status section tracking all changes to all files
5. SUMMARIZE: Present founder summary
- Key findings table
- All files changed table (file, what changed, before/after metrics)
- Remaining limitations (structural, not fixable)
Exit condition: The loop exits when a validation round produces no new actionable recommendations — only structural limitations that can't be fixed by rewording (e.g., a persona's archetype inherently produces flat responses to a specific question type).
Max iterations: 3 rounds. If the third round still produces actionable recommendations, present them to the user rather than looping indefinitely. Synthetic-on-synthetic validation has diminishing returns.
Why this matters: Without this loop, research sprints produce findings that sit in briefs without updating the documents they target. The founder has to manually ask "was any action taken?" after each round. This loop makes cascade + validate + re-cascade automatic.
Run Core Quality Checks
Always run before submitting:
# Run full test suite (use project's test command)
# Examples: bin/rails test, npm test, pytest, go test, etc.
# Run linting (per CLAUDE.md)
# Use linting-agent before pushing to origin
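One way to sketch the test-command selection; the marker files and commands here are assumptions, not a fixed list, and CLAUDE.md remains the authority:

```shell
# Illustrative sketch: guess the project's test command from marker files.
test_command() {
  if   [ -f bin/rails ];    then echo "bin/rails test"
  elif [ -f package.json ]; then echo "npm test"
  elif [ -f go.mod ];       then echo "go test ./..."
  elif [ -f pyproject.toml ] || [ -f pytest.ini ]; then echo "pytest"
  else echo "unknown -- check CLAUDE.md"
  fi
}
```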
Consider Reviewer Agents (Optional)
Use for complex, risky, or large changes:
Run reviewers in parallel with Task tool:
Task(code-simplicity-reviewer): "Review changes for simplicity"
Task(kieran-rails-reviewer): "Check Rails conventions"
Present findings to user and address critical issues.
Final Validation
Implementation is complete. Before handing off, run the Playwright-first audit, then determine invocation mode.
Scan any "next steps", "setup instructions", or "to use this" text you are about to output. For each step that involves a browser action (account creation, credential generation, settings configuration, form submission, OAuth flow, portal navigation):
If you catch yourself writing phrases like "set up X in the browser", "go to the portal and...", or "manually configure..." — stop and attempt Playwright first. This audit is mandatory; skipping it is a deviation.
If invoked by one-shot (the conversation contains soleur:one-shot skill output earlier): Do not invoke ship, review, or compound — the orchestrator handles those. Output exactly ## Work Phase Complete (this is a continuation marker, NOT a turn-ending statement) and then immediately continue executing the next numbered step in the one-shot sequence (step 4: review). Do NOT end your turn after outputting this marker.
If invoked directly by the user (no one-shot orchestrator): Continue through the post-implementation pipeline automatically. Do NOT stop and wait — the earlier learning "Workflow Completion is Not Task Completion" applies. Run these steps in order, forwarding --headless if HEADLESS_MODE=true:
1. skill: soleur:review (or skill: soleur:review --headless if headless) — catch issues before shipping
2. skill: soleur:resolve-todo-parallel — resolve any review findings (no --headless needed; this skill has no interactive prompts)
3. skill: soleur:compound (or skill: soleur:compound --headless if headless) — capture learnings before committing
4. skill: soleur:ship (or skill: soleur:ship --headless if headless) — commit, push, create PR, merge

Always run skill: soleur:review after completing implementation and skill: soleur:compound before creating a PR.

Before entering Phase 4, verify these Phase 2-3 items are complete:
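The forwarding logic can be sketched as follows (skill names from this document; function name illustrative):

```shell
# Illustrative sketch: emit the post-implementation pipeline, forwarding
# --headless to every child skill except resolve-todo-parallel.
child_skills() {  # $1 = "true" when HEADLESS_MODE is on
  flag=""
  [ "$1" = true ] && flag=" --headless"
  echo "skill: soleur:review$flag"
  echo "skill: soleur:resolve-todo-parallel"  # no interactive prompts; flag not needed
  echo "skill: soleur:compound$flag"
  echo "skill: soleur:ship$flag"
}
```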
After Phase 4 handoff (one-shot only), the orchestrator handles: /review, /resolve-todo-parallel, /compound, /ship.
Reviewer agents: don't use them by default. Use reviewer agents only when:
For most features: tests + linting + following patterns is sufficient.