Executes implementation tasks from docs/plan/plan.md using TDD workflow, commits changes via git, verifies hooks, and updates progress. Use after /plan in build pipeline.
Install:

```bash
npx claudepluginhub fortunto2/solo-factory --plugin solo
```

This skill is limited to using the following tools:
This skill is self-contained — follow the task loop, TDD rules, and completion flow below instead of delegating to external build/execution skills (superpowers, etc.).
Execute tasks from an implementation plan. Finds plan.md (in docs/plan/), picks the next unchecked task, implements it with TDD workflow, commits, and updates progress.
```bash
git branch --show-current 2>/dev/null
git status --short 2>/dev/null | head -10
git log --oneline -3 2>/dev/null
```

Use after /plan has created a track with spec.md + plan.md. This is the execution engine.
Pipeline: /plan → /build → /deploy → /review
- `session_search(query)` — find how similar problems were solved before
- `project_code_search(query, project)` — find reusable code across projects
- `codegraph_query(query)` — check file dependencies, imports, callers

If MCP tools are not available, fall back to Glob + Grep + Read.
Detect context — find where plan files live:
- `docs/plan/*/plan.md` — standard location
- not `conductor/` or any other directory — only `docs/plan/`

Load workflow config from `docs/workflow.md` (if it exists):
- If `docs/workflow.md` is missing, use defaults (moderate TDD, conventional commits).

Verify git hooks are installed:
Read the stack YAML (templates/stacks/{stack}.yaml) — the pre_commit field tells you which system and what it runs:
- `husky` + `lint-staged` → JS/TS stacks (eslint + prettier + tsc)
- `pre-commit` → Python stacks (ruff + ruff-format + ty)
- `lefthook` → mobile stacks (swiftlint/detekt + formatter)

Then verify the hook system is active:
```bash
# husky
[ -f .husky/pre-commit ] && git config core.hooksPath | grep -q husky && echo "OK" || echo "NOT ACTIVE"

# pre-commit (Python)
[ -f .pre-commit-config.yaml ] && [ -f .git/hooks/pre-commit ] && echo "OK" || echo "NOT ACTIVE"

# lefthook
[ -f lefthook.yml ] && lefthook version >/dev/null 2>&1 && echo "OK" || echo "NOT ACTIVE"
```
If not active — install before first commit:
- husky: `pnpm prepare` (or `npm run prepare`)
- pre-commit: `uv run pre-commit install`
- lefthook: `lefthook install`

Don't use `--no-verify` on commits — if hooks fail, fix the issue and commit again.
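As a minimal sketch, the choice of installer can be keyed off the standard config filenames (the helper name and the scratch-directory demo are illustrative, not part of the skill):

```shell
# Detect which pre-commit hook system a repo uses, from its config files.
detect_hook_system() {
  if [ -f .husky/pre-commit ]; then
    echo "husky"        # JS/TS: install via `pnpm prepare`
  elif [ -f .pre-commit-config.yaml ]; then
    echo "pre-commit"   # Python: install via `uv run pre-commit install`
  elif [ -f lefthook.yml ]; then
    echo "lefthook"     # mobile: install via `lefthook install`
  else
    echo "none"
  fi
}

# Demo in a scratch directory with a Python-style config.
workdir=$(mktemp -d)
cd "$workdir"
touch .pre-commit-config.yaml
result=$(detect_hook_system)   # -> "pre-commit"
```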
- If `$ARGUMENTS` contains a track ID: verify `{plan_root}/{argument}/plan.md` exists (check `docs/plan/`). If not found, scan `docs/plan/*/plan.md` for partial matches and suggest corrections.
- If `$ARGUMENTS` contains `--task X.Y`: locate the task across the `plan.md` files in `docs/plan/`.
- If no arguments: scan every `plan.md`, find tracks with uncompleted tasks. If none exist, say "Run /plan first."

Then orient with the code graph:

`codegraph_explain(project="{project name}")`
Returns: stack, languages, directory layers, key patterns, top dependencies, hub files.
codegraph_repomap(project="{project name}")
Returns: a YAML map of the top files and their exported classes/functions. Use this to understand the global structure.
- `docs/plan/{trackId}/plan.md` — task list (REQUIRED). Read the ## Context Handoff section first — it has a compact summary of intent, key files, decisions, and risks. This is your primary orientation.
- `docs/plan/{trackId}/spec.md` — acceptance criteria (REQUIRED)
- `docs/workflow.md` — TDD policy, commit strategy (if exists)
- `CLAUDE.md` — architecture, Do/Don't
- `.solo/pipelines/progress.md` — running docs from previous iterations (if exists, pipeline-specific). Contains what was done in prior pipeline sessions: stages completed, commit SHAs, last output lines. Use this to avoid repeating completed work.

Do NOT read source code files at this stage. Only docs. Source files are loaded per-task in the execution loop (step 3 below).
If a task is marked [~] in plan.md:
Resuming: {track title}
Last task: Task {X.Y}: {description} [in progress]
1. Continue from where we left off
2. Restart current task
3. Show progress summary first
Ask via AskUserQuestion, then proceed.
Read references/context-engineering.md for full rules on observation masking, attention positioning, and plan recitation. Key points:
- Write bulky output to `scratch/`, keep a 5-10 line summary in context.

Makefile convention: If a Makefile exists in the project root, always prefer make targets over raw commands. Use `make test` instead of `pnpm test`, `make lint` instead of `pnpm lint`, `make build` instead of `pnpm build`, etc. Run `make help` (or read the Makefile) to discover available targets. If a `make integration` or similar target exists, use it for integration testing after pipeline-related tasks.
IMPORTANT — All-done check: Before entering the loop, scan plan.md for ANY - [ ] or - [~] tasks. If ALL tasks are [x] — skip the loop entirely and jump to Completion section below to run final verification and output <solo:done/>.
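The all-done scan is a one-line grep; a sketch with hypothetical plan.md contents:

```shell
# If plan.md has no unchecked ([ ]) or in-progress ([~]) tasks, skip the loop.
workdir=$(mktemp -d)
cat > "$workdir/plan.md" <<'EOF'
- [x] Task 1.1: scaffold project <!-- sha:abc1234 -->
- [x] Task 1.2: add tests <!-- sha:def5678 -->
EOF

if grep -qE '^- \[( |~)\] ' "$workdir/plan.md"; then
  status="tasks remaining"
else
  status="all done"   # jump straight to Completion and emit the done signal
fi
```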
For each incomplete task in plan.md (marked [ ]), in order:
Parse plan.md for the first line matching `- [ ] Task X.Y:` (or `- [~] Task X.Y:` if resuming).
Mark it in progress: `[ ]` → `[~]`. Do NOT grep the entire project or read all source files. Load only what this specific task needs.
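The parse-and-mark step can be sketched as follows (plan.md contents are hypothetical; the `0,/re/` address that limits the substitution to the first match is GNU sed):

```shell
# Find the first unchecked task and mark it in progress ([ ] -> [~]).
workdir=$(mktemp -d)
cat > "$workdir/plan.md" <<'EOF'
- [x] Task 1.1: scaffold project <!-- sha:abc1234 -->
- [ ] Task 1.2: add login endpoint
- [ ] Task 1.3: add logout endpoint
EOF

# First line matching an unchecked task.
next_task=$(grep -m1 '^- \[ \] Task' "$workdir/plan.md")

# Flip only that occurrence to in-progress (GNU sed: 0,/re/ = first match only).
sed -i '0,/^- \[ \] Task/s//- [~] Task/' "$workdir/plan.md"
```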
If MCP available (preferred):
- `project_code_search(query="{task keywords}", project="{name}")` — find relevant code in the project. Read only the top 2-3 results.
- `session_search("{task keywords}")` — check if you solved this before.
- `codegraph_query("MATCH (f:File {project: '{name}'})-[:IMPORTS]->(dep) WHERE f.path CONTAINS '{module}' RETURN dep.path")` — check imports/dependencies of files you'll modify.

If MCP unavailable (fallback):
- Glob/Grep scoped to the task's module (e.g. `src/auth/**/*.ts`), not the entire project.
- Search within `src/` or `app/` — never `**/*`.

Never do: `Grep "keyword" .` across the whole project. This dumps hundreds of lines into context for no reason. Be surgical.
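Scoped searching, sketched with a hypothetical project layout:

```shell
# Surgical search: scope to the module the task touches, never the whole tree.
workdir=$(mktemp -d)
mkdir -p "$workdir/src/auth" "$workdir/src/billing"
echo 'export function login() {}'  > "$workdir/src/auth/login.ts"
echo 'export function charge() {}' > "$workdir/src/billing/charge.ts"

# Only src/auth, only .ts files: a handful of hits instead of hundreds.
hits=$(grep -rn 'login' --include='*.ts' "$workdir/src/auth" | wc -l)
```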
Read references/quality-tools.md for full commands per stack. Quick reference:
| Stack | Lint | Format | Type-check | Pre-commit |
|---|---|---|---|---|
| Python | uv run ruff check --fix . | uv run ruff format . | uv run ty check . | uv run pre-commit run --all-files |
| JS/TS | pnpm lint --fix | pnpm format | pnpm tsc --noEmit | husky + lint-staged |
| iOS | swiftlint lint --strict | swift-format | — | lefthook |
| Android | ./gradlew detekt | ./gradlew ktlintCheck | — | lefthook |
Run after each task implementation, before git commit. If any fail, fix before proceeding.
Red — write failing test:
Green — implement:
Refactor:
If the task touches core business logic (pipeline, algorithms, agent tools), run make integration (or the integration command from docs/workflow.md). The CLI exercises the same code paths as the UI without requiring a browser. If make integration fails, fix before committing.
After implementation, run a quick visual smoke test if tools are available:
Web projects: If Playwright MCP tools or browser tools are available:
- Start the dev server (command from the stack YAML's `dev_server.command`)

iOS projects (simulator): If instructed to use the iOS Simulator in the pipeline prompt:
```bash
xcodebuild -scheme {Name} -sdk iphonesimulator build
xcrun simctl install booted {app-path}
xcrun simctl io booted screenshot /tmp/sim-screenshot.png
xcrun simctl spawn booted log stream --style compact --timeout 10
```

Android projects (emulator): If instructed to use the Android Emulator in the pipeline prompt:
```bash
./gradlew assembleDebug
adb install -r app/build/outputs/apk/debug/app-debug.apk
adb exec-out screencap -p > /tmp/emu-screenshot.png
adb logcat '*:E' --format=time -d 2>&1 | tail -20
```

Graceful degradation: If browser/simulator/emulator tools are not available or fail — skip visual checks entirely. Visual testing is a bonus, never a blocker. Log that it was skipped and continue with the task.
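The never-a-blocker rule can be sketched as a small wrapper (the helper name and placeholder tool are illustrative):

```shell
# Run a visual check only if its tool exists; failure or absence never blocks.
run_visual_check() {
  tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" "$@" || echo "visual check failed (non-blocking), continuing"
  else
    echo "skipped: $tool not available"
  fi
}

# With the tool missing, the task proceeds and the skip is logged.
result=$(run_visual_check hypothetical-screenshot-tool /tmp/shot.png)
```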
Commit (following commit strategy):
```bash
git add {specific files changed}
git commit -m "<type>(<scope>): <description>"
```
Types: feat, fix, refactor, test, docs, chore, perf, style
Capture SHA after commit:
```bash
git rev-parse --short HEAD
```
SHA annotation in plan.md. After every task commit:
Flip `[~]` → `[x]` and append the SHA comment: `- [x] Task X.Y: description <!-- sha:abc1234 -->`. Without a SHA, there's no traceability and no revert capability. If a task required multiple commits, record the last one.
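The done-plus-SHA update, sketched with a hypothetical task line (GNU sed in-place edit):

```shell
# Mark a committed task done and append its short SHA as an HTML comment.
workdir=$(mktemp -d)
cat > "$workdir/plan.md" <<'EOF'
- [~] Task 2.3: wire up auth middleware
EOF
sha="abc1234"   # in practice: sha=$(git rev-parse --short HEAD)

# [~] -> [x], keeping the task text, then append the sha comment.
sed -i "s|^- \[~\] \(Task 2.3: .*\)\$|- [x] \1 <!-- sha:$sha -->|" "$workdir/plan.md"
```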
After each task, check if all tasks in current phase are [x].
If phase complete:
- Audit the `[x]` tasks in this phase. If any are missing `<!-- sha:... -->`, capture their SHA now from `git log` and add it. Every `[x]` task MUST have a SHA.
- Run the `### Verification` checks for the phase. Flip `- [ ]` → `- [x]`.
- Commit the checkpoint: `git commit -m "chore(plan): complete phase {N}"`.
- Annotate the phase heading: `## Phase N: Title <!-- checkpoint:abc1234 -->`.
- Report the phase summary:

Phase {N} complete! <!-- checkpoint:abc1234 -->
Tasks: {M}/{M}
Tests: {pass/fail}
Linter: {pass/fail}
Verification:
- [x] {check 1}
- [x] {check 2}
Revert this phase: git revert abc1234..HEAD
Proceed to the next phase automatically. No approval needed.
Tests failing after Task X.Y:
{failure details}
1. Attempt to fix
2. Rollback task changes (git checkout)
3. Pause for manual intervention
Ask via AskUserQuestion. Do NOT automatically continue past failures.
When all phases and tasks are [x]:
Run the final build for the stack:

- JS/TS: `pnpm build`
- Python: `uv build` or `uv run python -m py_compile src/**/*.py`
- iOS: `xcodebuild -scheme {Name} -sdk iphonesimulator build`
- Android: `./gradlew assembleDebug`

Then verify every acceptance criterion in spec.md:

- For each `- [ ]` criterion: verify it's met (search code, run command if specified).
- If a criterion names a runnable check (`make task`, `cargo test`, benchmark, passes N/N) → RUN IT. Do NOT skip as "unverifiable".
- Flip `- [ ]` → `- [x]` for verified criteria.
- Commit: `git add docs/plan/*/spec.md && git commit -m "docs: update spec checkboxes (verified by build)"`

If the same build/test error occurs 3+ times in a row, STOP retrying. Instead:
- Try an environment-level fix (`cargo clean`, `rm -rf node_modules`, disk space check).
- Or output `<solo:redo/>` to let review handle it.
Do NOT burn iterations on identical failures.

Change `**Status:** [ ] Not Started` → `**Status:** [x] Complete` at the top of plan.md.
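The status flip at the top of plan.md, as a sketch (GNU sed; file contents hypothetical):

```shell
# Flip the track status header once every task and phase is [x].
workdir=$(mktemp -d)
cat > "$workdir/plan.md" <<'EOF'
**Status:** [ ] Not Started
- [x] Task 1.1: scaffold project <!-- sha:abc1234 -->
EOF

sed -i 's/^\*\*Status:\*\* \[ \] Not Started$/**Status:** [x] Complete/' "$workdir/plan.md"
```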
Output pipeline signal ONLY if pipeline state directory (.solo/states/) exists:
<solo:done/>
Do NOT repeat the signal tag elsewhere in the response. One occurrence only.
Track complete: {title} ({trackId})
Phases: {N}/{N}
Tasks: {M}/{M}
Tests: All passing
Phase checkpoints:
Phase 1: abc1234
Phase 2: def5678
Phase 3: ghi9012
Revert entire track: git revert abc1234..HEAD
Next:
/build {next-track-id} — continue with next track
/plan "next feature" — plan something new
SHA comments in plan.md enable surgical reverts:
Revert a single task:
```bash
# Find SHA from plan.md: - [x] Task 2.3: ... <!-- sha:abc1234 -->
git revert abc1234
```
Then update plan.md: [x] → [ ] for that task.
Revert an entire phase:
```bash
# Find checkpoint from phase heading: ## Phase 2: ... <!-- checkpoint:def5678 -->
# Find previous checkpoint: ## Phase 1: ... <!-- checkpoint:abc1234 -->
git revert abc1234..def5678
```
Then update plan.md: all tasks in that phase [x] → [ ].
Never use git reset --hard — always git revert to preserve history.
At the start of a build session, create a task list from plan.md so progress is visible:
- Include all incomplete tasks (`[ ]` and `[~]`).
- Mark a todo `in_progress` when starting its task, `completed` when done.

These thoughts mean STOP — you're about to cut corners:
| Thought | Reality |
|---|---|
| "This is too simple to test" | Simple code breaks too. Write the test. |
| "I'll add tests later" | Tests written after pass immediately — they prove nothing. |
| "I already tested it manually" | Manual tests don't persist. Automated tests do. |
| "The test framework isn't set up" | Set it up. That's part of the task. |
| "This is just a config change" | Config changes break builds. Verify. |
| "I'm confident this works" | Confidence without evidence is guessing. Run the command. |
| "Let me just try changing X" | Stop. Investigate root cause first. |
| "Tests are passing, ship it" | Tests passing ≠ acceptance criteria met. Check spec.md. |
| "I'll fix the lint later" | Fix it now. Tech debt compounds. |
| "It works on my machine" | Run the build. Verify in the actual environment. |
Foundational principle: If you wouldn't mass-delete production data without checking, don't skip a test without evidence it's unnecessary.
Truncate test-runner output: pipe through `head -50` or use a `--reporter=dot` / `-q` flag. Thousands of test lines pollute context. Only show failures in detail. If output is large, use observation masking (write to `scratch/`, keep a summary).

Cause: No plan.md exists in docs/plan/.
Fix: Run /plan "your feature" first to create a track.
Cause: Implementation broke existing functionality.
Fix: Use the error handling flow — attempt fix, rollback if needed, pause for user input. Never skip failing tests.
Cause: Tests or linter failed at a phase boundary.
Fix: Resolve failures before proceeding. Re-run verification for that phase.