From the solo plugin.
Creates specs and phased, file-level implementation plans for features, bug fixes, and refactors by researching the codebase with search, graph queries, and context files.
npx claudepluginhub fortunto2/solo-factory --plugin solo

This skill is limited to using the following tools:
This skill is self-contained — follow the steps below instead of delegating to external planning skills (superpowers, etc.).
Research the codebase and create a spec + phased implementation plan. Zero interactive questions — explores the code instead.
Creates a track for any feature, bug fix, or refactor with a concrete, file-level implementation plan. Works with or without /setup.
- session_search(query) — find similar past work in Claude Code chat history
- project_code_search(query, project) — find reusable code across projects
- codegraph_query(query) — check dependencies of affected files
- codegraph_explain(project) — architecture overview: stack, languages, directory layers, key patterns, top dependencies, hub files
- kb_search(query) — search knowledge base for relevant methodology

If MCP tools are not available, fall back to Glob + Grep + Read.
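When MCP is unavailable, the Glob + Grep fallback can be approximated from the shell. A minimal sketch; the helper names are illustrative, not part of the skill:

```shell
# Minimal fallback when MCP tools are unavailable: Glob + Grep from the shell.
# glob_files <dir> <name-glob> — stand-in for the Glob tool.
glob_files() { find "$1" -type f -name "$2" 2>/dev/null; }

# grep_files <dir> <pattern> — stand-in for the Grep tool (lists matching files).
grep_files() { grep -rIl -e "$2" "$1" 2>/dev/null || true; }
```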
Parse task description from $ARGUMENTS.
Detect context — determine where plan files should be stored:
Project context (normal project with code):
- Trigger: package.json, pyproject.toml, Cargo.toml, *.xcodeproj, or build.gradle.kts exists in the working directory
- Plan root: docs/plan/{trackId}/

Knowledge base context (documentation-centric project):
- Trigger: docs/, notes/, or structured numbered directories exist
- Plan root: docs/plan/{shortname}/

Set $PLAN_ROOT based on the detected context. All subsequent file paths use $PLAN_ROOT.
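A minimal sketch of this detection logic, assuming the manifest and directory names above (*.xcodeproj omitted for brevity):

```shell
# Sketch: choose $PLAN_ROOT from the detected context.
# detect_plan_root <dir> <shortname> <trackId>
detect_plan_root() {
  dir="$1"; shortname="$2"; track_id="$3"
  # Project context: a code manifest exists in the working directory.
  for manifest in package.json pyproject.toml Cargo.toml build.gradle.kts; do
    [ -e "$dir/$manifest" ] && { echo "$dir/docs/plan/$track_id"; return; }
  done
  # Knowledge-base context: docs/ or notes/ directories exist.
  if [ -d "$dir/docs" ] || [ -d "$dir/notes" ]; then
    echo "$dir/docs/plan/$shortname"
  else
    echo "$dir/docs/plan/$track_id"   # default to the project layout
  fi
}
```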
Load project context (parallel reads):
- CLAUDE.md — architecture, constraints, Do/Don't
- docs/prd.md — what the product does (if exists)
- docs/workflow.md — TDD policy, commit strategy (if exists)
- package.json or pyproject.toml — stack, versions, deps

Auto-classify track type from keywords in task description:
bug, refactor, chore, or feature.

Research phase — explore the codebase to understand what needs to change:
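The keyword classification in the step above can be sketched as follows; the keyword lists are an assumption, not the skill's exact rules:

```shell
# Sketch: classify the track type from task-description keywords.
# Keyword lists are illustrative; first match wins.
classify_track() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    *bug*|*fix*|*broken*|*crash*)      echo bug ;;
    *refactor*|*cleanup*|*extract*)    echo refactor ;;
    *chore*|*bump*|*upgrade*|*rename*) echo chore ;;
    *)                                 echo feature ;;
  esac
}
```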
a. Get architecture overview (if MCP available — do this FIRST):
codegraph_explain(project="{project name from CLAUDE.md or directory name}")
Gives you: stack, languages, directory layers, key patterns, top dependencies, hub files.
b. Get RepoMap (if MCP available):
codegraph_repomap(project="{project name from CLAUDE.md or directory name}")
Gives you a YAML map of the most important files and their exported symbols (classes/functions).
c. Find relevant files — Glob + Grep for patterns related to the task:
d. Precedent retrieval (context graph pattern — search past solutions BEFORE planning):
session_search(query="{task description keywords}")
Look for: how similar tasks were solved, what went wrong, what patterns worked.

kb_search(query="{task type}: {keywords}")

Check for: harness patterns, architectural constraints, quality scores.

e. Search code across projects (if MCP available):
project_code_search(query="{relevant pattern}")
f. Check dependencies of affected files (if MCP available):
codegraph_query(query="MATCH (f:File {path: '{file}'})-[:IMPORTS]->(dep) RETURN dep.path")
g. Read existing tests in the affected area — understand testing patterns used.
h. Read CLAUDE.md architecture constraints — understand boundaries and conventions.
Also read docs/ARCHITECTURE.md and docs/QUALITY_SCORE.md if they exist.

i. Detect deploy infrastructure — search for deploy scripts/configs so the plan can include a deploy phase:
find . -maxdepth 3 \( -name 'deploy.sh' -o -name 'Dockerfile' -o -name 'docker-compose.yml' -o -name 'wrangler.toml' -o -name 'sst.config.ts' \) -type f 2>/dev/null
If found, read them to understand deploy targets. Include a deploy phase in the plan with concrete commands.
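One way to map a found config to a deploy target; the target labels are illustrative, and the actual configs still need to be read for concrete commands:

```shell
# Sketch: map whichever deploy config is present to a target label.
deploy_target() {
  dir="$1"
  if   [ -f "$dir/wrangler.toml" ];      then echo cloudflare
  elif [ -f "$dir/sst.config.ts" ];      then echo sst
  elif [ -f "$dir/docker-compose.yml" ]; then echo docker-compose
  elif [ -f "$dir/Dockerfile" ];         then echo docker
  elif [ -f "$dir/deploy.sh" ];          then echo script
  else echo none
  fi
}
```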
Detect overlapping plans — before creating a new track, check for existing plans that cover similar scope:
ls docs/plan/*/plan.md docs/plan/*/spec.md 2>/dev/null
For each existing plan found:
- Read its spec.md Summary and Acceptance Criteria.

If overlap detected:
- Prior track still active ([ ] tasks remain): recommend extending it instead of creating a new track. Show the user: "Existing track {trackId} covers similar scope ({overlap description}). Extend it or create a separate track?"
- Prior track complete ([x] all tasks): proceed with the new track, but reference the prior track in spec.md Dependencies.

If no overlap: proceed normally.
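The overlap scan can be approximated by grepping existing spec files for keywords from the task description; the helper name is hypothetical:

```shell
# Sketch: list existing specs whose text mentions a task keyword.
# overlapping_specs <keyword> <plan-root>
overlapping_specs() {
  grep -il -e "$1" "$2"/*/spec.md 2>/dev/null || true
}
```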
Generate track ID:
{shortname}_{YYYYMMDD} (e.g., user-auth_20260209).

Create track directory:
mkdir -p $PLAN_ROOT
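The two steps above can be sketched as follows; "user-auth" stands in for the real shortname derived from the task:

```shell
# Sketch: derive the track ID and create the plan directory.
shortname="user-auth"                     # placeholder slug from the task description
track_id="${shortname}_$(date +%Y%m%d)"   # e.g. user-auth_20260209
PLAN_ROOT="docs/plan/$track_id"
mkdir -p "$PLAN_ROOT"
```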
- Project context: docs/plan/{trackId}/
- Knowledge base context: docs/plan/{shortname}/

Generate $PLAN_ROOT/spec.md:
Based on research findings, NOT generic questions.
# Specification: {Title}
**Track ID:** {trackId}
**Type:** {Feature|Bug|Refactor|Chore}
**Created:** {YYYY-MM-DD}
**Status:** Draft
## Summary
{1-2 paragraph description based on research}
## Acceptance Criteria
- [ ] {concrete, testable criterion}
- [ ] {concrete, testable criterion}
{3-8 criteria based on research findings}
## Dependencies
- {external deps, packages, other tracks}
## Out of Scope
- {what this track does NOT cover}
## Technical Notes
- {architecture decisions from research}
- {relevant patterns found in codebase}
- {reusable code from other projects}
Generate $PLAN_ROOT/plan.md:
Concrete, file-level plan from research. Keep it tight: 2-4 phases, 5-15 tasks total.
Critical format rules (parsed by /build):
- Phase headings: `## Phase N: Name`
- Tasks: `- [ ] Task N.Y: Description` (with a period or detailed text)
- Subtasks (indented): `- [ ] Subtask description`
- Status markers: `[ ]` (unchecked), `[~]` (in progress), `[x]` (done)

# Implementation Plan: {Title}
**Track ID:** {trackId}
**Spec:** [spec.md](./spec.md)
**Created:** {YYYY-MM-DD}
**Status:** [ ] Not Started
## Overview
{1-2 sentences on approach}
## Phase 1: {Name}
{brief description of phase goal}
### Tasks
- [ ] Task 1.1: {description with concrete file paths}
- [ ] Task 1.2: {description}
### Verification
- [ ] {what to check after this phase}
## Phase 2: {Name}
### Tasks
- [ ] Task 2.1: {description}
- [ ] Task 2.2: {description}
### Verification
- [ ] {verification steps}
{2-4 phases total}
## Phase {N-1}: Deploy (if deploy infrastructure exists)
_Include this phase ONLY if the project has deploy scripts/configs (deploy.sh, Dockerfile, docker-compose.yml, wrangler.toml, sst.config.ts, vercel.json). Skip if no deploy infra found._
### Tasks
- [ ] Task {N-1}.1: {concrete deploy step — e.g. "Run python/deploy.sh to push Docker image to VPS", "wrangler deploy", etc.}
- [ ] Task {N-1}.2: Verify deployment — health check, logs, HTTP status
### Verification
- [ ] Service is live and healthy
- [ ] No runtime errors in production logs
## Phase {N}: Docs & Cleanup
### Tasks
- [ ] Task {N}.1: Update CLAUDE.md with any new commands, architecture changes, or key files
- [ ] Task {N}.2: Update README.md if public API or setup steps changed
- [ ] Task {N}.3: Remove dead code — unused imports, orphaned files, stale exports
### Verification
- [ ] CLAUDE.md reflects current project state
- [ ] Linter clean, tests pass
## Final Verification
- [ ] All acceptance criteria from spec met
- [ ] Tests pass
- [ ] Linter clean
- [ ] Build succeeds
- [ ] Documentation up to date
## Context Handoff
_Summary for /build to load at session start — keeps context compact._
### Session Intent
{1 sentence: what this track accomplishes}
### Key Files
{list of files that will be modified, from research}
### Decisions Made
{key architecture decisions from research phase — why X over Y}
### Risks
{known risks or edge cases discovered during research}
---
_Generated by /plan. Tasks marked [~] in progress and [x] complete by /build._
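Since /build parses the `## Phase N:` and `- [ ] Task N.Y:` markers, a quick format self-check over the generated plan could look like this (a sketch, not part of the skill):

```shell
# Sketch: sanity-check that plan.md uses the markers /build parses.
check_plan_format() {
  grep -Eq '^## Phase [0-9]+:' "$1" &&
  grep -Eq '^- \[ \] Task [0-9]+\.[0-9]+:' "$1"
}
```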
Plan quality rules:
Create progress task list for pipeline visibility:
After writing plan.md, create TaskCreate entries so progress is trackable:
/build will update these tasks as it works through them.

If the superpowers:writing-plans skill is available, follow its granularity format: bite-sized tasks (2-5 minutes each), complete code in task descriptions, exact file paths, verification steps per task. This enhances the built-in format above.
To proceed: run /build. If "Edit plan": tell the user to edit $PLAN_ROOT/plan.md manually, then run /build.
Track created: {trackId}
Type: {Feature|Bug|Refactor|Chore}
Phases: {N}
Tasks: {N}
Spec: $PLAN_ROOT/spec.md
Plan: $PLAN_ROOT/plan.md
Research findings:
- {key finding 1}
- {key finding 2}
- {reusable code found, if any}
Next: /build {trackId}
These thoughts mean STOP — you're skipping research:
| Thought | Reality |
|---|---|
| "I know this codebase" | You know what you've seen. Search for what you haven't. |
| "The plan is obvious" | Obvious plans miss edge cases. Research first. |
| "Let me just start coding" | 10 minutes of research prevents 2 hours of rework. |
| "This is a small feature" | Small features touch many files. Map the blast radius. |
| "I'll figure it out as I go" | That's not a plan. Write the file paths first. |
| "70 tasks should cover it" | 5-15 tasks. If you need more, split into tracks. |
- /build parses: `## Phase N:`, `- [ ] Task N.Y:`.
- /build reads docs/workflow.md for TDD policy and commit strategy (if it exists).
- If docs/workflow.md is missing, /build uses sensible defaults (moderate TDD, conventional commits).

Cause: Feature scope too broad or tasks not atomic enough.
Fix: Target 5-15 tasks across 2-4 phases. Split large features into multiple tracks.
Cause: Directory has both code manifests and KB-style directories.
Fix: Project context takes priority if package.json/pyproject.toml exists.
Cause: New project with minimal codebase or MCP tools unavailable.
Fix: Skill falls back to Glob + Grep. For new projects, the plan will rely more on CLAUDE.md architecture and stack conventions.