This skill should be used when transforming feature descriptions into well-structured project plans that follow project conventions.
References:
- references/plan-community-discovery.md
- references/plan-functional-overlap.md
- references/plan-issue-templates.md

Note: The current year is 2026. Use this when dating plans and searching for recent documentation.
Transform feature descriptions, bug reports, or improvement ideas into well-structured markdown plan files that follow project conventions and best practices. This command provides flexible detail levels to match your needs.
<feature_description> #$ARGUMENTS </feature_description>
If the feature description above is empty, ask the user: "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
Do not proceed until you have a clear feature description from the user.
Load project conventions:
```bash
# Load project conventions
if [[ -f "CLAUDE.md" ]]; then
  cat CLAUDE.md
fi
```
Branch safety check (defense-in-depth): Run git branch --show-current. If the result is main or master, abort immediately with: "Error: plan cannot run on main/master. Checkout a feature branch first." This check fires in all modes as defense-in-depth alongside PreToolUse hooks -- it fires even if hooks are unavailable (e.g., in CI).
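The check described above can be sketched as a small guard (the function form and name are illustrative, not part of the skill contract):

```bash
# Guard: refuse to plan on main/master (sketch of the check described above)
check_branch() {
  local branch="$1"
  if [[ "$branch" == "main" || "$branch" == "master" ]]; then
    echo "Error: plan cannot run on main/master. Checkout a feature branch first." >&2
    return 1
  fi
}

# Typical call site:
# check_branch "$(git branch --show-current)" || exit 1
```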
Check for knowledge-base/ directory and load context:

If knowledge-base/ exists:
- Read CLAUDE.md if it exists - apply project conventions during planning
- If a # Project Constitution heading is NOT already in context, read knowledge-base/project/constitution.md - use principles to guide planning decisions. Skip if already loaded (e.g., from a preceding /soleur:brainstorm).
- Run git branch --show-current to get the current branch name (feat-<name> pattern)
- Read knowledge-base/project/specs/<branch-name>/spec.md if it exists - use as planning input

If knowledge-base/ does NOT exist, skip this step and continue with the feature description alone.
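The context-loading flow can be sketched as follows (paths follow the conventions above; the function wrapper is illustrative):

```bash
# Load planning context when the knowledge base is present (illustrative sketch)
load_plan_context() {
  local branch="$1"
  [[ -f "CLAUDE.md" ]] && cat CLAUDE.md
  if [[ -d "knowledge-base" ]]; then
    # Constitution principles guide planning decisions
    [[ -f "knowledge-base/project/constitution.md" ]] && cat knowledge-base/project/constitution.md
    # Branch-scoped spec, if one exists, is used as planning input
    local spec="knowledge-base/project/specs/${branch}/spec.md"
    [[ -f "$spec" ]] && cat "$spec"
  fi
  return 0
}

# Typical call:
# load_plan_context "$(git branch --show-current)"
```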
Check for brainstorm output first:
Before asking questions, look for recent brainstorm documents in knowledge-base/project/brainstorms/ that match this feature:
```bash
ls -la knowledge-base/project/brainstorms/*.md 2>/dev/null | head -10
```
Relevance criteria: A brainstorm is relevant if:
If a relevant brainstorm exists:
If multiple brainstorms could match: Use AskUserQuestion tool to ask which brainstorm to use, or whether to proceed without one.
If no brainstorm found (or not relevant), run idea refinement:
Refine the idea through collaborative dialogue using the AskUserQuestion tool:
Gather signals for research decision. During refinement, note:
Skip option: If the feature description is already detailed, offer: "Your description is clear. Should I proceed with research, or would you like to refine it further?"
Run these agents in parallel to gather local context:
What to look for:
- Entries in knowledge-base/project/learnings/ that might apply (gotchas, patterns, lessons learned)

These findings inform the next step.
Read plugins/soleur/skills/plan/references/plan-community-discovery.md now for the full community discovery procedure (stack detection, coverage gap check, agent-finder). Skip if no uncovered stacks detected.
Read plugins/soleur/skills/plan/references/plan-functional-overlap.md now for the functional overlap check procedure (always runs, spawns functional-discovery agent).
Based on signals from Step 0 and findings from Step 1, decide on external research.
High-risk topics → always research. Security, payments, external APIs, data privacy. The cost of missing something is too high. This takes precedence over speed signals.
Strong local context → skip external research. Codebase has good patterns, CLAUDE.md has guidance, user knows what they want. External research adds little value.
Uncertainty or unfamiliar territory → research. User is exploring, codebase has no examples, new technology. External perspective is valuable.
Announce the decision and proceed. Brief explanation, then continue. User can redirect if needed.
Examples:
Only run if Step 1.6 indicates external research is valuable.
Run these agents in parallel:
After all research steps complete, consolidate findings:
- Cite concrete file and line references (e.g., app/services/example_service.rb:42)
- Capture candidate entries for knowledge-base/project/learnings/ (key insights, gotchas to avoid)

Optional validation: Briefly summarize findings and ask if anything looks off or missing before proceeding to planning.
Title & Categorization:
- Title uses a conventional-commit prefix (e.g., feat: Add user authentication, fix: Cart total calculation)
- Filename takes a date prefix, a kebab-case slug of the title, and a -plan suffix
- Example: feat: Add User Authentication → 2026-01-21-feat-add-user-authentication-plan.md

Stakeholder Analysis:
Content Planning:
After generating the plan structure, assess which business domains this plan has implications for. This gate enforces constitution line 122: plans must receive cross-domain review before implementation.
Step 1 — Domain Sweep:
Brainstorm carry-forward check: If the brainstorm document (loaded in Phase 0.5) contains a ## Domain Assessments section, carry forward the findings. Extract relevant domains and their summaries. Skip fresh assessment.
Fresh assessment (if no brainstorm or no ## Domain Assessments section): Read plugins/soleur/skills/brainstorm/references/brainstorm-domain-config.md. Assess all 8 domains against the plan content in a single LLM pass using each domain's Assessment Question. Use semantic assessment — not keyword matching.
Spawn domain leaders: For each domain assessed as relevant except Product (handled in Step 2), spawn the domain leader as a blocking Task using the Task Prompt from brainstorm-domain-config.md, substituting {desc} with the plan summary. Spawn in parallel if multiple are relevant.
Collect findings: Wait for all domain leader Tasks to complete. Each returns a brief structured assessment. If a domain leader Task fails (timeout, error), write partial findings for that domain with Status: error and continue with remaining domains.
Step 2 — Product/UX Gate:
After Step 1 completes, if Product domain was flagged as relevant, run the existing three-tier classification:
A plan that discusses UI concepts but implements orchestration changes (e.g., adding a UX gate to a skill) is NONE.
On BLOCKING:
- If Pencil MCP is not available, record **Pencil available:** no in the Domain Review section, add ux-design-lead to **Skipped specialists:** with the user's justification, and display: "ux-design-lead skipped (Pencil MCP not available). Consider running wireframes manually before implementation."
- If a domain leader recommended a copywriter, ask the user; if accepted, add copywriter to **Agents invoked:**. If user declines, add copywriter to **Skipped specialists:** with the user's reason. If copywriter agent fails (timeout, error), add copywriter to **Skipped specialists:** with note (agent error — review manually) and set **Decision:** reviewed (partial). If no domain leader recommended a copywriter, skip this step silently. This gate also fires on ADVISORY tier when a domain leader recommended a copywriter — the recommendation is the signal, not the tier.
- On any other agent failure, set **Decision:** reviewed (partial) and proceed. Do not block the plan on agent failure.

On ADVISORY:
- Write **Tier:** advisory, **Decision:** auto-accepted (pipeline), and proceed silently.

On NONE: Skip — no Product/UX Gate subsection needed beyond the domain sweep finding.
If Product domain was NOT flagged as relevant in the sweep, skip Step 2 entirely.
Writing the ## Domain Review section:
After both steps complete, write the ## Domain Review section to the plan file using the heading contract below.
## Domain Review Heading Contract:
```markdown
## Domain Review

**Domains relevant:** [comma-separated list] | none

### [Domain Name] (one subsection per relevant non-Product domain)
**Status:** reviewed | error
**Assessment:** [leader's structured assessment summary]

### Product/UX Gate (only if Product domain relevant and tier is BLOCKING or ADVISORY)
**Tier:** blocking | advisory
**Decision:** reviewed | reviewed (partial) | skipped | auto-accepted (pipeline)
**Agents invoked:** spec-flow-analyzer, cpo, ux-design-lead, copywriter | [subset] | none
**Skipped specialists:** ux-design-lead (<reason>), copywriter (<reason>) | none
**Pencil available:** yes | no | N/A

#### Findings
[Agent findings summary]
```
When NO domains are relevant:
```markdown
## Domain Review

**Domains relevant:** none

No cross-domain implications detected — infrastructure/tooling change.
```
Place after Acceptance Criteria, before Test Scenarios (or before the last major section). If the plan lacks an Acceptance Criteria heading, place before the last major section or at the end of the plan.
If spec-flow-analyzer was already invoked in Phase 2.5, skip this phase and proceed to Phase 4.
After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification. SpecFlow is especially valuable for CI/workflow and infrastructure changes where bash conditional logic can silently drop edge cases that human review misses.
SpecFlow Analyzer Output:
Read plugins/soleur/skills/plan/references/plan-issue-templates.md now to load the three issue templates (MINIMAL, MORE, A LOT). Select the appropriate detail level based on complexity -- simpler is usually better. Use the template structure from the reference file for the chosen level.
Content Formatting:
- Use <details> tags to collapse long output (logs, stack traces)

Cross-Referencing:
Code & Examples:
Good example with syntax highlighting and line references:

```ruby
# app/services/user_service.rb:42
def process_user(user)
  # Implementation here
end
```
Collapsible error logs:
<details>
<summary>Full error stacktrace</summary>
`Error details here...`
</details>
AI-Era Considerations:
Pre-submission Checklist:
- Record any deferred work in knowledge-base/product/roadmap.md. A deferral without a tracking issue is invisible.

Filename: Use the date and kebab-case filename from Step 2 Title & Categorization.
knowledge-base/project/plans/YYYY-MM-DD-<type>-<descriptive-name>-plan.md
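A minimal slugging sketch for this convention (the helper name and sed steps are illustrative, not part of the skill contract):

```bash
# Derive a plan filename from a conventional-commit title (illustrative helper)
plan_filename() {
  local title="$1"   # e.g. "feat: Add User Authentication"
  local slug
  # Lowercase, turn ": " into "-", replace invalid characters, collapse dashes
  slug="$(printf '%s' "$title" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/: /-/' -e 's/[^a-z0-9-]/-/g' -e 's/--*/-/g' -e 's/-$//')"
  printf '%s\n' "$(date +%Y-%m-%d)-${slug}-plan.md"
}

# plan_filename "feat: Add User Authentication"
# yields e.g. 2026-01-21-feat-add-user-authentication-plan.md (date varies)
```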
Examples:
Good:
- knowledge-base/project/plans/2026-01-15-feat-user-authentication-flow-plan.md
- knowledge-base/project/plans/2026-02-03-fix-checkout-race-condition-plan.md
- knowledge-base/project/plans/2026-03-10-refactor-api-client-extraction-plan.md

Bad:
- knowledge-base/project/plans/2026-01-15-feat-thing-plan.md (not descriptive - what "thing"?)
- knowledge-base/project/plans/2026-01-15-feat-new-feature-plan.md (too vague - what feature?)
- knowledge-base/project/plans/2026-01-15-feat: user auth-plan.md (invalid characters - colon and space)
- knowledge-base/project/plans/feat-user-auth-plan.md (missing date prefix)

After writing the plan to knowledge-base/project/plans/, also create tasks.md if knowledge-base/ exists:
Check if knowledge-base/ exists. If so, run git branch --show-current to get the current branch. If on a feat-* branch, create the spec directory with mkdir -p knowledge-base/project/specs/<branch-name>.
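The directory check above can be sketched as follows (the function form is illustrative):

```bash
# Create the spec directory only when a knowledge base exists and we're on a feat-* branch
ensure_spec_dir() {
  local branch="$1"
  if [[ -d "knowledge-base" && "$branch" == feat-* ]]; then
    mkdir -p "knowledge-base/project/specs/${branch}"
  fi
  return 0
}

# Typical call:
# ensure_spec_dir "$(git branch --show-current)"
```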
If knowledge-base/ exists and on a feature branch:
Generate tasks.md using spec-templates skill template:
Save tasks.md to knowledge-base/project/specs/feat-<name>/tasks.md
Announce: "Tasks saved to knowledge-base/project/specs/feat-<name>/tasks.md. Use skill: soleur:work to implement."
Commit and push plan artifacts:
After both the plan file and tasks.md are written, commit and push everything:
```bash
git add knowledge-base/project/plans/ knowledge-base/project/specs/feat-<name>/tasks.md
git commit -m "docs: create plan and tasks for feat-<name>"
git push
```
If the push fails (no network), print a warning but continue.
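The non-fatal push can be expressed as a sketch (function name is illustrative):

```bash
# Push but never fail the planning flow on network errors
push_plan_artifacts() {
  if ! git push 2>/dev/null; then
    echo "Warning: git push failed (no network?). Plan and tasks are committed locally." >&2
  fi
}
```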
If knowledge-base/ does NOT exist or not on feature branch:
- Write the plan to knowledge-base/project/plans/ only (current behavior)

After writing the plan file, automatically run /plan_review <plan_file_path> to get feedback from three specialized reviewers in parallel:
After review completes:
After plan review, use the AskUserQuestion tool to present these options:
Question: "Plan reviewed and ready at knowledge-base/project/plans/YYYY-MM-DD-<type>-<name>-plan.md. What would you like to do next?"
Options:
- /deepen-plan - Enhance each section with parallel research agents (best practices, performance, UI)
- soleur:work - Begin implementing this plan locally
- soleur:work on remote - Begin implementing in Claude Code on the web (use & to run in background)

Based on selection:
- Run open knowledge-base/project/plans/<plan_filename>.md to open the file in the user's default editor
- /deepen-plan → Call the /deepen-plan command with the plan file path to enhance with research
- soleur:work → Use skill: soleur:work with the plan file path
- soleur:work on remote → Use skill: soleur:work with knowledge-base/project/plans/<plan_filename>.md to start work in background for Claude Code web

Note: If running soleur:plan with ultrathink enabled, automatically use skill: soleur:deepen-plan after plan creation for maximum depth and grounding.
Loop back to options after Simplify or Other changes until user selects soleur:work.
When user selects "Create Issue", detect their project tracker from CLAUDE.md:
Check for tracker preference in user's CLAUDE.md (global or project):
Look for project_tracker: github or project_tracker: linear.

If GitHub:
Use the title and type from Step 2 (already in context - no need to re-read the file):
```bash
gh issue create --title "<type>: <title>" --body-file <plan_path> --milestone "Post-MVP / Later"
```
After creation, read knowledge-base/product/roadmap.md and update the milestone if a more specific phase applies: gh issue edit <number> --milestone '<phase>'.
If Linear:
Read the plan file content, then run linear issue create --title "<title>" --description "<plan content>".
If no tracker configured: Ask user: "Which project tracker do you use? (GitHub/Linear/Other)"
Suggest adding project_tracker: github or project_tracker: linear to their CLAUDE.md.

After creation:
- Suggest next steps: skill: soleur:work or skill: soleur:plan-review

Update an existing plan:
If re-running soleur:plan for the same feature, read the existing plan first. Update in place rather than creating a duplicate. Preserve prior content and mark changes with [Updated YYYY-MM-DD].
Archive completed plans:
Run bash ./plugins/soleur/skills/archive-kb/scripts/archive-kb.sh from the repository root. This moves matching artifacts to knowledge-base/project/plans/archive/ with timestamp prefixes, preserving git history. Commit with git commit -m "plan: archive <topic>".
NEVER CODE! Just research and write the plan.