From flowstate
Transform brainstorm outputs, feature descriptions, or improvement ideas into well-structured implementation plans with TDD-structured tasks. Orchestrates parallel research (local + conditional external), spec-flow analysis, and plan writing. Triggers: "plan", "implement", "build this", or after a brainstorm session when ready to move to implementation.
npx claudepluginhub c-reichert/flowstate --plugin flowstate

This skill uses the workspace's default tool permissions.
Every plan is grounded in codebase research, past learnings, and (when warranted) external best practices.
Requires a feature description, brainstorm reference, or improvement idea. If none is provided, ask the user: "What would you like to plan? Describe the feature, bug fix, or improvement -- or point me to a brainstorm document."
Do not proceed until you have a clear feature description from the user.
Before asking questions, search for a recent brainstorm that matches this feature.
ls -la docs/brainstorms/*.md 2>/dev/null | head -20
Relevance criteria -- a brainstorm is relevant if its topic (the topic: field) semantically matches the feature description.

If a relevant brainstorm exists: carry its conclusions forward and cite the source (see brainstorm: docs/brainstorms/<filename>). Do not paraphrase decisions in a way that loses their original context -- link back to the source.

If multiple brainstorms could match: use the AskUserQuestion tool to ask which brainstorm to use, or whether to proceed without one.

If no brainstorm is found (or none is relevant), run idea refinement:
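As a rough illustration, the brainstorm scan plus topic check could look like the sketch below. The helper name and the assumption that brainstorm docs carry a YAML-style `topic:` front-matter field are illustrative; adjust to however your brainstorms are tagged.

```shell
# Sketch: list each brainstorm alongside its topic: line so relevance can be
# judged against the feature description. Assumes a "topic:" field per doc.
list_brainstorm_topics() {
  for f in docs/brainstorms/*.md; do
    [ -e "$f" ] || continue            # glob matched nothing: skip
    topic=$(grep -m1 '^topic:' "$f")   # first topic: line, if present
    printf '%s  %s\n' "$f" "${topic:-<no topic>}"
  done
}
```

The output pairs each path with its topic so a semantic match (or the absence of one) is visible at a glance.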
Refine the idea through collaborative dialogue using the AskUserQuestion tool:
Gather signals for Phase 2 research decision. During refinement, note:
Run these agents in parallel to gather local context:
What to look for:
| Agent | Focus |
|---|---|
| learnings-researcher (Haiku) | Search docs/solutions/ for relevant past solutions. Grep-first filtering: extract keywords, parallel grep calls by title/tags/module. Always check docs/solutions/patterns/critical-patterns.md. Score relevance: strong/moderate/weak. Full read of strong/moderate matches only. Return: distilled summaries with file paths and key insights. |
| repo-research-analyst (Sonnet) | Find existing patterns, CLAUDE.md guidance, similar implementations, technology familiarity, pattern consistency. Return: relevant file paths with line numbers, conventions to follow. |
These findings inform the next phase.
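The learnings-researcher's grep-first filtering can be sketched roughly as below. The function name and keyword handling are illustrative, not the agent's actual implementation.

```shell
# Sketch: fan out one grep per keyword over docs/solutions/, then deduplicate
# the matching file list -- the "parallel grep calls" idea from the table above.
grep_first() {
  {
    for kw in "$@"; do
      grep -ril -- "$kw" docs/solutions/ 2>/dev/null &  # one grep per keyword
    done
    wait  # let all background greps finish
  } | sort -u
}
```

Only the deduplicated candidate files would then be scored for relevance and read in full.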
Based on signals from Phase 0 refinement and findings from Phase 1, decide on external research.
Decision tree:
| Signal | Action |
|---|---|
| High-risk topics (security, payments, external APIs, data privacy) | ALWAYS research -- the cost of missing something is too high |
| Strong local context (good patterns, CLAUDE.md has guidance, familiar tech, user knows what they want) | Skip external research -- local patterns suffice |
| Uncertainty or unfamiliar territory (user exploring, no codebase examples, new technology) | Research -- external perspective is valuable |
Announce the decision and proceed. Brief explanation, then continue. User can redirect if needed.
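The decision tree can be sketched as a tiny shell helper. The signal labels here are made up for illustration; the real decision is judgment-based, not mechanical.

```shell
# Sketch of the external-research decision tree. Signal names are illustrative.
research_decision() {
  case "$1" in
    high-risk)    echo "research: cost of missing something is too high" ;;
    strong-local) echo "skip: local patterns suffice" ;;
    uncertain)    echo "research: external perspective is valuable" ;;
    *)            echo "unknown signal: $1" >&2; return 1 ;;
  esac
}
```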
Examples:
If research is needed, spawn these agents in parallel:
Task best-practices-researcher(feature_description)
Task framework-docs-researcher(feature_description)
After research completes, spawn the spec-flow analyzer to validate and refine the feature specification:
Review spec-flow analysis results:
After all research phases complete, merge findings into a unified context:
- Relevant code locations with line references (e.g., app/services/example_service.py:42)
- Past learnings from docs/solutions/ with key insights and gotchas to avoid

Optional validation: briefly summarize the consolidated findings and ask the user if anything looks off or missing before proceeding to plan writing.
Use the flowstate:writing-plans skill to structure the plan document.
Feed it the consolidated research from Phase 4 (learnings, patterns, best practices, edge cases) and the feature description from Phase 0. The skill defines the document format, detail levels, TDD task structure, learnings integration, and output template.
Before saving, invoke the flowstate:document-review skill for a structured self-review of the plan document. This catches completeness gaps, vague language, missing file paths, and YAGNI violations before the plan is committed.
Address any critical or important issues found by the review before proceeding.
mkdir -p docs/plans/
Use the Write tool to save the complete plan to docs/plans/YYYY-MM-DD-<type>-<descriptive-name>-plan.md.
Confirm: "Plan written to docs/plans/[filename]"
Then commit:
git add docs/plans/YYYY-MM-DD-<type>-<descriptive-name>-plan.md
git commit -m "docs: add plan for [brief description]"
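The file-naming convention can be sketched as a small helper. The `feat` and `user-avatars` arguments are placeholders for the plan's type and descriptive name.

```shell
# Sketch: build docs/plans/YYYY-MM-DD-<type>-<descriptive-name>-plan.md
plan_path() {
  printf 'docs/plans/%s-%s-%s-plan.md\n' "$(date +%F)" "$1" "$2"
}

plan_path feat user-avatars   # prints a dated path like docs/plans/<date>-feat-user-avatars-plan.md
```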
Use the AskUserQuestion tool to present these options:
Question: "Plan ready at docs/plans/[filename]. What would you like to do next?"
Options:
- /workflow:deepen-plan -- Enhance each section with parallel research agents (best practices, edge cases, code examples)
- /workflow:work -- Begin TDD implementation of this plan
- Generate parallel session prompt -- Produce a self-contained handoff prompt for a new session

Based on selection:
- /workflow:deepen-plan -- Invoke the deepen-plan command with the plan file path
- /workflow:work -- Invoke the work command with the plan file path

Loop back to the options after refinement until the user selects a workflow command or chooses done.
When the user selects "Generate parallel session prompt", produce a comprehensive handoff prompt they can paste into a new Claude Code session (or use to spawn a parallel agent). The prompt must be self-contained — the new session has zero context from this one.
Include in the prompt:
- The plan file path (docs/plans/...)
- Pointers to docs/solutions/ and critical-patterns.md if they exist
- Instructions to run /workflow:work <plan-path> to execute, /workflow:review after completion, and /workflow:compound to capture learnings

Format the prompt as a single copyable code block so the user can paste it directly.
Example structure:
You are implementing a feature in [repo]. The design and plan are complete.
## Context
- Repo: [path]
- Tech stack: [stack]
- Key decisions: [list]
## Instructions
1. Read the plan: [path]
2. Read the brainstorm for context: [path]
3. Check docs/solutions/ for relevant past learnings
4. Run `/workflow:work [plan-path]` to execute the plan with TDD
5. After completion, run `/workflow:review` then `/workflow:compound`
## Important
- [Any constraints, priorities, or warnings]
Detailed formatting and structure rules live in the flowstate:writing-plans skill -- do not duplicate them here.