From ideation
Executes ideation-generated implementation specs: scouts codebase for context map, builds components with feedback loops, runs verify-review-fix cycle with Reviewer agent before committing.
Install: `npx claudepluginhub nicknisi/claude-plugins --plugin ideation`
Transforms ideas into structured specifications (requirements, design, tasks) before implementation. Use when building features, fixing bugs, refactoring, or designing systems.
Execute a spec file generated by the ideation skill.
Parse arguments:
- `--parallel` flag: enable parallel execution via subagents

Example: `/ideation:execute-spec --parallel` or `/ideation:execute-spec docs/ideation/foo/spec-phase-1.md`

If an argument is provided: read the spec file directly.
If no argument: Auto-detect from task list:
- Use `TaskList` to find existing phase tasks: look for `status: pending` (not started), an empty `blockedBy` (dependencies completed), and a `specFile` reference
- Use `TaskGet` to read the task's `specFile` from metadata or description
- Fallback (no tasks found): search for specs manually:
`./docs/ideation/*/spec-phase-*.md`
If multiple found, use AskUserQuestion to select one.
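The fallback search can be sketched in Python (illustrative only; the skill performs this search with its Glob tool, and `find_phase_specs` is a hypothetical helper name):

```python
import glob

def find_phase_specs(root: str = ".") -> list[str]:
    """Find ideation spec files when no phase tasks exist."""
    # Each ideation project lives under docs/ideation/<name>/ and
    # contains one spec file per phase.
    return sorted(glob.glob(f"{root}/docs/ideation/*/spec-phase-*.md"))
```

If this returns more than one path, the user picks one.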
Invoke the Scout agent to explore the codebase and produce a structured context map. The scout replaces manual codebase exploration — it runs as a read-only subagent, scores implementation readiness, and persists its findings.
Determine the project directory from the spec file path. If the spec is at docs/ideation/my-project/spec-phase-1.md, the project directory is docs/ideation/my-project/.
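In path terms, the project directory is simply the spec file's parent directory. A minimal sketch (the `project_dir` helper is hypothetical):

```python
import os

def project_dir(spec_path: str) -> str:
    """Derive the ideation project directory from a spec file path."""
    return os.path.dirname(spec_path) + "/"

# e.g. project_dir("docs/ideation/my-project/spec-phase-1.md")
# returns "docs/ideation/my-project/"
```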
Invoke the Scout using the Agent tool:
- Read `plugins/ideation/agents/scout.md` to get the scout's full workflow and output format
- Call the Agent tool with a `general-purpose` subagent, passing `scout.md` as the agent's instructions, followed by the specific inputs: spec file path, project directory, phase number, and whether a prior `context-map.md` exists

Note on tool restrictions: the scout's frontmatter declares `tools: ["Read", "Glob", "Grep"]`, but when invoked as a general-purpose subagent, these restrictions are policy-based (enforced by the prompt), not mechanism-based. The scout prompt instructs read-only behavior.
The scout may perform up to 2 internal exploration rounds before reaching a verdict. Execute-spec waits for the final output — it does not re-invoke the scout.
After the scout completes, parse the scout's text response for the context map and verdict. Write the context map to {project-directory}/context-map.md using the Write tool (the scout cannot write files itself — it returns the map as text).
If the scout returns GO (confidence >= 70): proceed to the build.

If the scout returns HOLD (confidence < 70), ask the user via `AskUserQuestion`:

Question: "Scout confidence is {score}/100 (below 70 threshold). Gaps: {summary of lowest dimensions}. How to proceed?"
Options:
- "Proceed anyway" — Build with known gaps. May require more iteration.
- "Update spec" — The spec may be underspecified. Pause to revise.
- "Abort" — Stop execution for this phase.
If the user chose "Proceed anyway" after HOLD: The context map (if produced) may have gaps. During build, treat missing context map sections as unavailable and read files directly for those areas. Pay extra attention to the Risks section of a partial context map.
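The GO/HOLD gate reduces to a small decision function. A sketch (names are illustrative, not part of the skill):

```python
def scout_gate(verdict: str, confidence: int) -> str:
    """Map the scout's verdict and confidence score to the next action."""
    THRESHOLD = 70
    if verdict == "GO" and confidence >= THRESHOLD:
        return "proceed"
    # HOLD (or a sub-threshold score) hands the decision to the user:
    # proceed anyway, update the spec, or abort.
    return "ask_user"
```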
If no scout agent is available (agent file missing or invocation fails): Fall back to inline exploration and log a warning. The inline fallback is:
- Use Grep to check what imports or references the modified files (blast radius)
- Read `CLAUDE.md` or the project README for conventions

Extract from the spec file (and template if applicable):
Also extract and retain for the review cycle:
If tasks already exist (detected in Step 1 from `TaskList`): set the current task to `in_progress` and proceed to execution.

If no tasks exist (fresh execution):
Use TaskCreate to create structured tasks from the spec's Implementation Details:
For each component, create a task with:
After creating all component tasks, add dependency relationships using TaskUpdate with addBlockedBy — if Component B depends on Component A, mark B as blocked by A's task ID.
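The `blockedBy` semantics can be sketched as a readiness check (a toy model; real task state lives in the TaskList/TaskUpdate tools, not a Python dict):

```python
def unblocked(tasks: dict[str, dict]) -> list[str]:
    """Return IDs of pending tasks whose blockers are all completed."""
    done = {tid for tid, t in tasks.items() if t["status"] == "completed"}
    return [
        tid
        for tid, t in tasks.items()
        if t["status"] == "pending"
        and all(dep in done for dep in t.get("blockedBy", []))
    ]
```

For example, if Component B is blocked by Component A, B becomes ready only once A's task is completed.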
Create validation tasks (blocked by all component tasks):
Parse the spec's Implementation Details for component order:
Before implementing any component, establish the spec's feedback environment. This is a one-time setup.
Read the Feedback Strategy section from the spec. Identify the playground type and inner-loop command.
Auto-detect feedback infrastructure — even if the spec has no Feedback Strategy section, probe the codebase:
- `package.json` (or the equivalent manifest) for scripts: `test`, `dev`, `start`, `storybook`, `typecheck`
- Test-runner configs: `jest.config.*`, `vitest.config.*`, `.mocharc.*`, `pytest.ini`, `go.mod` (for `go test`)
- Bundler and dev-server configs: `vite.config.*`, `next.config.*`, `webpack.config.*`
- A `.storybook/` directory
- `scripts/`, `bin/`, or a `Makefile`

Set up the playground if it requires infrastructure:
Verify the inner-loop command runs (even if it does nothing yet). This catches environment issues before they block implementation.
Fallback: If no Feedback Strategy in the spec AND no feedback infrastructure detected, fall back to Validation Commands as the post-implementation check. Not all specs will have feedback loops.
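The probe might look like this in Python (a sketch covering the Node case only; `detect_feedback` is a hypothetical helper, and other manifests would need analogous checks):

```python
import json
import os

# Scripts that typically provide a feedback loop, in preference order.
FEEDBACK_SCRIPTS = ("test", "dev", "start", "storybook", "typecheck")

def detect_feedback(root: str = ".") -> list[str]:
    """Probe a project for runnable feedback infrastructure."""
    found = []
    pkg = os.path.join(root, "package.json")
    if os.path.exists(pkg):
        with open(pkg) as f:
            scripts = json.load(f).get("scripts", {})
        found += [f"npm run {s}" for s in FEEDBACK_SCRIPTS if s in scripts]
    if os.path.isdir(os.path.join(root, ".storybook")):
        found.append("storybook")
    return found
```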
Before starting, use TaskList to see current state:
- Tasks with `status: pending` and empty `blockedBy` (ready to work)
- Tasks already marked `completed` by previous sessions

Consult the context map: before reading pattern files or exploring the codebase during build, check the scout's context map for:
- Set the task to `in_progress`: `TaskUpdate({ taskId, status: "in_progress" })`
- On completion: `TaskUpdate({ taskId, status: "completed" })`

If the component has no feedback loop: fall back to the original flow — implement fully, then run validation commands.
After implementing a component:
- Mark the task `completed` via `TaskUpdate`
- Use `TaskList` to find the next unblocked task

If validation fails:

- Keep the task `in_progress` (don't mark it completed)

Execution mode (`--parallel` flag):

Default (no flag): sequential execution — work through unblocked tasks one at a time.
With --parallel flag: Spawn subagents for independent components:
- Use `TaskList` to find all tasks with `status: pending` and empty `blockedBy`
- Spawn each as a `general-purpose` subagent (`default` model)
- If a subagent's task stays `in_progress` for too long, the parent session should check `TaskList`, read the task description for context, and either retry or ask the user.

Review cycle in parallel mode: subagents only build — they do not run their own review cycles. After all subagents complete their components, the main session runs a single verify-review-fix loop on the combined diff (`git diff HEAD` covers all changes from all sessions). This avoids the problem of interleaved diffs from multiple sessions writing to the same working tree.
After all component tasks are completed, enter the review cycle. Code is not committed until the review passes or the user explicitly accepts remaining issues.
Do not stage files until after the review passes. The reviewer uses git diff HEAD to see all changes. Keep changes unstaged during the review cycle so the diff is clean and complete.
Run all commands from the spec's "Validation Commands" section:
If any validation command fails, fix the issue before proceeding to review. Do not invoke the reviewer with failing validations — those are mechanical errors, not review findings. Validation failures do not consume a review cycle.
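The gate can be sketched as a short runner (illustrative; the skill runs these commands through its own Bash tool):

```python
import subprocess

def run_validations(commands: list[str]) -> bool:
    """Run validation commands in order; stop at the first failure.

    A failure here is a mechanical error to fix before invoking the
    reviewer; it does not consume a review cycle.
    """
    for cmd in commands:
        if subprocess.run(cmd, shell=True).returncode != 0:
            print(f"validation failed: {cmd}")
            return False
    return True
```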
If `git diff HEAD` is empty (no changes to review): skip the review cycle entirely. Report that all components were no-ops and proceed to the completion report.
Invoke the Reviewer agent using the Agent tool:
- Read `plugins/ideation/agents/reviewer.md` to get the reviewer's full workflow and output format
- Call the Agent tool with a `general-purpose` subagent, passing `reviewer.md` as the agent's instructions, followed by:
Note on tool restrictions: The reviewer's frontmatter declares tools: ["Read", "Grep", "Bash"], but when invoked as a general-purpose subagent, these restrictions are policy-based. The reviewer prompt instructs it to only use Bash for git diff HEAD commands and to never edit files.
Cycle counter rule: The cycle number increments only when the reviewer is invoked. Verify failures and their fix iterations do not count as review cycles. Cycle N means the reviewer has been invoked N times.
Parse the reviewer's output:
- Look for a verdict line: `**Verdict**: PASS` or `**Verdict**: FAIL`
- Findings are grouped by severity prefix: `critical/`, `high/`, `medium/`, `low/`
- Zero `critical` AND zero `high` findings → PASS
- Any `critical` or `high` finding → FAIL

If the reviewer fails, returns empty output, or returns output with no verdict line: fall back to validation-only mode — treat validation command results as sufficient. Log a warning that the review cycle was skipped due to reviewer failure. Continue to commit.
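The parsing rules reduce to two small checks. A sketch (helper names are illustrative):

```python
import re

def parse_verdict(output: str):
    """Extract the reviewer's verdict line.

    None means no verdict was found, which triggers the
    validation-only fallback.
    """
    m = re.search(r"\*\*Verdict\*\*:\s*(PASS|FAIL)", output)
    return m.group(1) if m else None

def severity_gate(findings: list[str]) -> str:
    """Zero critical AND zero high findings pass; any of either fails."""
    blocking = any(f.startswith(("critical/", "high/")) for f in findings)
    return "FAIL" if blocking else "PASS"
```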
Review passed. Proceed to commit and completion report.
Report medium and low findings to the user for awareness, but they do not block the commit.
- Fix each `critical` and `high` finding from the reviewer's output (apply the `→ action` part of each finding)

Escalation: the review has failed 3 times. Present remaining findings to the user via `AskUserQuestion`:
Question: "Review cycle 3 still has {N} critical/high findings. How to proceed?"
Options:
- "Fix manually" — You fix the remaining issues yourself. Re-run /execute-spec after fixing to re-enter the review cycle.
- "Accept with issues" — Commit with known issues. Findings included in completion report as acknowledged items.
- "Abort" — Do not commit. Leave changes unstaged for manual review.
If "Fix manually": Stop execution. User will fix and re-invoke.
If "Accept with issues": Proceed to commit. Include all unresolved findings in the completion report under "Acknowledged Issues."
If "Abort": Stop execution. Do not commit. Report the current state.
Only reached after PASS or user acceptance:
- Stage all changes (`git add -A`) and commit
- Produce the completion report:

## Phase {N} Implementation Complete
### Implemented
- {List of components implemented}
### Files Changed
- {List of files created/modified}
### Review Summary
- Cycles: {N} of 3 max
- Findings addressed: {count} ({critical} critical, {high} high auto-fixed)
- Remaining (non-blocking): {count} ({medium} medium, {low} low)
- Acknowledged issues: {count, if user accepted with issues}
### Validation Results
- Type check: PASS/FAIL
- Lint: PASS/FAIL
- Tests: PASS/FAIL
### Acceptance Criteria
- [x] {Met criteria}
- [ ] {Unmet criteria with notes}
### Next Steps
- Review changes: `git log -1 --stat`
- For next phase: `/ideation:execute-spec spec-phase-{N+1}.md`