Transforms raw brain dumps, messy ideas, and stream-of-consciousness dictation into structured contracts, PRDs, and implementation specs saved to ./docs/ideation/{project-name}/. Use for feature planning and spec generation.
Install with:

```sh
npx claudepluginhub nicknisi/claude-plugins --plugin ideation
```

This skill uses the workspace's default tool permissions.
Transform unstructured brain dumps into structured, actionable implementation artifacts through a confidence-gated workflow.
ALWAYS use the AskUserQuestion tool when asking clarifying questions. Do not ask questions in plain text. The tool provides structured options and ensures the user can respond clearly.
Use AskUserQuestion for:
```
INTAKE → CODEBASE EXPLORATION → CONTRACT FORMATION → PHASING → SPEC GENERATION → HANDOFF
              ↓                        ↓                ↓            ↓               ↓
          Understand            confidence < 95%?    PRDs or    Repeatable?    Analyze deps
         existing code                 ↓             straight        ↓               ↓
                                 ASK QUESTIONS      to specs?   Template +     Sequential?
                                       ↓                        per-phase      Parallel?
                               (loop until ≥95%)                 deltas        Agent Team?
```
Accept whatever the user provides:
Don't require organization. The mess is the input.
Acknowledge receipt and begin analysis. Do not ask for clarification yet.
These rules apply during intake analysis (Phase 1) and contract formation (Phase 3). The goal is to challenge weak premises before they become expensive specs.
Do not say these during analysis or contract formation:
Before scoring confidence or generating any artifacts, understand the existing codebase. This is critical — specs written without understanding existing patterns, architecture, and conventions will be generic and wrong.
Exploration is needed when:
Skip exploration for greenfield projects with no existing code.
Use the Agent tool with subagent_type: "Explore" or direct Glob/Grep/Read to understand:
See references/feedback-loop-guide.md for the full infrastructure-to-playground mapping. Retain exploration context for use in later phases. These inform:
Do not write exploration findings to files. They're context for the ideation process, not an artifact.
Extract from the raw input + codebase exploration:
Read references/confidence-rubric.md for detailed scoring criteria.
Score conservatively. When uncertain between two levels, choose the lower one. One extra round of questions costs minutes; a bad contract costs hours. Do not inflate scores to move forward faster.
Score each dimension (0-20 points):
| Dimension | Question |
|---|---|
| Problem Clarity | Do I understand what problem we're solving and why it matters? |
| Goal Definition | Are the goals specific and measurable? |
| Success Criteria | Can I write tests or validation steps for "done"? |
| Scope Boundaries | Do I know what's in and out of scope? |
| Consistency | Are there contradictions I need resolved? |
Total: /100 points
| Score | Action |
|---|---|
| < 70 | Major gaps. Ask 5+ questions targeting lowest dimensions. |
| 70-84 | Moderate gaps. Ask 3-5 targeted questions. |
| 85-94 | Minor gaps. Ask 1-2 specific questions. |
| ≥ 95 | Ready to generate contract. |
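The gate above can be sketched in a few lines. The dimension names and thresholds come from the rubric; the function names are illustrative, not part of the skill:

```python
# Sketch of the confidence gate described above. Dimension names and
# thresholds come from the rubric; function names are illustrative.
DIMENSIONS = ["problem_clarity", "goal_definition", "success_criteria",
              "scope_boundaries", "consistency"]

def total_confidence(scores: dict[str, int]) -> int:
    """Sum the five 0-20 dimension scores into a 0-100 total."""
    assert set(scores) == set(DIMENSIONS), "score every dimension"
    assert all(0 <= s <= 20 for s in scores.values())
    return sum(scores.values())

def next_action(total: int) -> str:
    """Map a total score to the action column of the table above."""
    if total >= 95:
        return "generate contract"
    if total >= 85:
        return "ask 1-2 specific questions"
    if total >= 70:
        return "ask 3-5 targeted questions"
    return "ask 5+ questions targeting lowest dimensions"
```

For example, a uniform score of 14 per dimension totals 70, which still triggers 3-5 targeted questions rather than contract generation.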
When confidence < 95%, MUST use AskUserQuestion tool to ask clarifying questions. Structure questions with clear options when possible.
Using AskUserQuestion effectively:
- Set multiSelect: true when multiple answers apply

Question strategy:
See references/confidence-rubric.md for question templates by dimension and best practices.
When confidence ≥ 95%, generate the contract document.
1. Use AskUserQuestion to confirm the project name if it is not obvious from context. All artifacts go in ./docs/ideation/{project-name}/.
2. If ./docs/ideation/{project-name}/contract.md already exists, read its Created date, rename the old file to contract-{created-date}.md, and set the new contract's Supersedes field to the renamed file path. Otherwise set Supersedes to "None".
3. Generate contract.md using references/contract-template.md.
4. Use AskUserQuestion to get approval:

Question: "Does this contract accurately capture your intent?"
Options:
- "Approved" - Contract is accurate, proceed
- "Needs changes" - Some parts need revision
- "Missing scope" - Important items are not captured
- "Start over" - Fundamentally off track, re-analyze
If not approved: Revise the contract based on feedback. Do not re-score confidence unless the feedback reveals a fundamental misunderstanding — in that case, return to Phase 3.2 and re-score. Otherwise, edit contract.md directly and re-present for approval. Iterate until approved.
Do not proceed until contract is explicitly approved.
After contract is approved, determine phases and generate specs. PRDs are optional.
Use AskUserQuestion to ask:
Question: "How should we proceed from the contract?"
Options:
- "Straight to specs" — Recommended for technical projects.
Contract defines what, specs define how. Faster.
- "PRDs then specs" — Recommended for large scope or cross-functional
teams. Adds a requirements layer for stakeholder alignment.
Regardless of PRD choice, analyze the contract and break scope into logical implementation phases.
Small-project shortcut: If the scope is small enough to implement in a single phase (1-3 components, touches fewer than ~10 files), skip phasing entirely. Generate a single spec.md (no phase number needed) and proceed directly to handoff. Not every project needs multiple phases — don't force structure where simplicity suffices.
Phasing criteria (for multi-phase projects):
Typical phasing:
Detect repeatable patterns: If 3+ phases follow the same structure with different inputs (e.g., "add SDK support for {language}"), note this — it affects how specs are generated (see 4.4).
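As a sketch, repeatable phases can be found by normalizing the variable part of each phase title and grouping. The regex heuristic below is illustrative only, not part of the skill:

```python
import re

def detect_repeatable(phase_titles: list[str]) -> dict[str, list[str]]:
    """Group phases whose titles share a shape; groups of 3+ are
    repeatable patterns that should share a template spec."""
    groups: dict[str, list[str]] = {}
    for title in phase_titles:
        # Illustrative heuristic: treat the word after a trailing
        # "for" as the variable part, e.g. "Add SDK support for Ruby".
        shape = re.sub(r"\bfor \S+$", "for {x}", title)
        groups.setdefault(shape, []).append(title)
    return {shape: ts for shape, ts in groups.items() if len(ts) >= 3}
```

Three "Add SDK support for {language}" phases collapse into one pattern; a one-off phase like "Build the dashboard" stays out of the result.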
For each phase, generate prd-phase-{n}.md using references/prd-template.md.
Include:
Present all PRDs for review. Use AskUserQuestion:
Question: "Do these PRD phases look correct?"
Options:
- "Approved" - Phases and requirements look good, proceed to specs
- "Adjust phases" - Need to move features between phases
- "Missing requirements" - Some requirements are missing or unclear
- "Start over" - Need to revisit the contract
Iterate until user explicitly approves.
Generate specs using references/spec-template.md. How specs are generated depends on whether phases are repeatable:
For each phase, generate a full spec-phase-{n}.md with:
Reference existing code: When the codebase exploration (Phase 2) identified relevant patterns, include "Pattern to follow: path/to/similar/file.ts" in the spec's implementation details. This gives the executing agent concrete examples to follow.
Designing feedback loops: For each iterative component, define a playground (environment to interact with), experiment (parameterized check), and check command (fastest single validation). Match the feedback mechanism to the component type — data layers use tests, UI uses dev server, APIs use curl scripts, config/types skip loops entirely. See references/feedback-loop-guide.md for the full component-type mapping and design criteria.
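A minimal sketch of that mapping as a lookup table. The check commands here are placeholders; the authoritative mapping lives in references/feedback-loop-guide.md:

```python
# The component-type mapping described above, as a lookup table.
# Playground and check values are placeholders for illustration.
FEEDBACK_LOOPS = {
    "data-layer": {"playground": "test runner", "check": "run the module's tests"},
    "ui":         {"playground": "dev server", "check": "reload and inspect"},
    "api":        {"playground": "curl script", "check": "run the smoke script"},
    "config":     None,  # config/types skip feedback loops entirely
    "types":      None,
}

def feedback_loop_for(component_type: str):
    """Return the loop definition, or None for components that skip loops."""
    return FEEDBACK_LOOPS.get(component_type)
```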
Naming failure modes: For each non-trivial component, ask: "How would this fail?" Fill in the spec's Failure Modes table with named failures, data shadow paths (nil, empty, stale data), and edge cases (concurrent access, oversized input, missing permissions). The goal is not exhaustive error handling — it's ensuring the spec has no blind spots. Components that are trivial (config, types, constants) skip failure mode enumeration, same as feedback loops.
When multiple phases share the same structure (e.g., "add support for {SDK}"), avoid generating N nearly-identical full specs. Instead:
Generate one full template spec — spec-template-{pattern-name}.md — with detailed implementation steps, using placeholders for the variable parts.
Generate lightweight per-phase delta files — spec-phase-{n}.md — containing only:
- A reference to spec-template-{pattern-name}.md with the inputs below

Example for SDK integrations:
spec-template-sdk-integration.md:

```markdown
# SDK Integration Template

## Pattern
For each SDK, create:
1. `src/{language}/{language}-installer-agent.ts` — FrameworkConfig following existing pattern
2. `skills/workos-{sdk-name}/SKILL.md` — Agent skill fetching SDK README
3. `tests/fixtures/{language}/{framework}-example/` — Minimal project fixture
4. `tests/evals/graders/{language}.grader.ts` — Extending BaseGrader

## Implementation Details
{Detailed steps with {placeholders} for variable parts}

## Validation
{Common validation steps}
```
spec-phase-5.md:

```markdown
# Spec: Ruby SDK (workos-ruby)

**Template**: ./spec-template-sdk-integration.md

## Inputs
- Language: Ruby
- Framework: Rails
- Package manager: Bundler (`bundle add`)
- Manifest file: Gemfile
- SDK package: workos
- Detection: `rails` gem in Gemfile or `config/routes.rb` exists

## Deviations from template
- Rails has strong conventions — files go in specific locations
- Initializer pattern: `config/initializers/workos.rb`
- Env vars: `.env` with dotenv-rails, or Rails credentials

## Phase-specific concerns
- CI needs Ruby 3.x installed for eval fixtures
```
This approach avoids generating N nearly-identical full specs while keeping each phase's unique inputs, deviations, and concerns explicit.
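Rendering a concrete spec from the template plus a delta's Inputs reduces to placeholder substitution. The helper below is a sketch; the {placeholder} syntax mirrors the example above but is an assumption, not a defined part of the skill:

```python
def render_spec(template: str, inputs: dict[str, str]) -> str:
    """Fill {placeholder} tokens in a template spec with a delta's
    Inputs. Illustrative only; real templates may vary."""
    for key, value in inputs.items():
        template = template.replace("{" + key + "}", value)
    return template
```

For the Ruby phase above, rendering "src/{language}/{language}-installer-agent.ts" with language set to "ruby" yields src/ruby/ruby-installer-agent.ts.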
Whether using PRDs or straight-to-specs, present the phase breakdown and specs for user approval before proceeding to handoff.
Before presenting specs, evaluate feedback loop quality using the Spec Feedback Quality checklist from references/confidence-rubric.md. Self-review each spec:
If Weak, fix the gaps first. Don't present a spec without feedback loops for its iterative components.
Use AskUserQuestion:
Question: "Do these specs look correct?"
Options:
- "Approved" - Specs look good, proceed to execution handoff
- "Adjust approach" - Implementation strategy needs changes
- "Missing components" - Some files or steps are missing
- "Revisit phases" - Phase breakdown needs restructuring
If not approved, revise the relevant specs based on feedback and re-present. Iterate until approved.
After specs are generated, create task list, analyze orchestration options, and hand off for implementation.
Do not create tasks during ideation handoff — they are ephemeral and will be lost when the user starts a fresh session. Each /execute-spec session creates its own granular implementation tasks.
Analyze the phase dependency graph to determine the best execution strategy.
Detect parallelizable phases:
Determine recommended strategy:
| Pattern | Recommendation |
|---|---|
| All phases sequential (chain) | Sequential execution — one session at a time |
| 2+ independent phases | Agent team — lead orchestrates teammates in parallel |
| Mixed dependencies | Hybrid — sequential for dependent chain, agent team for independent group |
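One way to sketch that decision, assuming the dependency graph is represented as phase → set of prerequisite phases. This is a simplified heuristic for illustration, not the skill's actual logic:

```python
def recommend_strategy(deps: dict[str, set[str]]) -> str:
    """Map a phase dependency graph to the strategy table above:
    all-chain -> sequential, all-independent -> agent team,
    some of each -> hybrid."""
    independent = [p for p, d in deps.items() if not d]
    dependent = [p for p, d in deps.items() if d]
    if len(independent) >= 2:
        return "hybrid" if dependent else "agent team"
    return "sequential"
```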
Append the ## Execution Plan section to the contract file (./docs/ideation/{project-name}/contract.md). This makes the contract fully self-contained — someone can pick it up cold and know exactly how to execute.
Use the Execution Plan section from the contract template. Fill in:
- The /execute-spec commands, marked as sequential vs parallel

Shared file detection: Before writing the agent team prompt, scan the spec files' "Modified Files" sections. If multiple specs modify the same files, include a coordination note in the prompt:
> Coordinate on shared files ({list}) to avoid merge conflicts —
> only one teammate should modify a shared file at a time.
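A sketch of shared-file detection, assuming each spec lists files as `- path` bullets under a `## Modified Files` heading (that heading format is an assumption, not specified above):

```python
import re
from collections import defaultdict

def shared_files(specs: dict[str, str]) -> dict[str, list[str]]:
    """Return files that appear in more than one spec's Modified Files
    section, mapped to the specs that touch them."""
    owners: dict[str, list[str]] = defaultdict(list)
    for name, text in specs.items():
        section = re.search(r"## Modified Files\n((?:- .+\n?)+)", text)
        if not section:
            continue
        for line in section.group(1).splitlines():
            owners[line[2:].strip()].append(name)  # drop the "- " bullet
    return {path: names for path, names in owners.items() if len(names) > 1}
```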
Batching: If more than 5 parallelizable phases, note in the execution steps to start with the highest-priority batch first.
After writing the execution plan, present a brief conversational summary.
Always include:
> Ideation complete. Artifacts written to `./docs/ideation/{project-name}/`.
> The contract includes the full execution plan — dependency graph,
> commands, and agent team prompt (if parallel). Open `contract.md`
> to pick up implementation from any session.
Then show the first step — either the first /execute-spec command for sequential execution, or a pointer to the agent team prompt in the contract for parallel execution.
Agent team context (include when the execution plan has an agent team prompt):
> The agent team prompt is in the contract's Execution Plan section.
> To use it: start a new Claude Code session, enter delegate mode
> (Shift+Tab), and paste the prompt from the contract.
Agent teams let a single lead session automatically spawn and coordinate multiple teammates — the user starts one claude session, and the lead handles spawning, task assignment, plan approval, and synthesis. No manual terminal juggling.
Why delegate mode? Pressing Shift+Tab restricts the lead to coordination-only tools: spawning teammates, messaging, managing tasks, and approving plans. This prevents the lead from implementing tasks itself and ensures work is distributed to teammates.
Why one session? The lead automatically spawns each teammate as a separate Claude Code instance. Each teammate gets its own context window, loads project context (CLAUDE.md, MCP servers, skills), and works independently. You interact with the lead and it coordinates everything — use Shift+Up/Down to message individual teammates if needed.
Ensure agent teams are enabled in .claude/settings.json or ~/.claude/settings.json:
```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```
All artifacts written to ./docs/ideation/{project-name}/:
```
contract.md                  # Lean contract (problem, goals, success, scope)
prd-phase-1.md               # Phase 1 requirements (only if PRDs chosen)
...
spec-phase-1.md              # Phase 1 implementation spec (always full)
spec-template-{pattern}.md   # Shared template for repeatable phases (if applicable)
spec-phase-{n}.md            # Per-phase delta referencing template (if repeatable)
...
```
- references/contract-template.md - Template for lean contract document
- references/prd-template.md - Template for phased PRD documents
- references/spec-template.md - Template for implementation specs
- references/confidence-rubric.md - Detailed scoring criteria for confidence assessment and spec feedback quality
- references/feedback-loop-guide.md - Component-type mapping and design criteria for spec feedback loops
- references/workflow-example.md - End-to-end workflow walkthrough

Completed artifact examples for reference when generating output:
- examples/contract-example.md - A filled-in contract for a bookmark feature
- examples/prd-example.md - A filled-in PRD for the same feature (Phase 1)
- examples/spec-example.md - A filled-in spec for the same feature

When generating artifacts, reference these examples for tone, structure, and level of detail.
Always use the AskUserQuestion tool for clarifications and approvals. Never ask questions in plain text.