Create an implementation spec for a feature or task
Creates implementation specifications by researching codebase patterns, interviewing for decisions, and writing structured specs.
`/plugin marketplace add vieko/sessions`
`/plugin install vieko-sessions@vieko/sessions`

A hybrid approach using subagents: research in isolated context, interview in main context, write in isolated context.
Run `git rev-parse --show-toplevel` to locate the repository root.
Read `<git-root>/.bonfire/config.json` if it exists.
Specs location: read `specsLocation` from config. Default to `.bonfire/specs/` if not set.
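Sketched in Python, the config resolution above might look like this (the helper name and fall-through logic are illustrative assumptions, not part of the command):

```python
import json
from pathlib import Path

def resolve_specs_location(git_root: str) -> str:
    """Return the specs directory from .bonfire/config.json, or the default."""
    config_path = Path(git_root) / ".bonfire" / "config.json"
    if config_path.exists():
        config = json.loads(config_path.read_text())
        # Fall through to the default when the key is absent or empty
        location = config.get("specsLocation")
        if location:
            return location
    return ".bonfire/specs/"
```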
Get the topic from `$ARGUMENTS`, or ask the user if it is unclear.
Check for existing context:
- `<git-root>/.bonfire/index.md` for project state
- `SPEC.md` or `spec.md` in git root (user's spec template)

Progress: Tell the user "Researching codebase for patterns and constraints..."
Use the Task tool to invoke the codebase-explorer subagent for research.
Provide a research directive with these questions:
Research the codebase for implementing: [TOPIC]
Find:
1. **Patterns**: How similar features are implemented, existing abstractions to reuse, naming conventions
2. **Constraints**: Dependencies, API boundaries, performance considerations
3. **Potential Conflicts**: Files that need changes, intersections with existing code, migration concerns
Return structured findings only - no raw file contents.
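As a sketch, the directive is just the template above filled with the current topic (Python; `research_directive` is a hypothetical helper shown for clarity, not a defined tool):

```python
def research_directive(topic: str) -> str:
    """Fill the research directive template with the current topic."""
    return (
        f"Research the codebase for implementing: {topic}\n\n"
        "Find:\n"
        "1. **Patterns**: How similar features are implemented, "
        "existing abstractions to reuse, naming conventions\n"
        "2. **Constraints**: Dependencies, API boundaries, performance considerations\n"
        "3. **Potential Conflicts**: Files that need changes, intersections "
        "with existing code, migration concerns\n\n"
        "Return structured findings only - no raw file contents.\n"
    )
```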
Wait for the subagent to return findings before proceeding.
The subagent runs in an isolated context (Haiku model, fast), preserving the main context for the interview.
After the subagent returns, validate the response:
Valid response contains at least one of:
- `## Patterns Found` with content
- `## Key Files` with entries
- `## Constraints Discovered` with items

On valid response: proceed to Step 5.
On invalid/empty response:
- `Glob("**/*.{ts,js,py,go}")` to find code files
- `Grep("pattern-keyword")` to search for relevant patterns

On subagent failure (timeout, error):
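The validity check above can be sketched as follows (Python; the function name and the non-empty-body rule are illustrative assumptions):

```python
def is_valid_research(response: str) -> bool:
    """Check that subagent findings contain at least one expected section.

    A section counts only if non-empty text follows its heading.
    """
    sections = ["## Patterns Found", "## Key Files", "## Constraints Discovered"]
    for heading in sections:
        start = response.find(heading)
        if start == -1:
            continue
        body = response[start + len(heading):]
        # Only consider content up to the next section heading, if any
        next_idx = body.find("\n## ")
        if next_idx != -1:
            body = body[:next_idx]
        if body.strip():
            return True
    return False
```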
For very large codebases, exploration may need multiple passes. The Task tool returns an agentId you can use to resume.
When to offer resume:
To resume exploration:
`resume` parameter:
Example multi-pass scenario:
Progress: Tell the user "Starting interview (3 rounds: core decisions, edge cases, testing & scope)..."
Using the research findings, interview the user with informed questions via AskUserQuestion.
Progress: "Round 1/3: Core decisions..."
Ask about fundamental approach based on patterns found:
Example questions (adapt based on actual findings):
- "We found [Pattern A] in `services/` and [Pattern B] in `handlers/`. Which pattern should this feature follow?"

Progress: "Round 2/3: Edge cases and tradeoffs..."
Based on Round 1 answers and research, ask about:
Example questions:
Progress: "Round 3/3: Testing and scope (final round)..."
Always ask about testing and scope, even if the user seems ready to proceed:
Testing (must ask one):
Scope (must ask one):
Example combined question:
Do not skip Round 3. These questions take 30 seconds and prevent spec gaps.
Progress: Tell the user "Writing implementation spec..."
Use the Task tool to invoke the spec-writer subagent.
Provide the prompt in this exact format:
## Research Findings
<paste structured findings from Step 4>
## Interview Q&A
### Core Decisions
**Q**: <question from Round 1>
**A**: <user's answer>
### Edge Cases & Tradeoffs
**Q**: <question from Round 2>
**A**: <user's answer>
### Scope & Boundaries
**Q**: <question from Round 3>
**A**: <user's answer>
## Spec Metadata
- **Topic**: <topic name>
- **Issue**: <issue ID or N/A>
- **Output Path**: <git-root>/<specsLocation>/<filename>.md
- **Date**: <YYYY-MM-DD>
The subagent will write the spec file directly to the Output Path.
Naming convention: `<issue-id>-<topic>.md` or `<topic>.md`
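The naming convention can be expressed as a small sketch (Python; `slugify` is a minimal illustrative helper, not part of the command):

```python
import re
from typing import Optional

def spec_filename(topic: str, issue_id: Optional[str] = None) -> str:
    """Build a spec filename following the naming convention above."""
    def slugify(text: str) -> str:
        # Lowercase, collapse non-alphanumeric runs to hyphens, trim edges
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    if issue_id:
        return f"{slugify(issue_id)}-{slugify(topic)}.md"
    return f"{slugify(topic)}.md"
```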
After the spec-writer subagent returns, verify the spec is complete.
Key sections to check (lenient - only these 4):
- `## Overview`
- `## Decisions`
- `## Implementation Steps`
- `## Edge Cases`

Verification steps:
Read the spec file at `<git-root>/<specsLocation>/<filename>.md`
If file missing or empty:
If file exists, check for key sections:
If all 4 sections present:
If 1-3 sections missing (partial write):
If all sections missing but has content:
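The verification branches above can be summarized in a small sketch (Python; the status labels are assumptions chosen for illustration):

```python
from pathlib import Path
from typing import List, Tuple

def verify_spec(spec_path: str) -> Tuple[str, List[str]]:
    """Classify a written spec per the checks above.

    Returns (status, missing_sections), where status is one of
    "missing", "complete", "partial", or "unstructured".
    """
    required = ["## Overview", "## Decisions", "## Implementation Steps", "## Edge Cases"]
    path = Path(spec_path)
    if not path.exists() or not path.read_text().strip():
        return ("missing", required)
    text = path.read_text()
    missing = [s for s in required if s not in text]
    if not missing:
        return ("complete", [])
    if len(missing) < len(required):
        return ("partial", missing)      # partial write: 1-3 sections absent
    return ("unstructured", missing)     # has content but no expected sections
```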
On subagent failure (timeout, error):
Add a reference to the spec in <git-root>/.bonfire/index.md under Current State.
Read the generated spec and present a summary. Ask if the user wants to:
Good questions:
Bad questions:
Examples of good informed questions:
- "`UserService` uses repository pattern but `OrderService` uses direct DB access. Which approach?"
- "The auth middleware validates JWT but doesn't check permissions. Should this feature add permission checks or assume auth is enough?"
- "There's a `BaseController` with shared logic. Extend it or keep this feature standalone?"

If the generated spec has inconsistent formatting, configure a PostToolUse hook to auto-format files after writes. See PostToolUse Formatter Hook.
Specs are temporary artifacts - they exist to guide implementation:
When a spec is fully implemented:
docs/