Explore implementation strategies for business requirements. Interactive brainstorming that presents multiple approaches and trade-offs, and builds a high-level implementation picture before committing to detailed specs.
Install: npx claudepluginhub nexus-a1/claude-skills --plugin nexus
Transform brief business requirements into clear implementation strategy through interactive exploration.
This skill sits in the early thinking phase: after you receive business requirements but before you commit to detailed technical specs.
Use this when: You have a business request and want to think through implementation options.
Don't use this when: You already know the approach and need detailed specs (use /create-requirements instead).
Current directory: !pwd
Git status: !git status --short 2>/dev/null || echo "Not a git repository"
Arguments: $ARGUMENTS
Read .claude/configuration.yml for project-specific paths. If the file doesn't exist or a key is missing, use defaults:
| Config Key | Default | Purpose |
|---|---|---|
| storage.artifacts.work | location: local, subdir: work | Work sessions |
# Source resolve-config: marketplace installs get ${CLAUDE_PLUGIN_ROOT} substituted
# inline before bash runs; ./install.sh users fall back to ~/.claude. If neither
# path resolves, fail loudly rather than letting resolve_artifact be undefined.
if [ -f "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh" ]; then
source "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh"
elif [ -f "$HOME/.claude/shared/resolve-config.sh" ]; then
source "$HOME/.claude/shared/resolve-config.sh"
else
echo "ERROR: resolve-config.sh not found. Install via marketplace or run ./install.sh" >&2
exit 1
fi
WORK_DIR=$(resolve_artifact work work)
Important: All path references in this skill MUST use $WORK_DIR. Never use hardcoded .claude/work/ paths anywhere in this workflow.
If $ARGUMENTS begins with promote, handle the promote flow instead of normal brainstorm.
Syntax: /brainstorm promote {slug} [{ticket-id}]
Behavior:
Parse $ARGUMENTS: extract {slug} (word after "promote") and optional {ticket-id} (next word).
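A minimal parsing sketch for this step (the ARGUMENTS value shown is a hypothetical example, not a required format):

```shell
# Sketch only: parse "promote {slug} [{ticket-id}]" from $ARGUMENTS.
ARGUMENTS="promote user-data-export PROJ-123"  # illustrative input

set -- $ARGUMENTS   # word-split into positional parameters
shift               # drop the leading "promote" keyword
SLUG="${1:-}"       # required: brainstorm slug
TICKET_ID="${2:-}"  # optional: ticket identifier, may be empty
```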
Verify $WORK_DIR/{slug}/state.json exists. If not found, output error and stop:
Error: Brainstorm session not found: {slug}
Available sessions:
{list from manifest.json or $WORK_DIR/ dirs}
Read state.json. Warn if "status" is already "promoted":
Warning: This brainstorm is already promoted to {promoted_to}.
Re-promote? [y/n]
Use AskUserQuestion. On n: stop.
If {ticket-id} was not provided, ask:
AskUserQuestion: Enter the ticket/identifier for this work (e.g., PROJ-123 or leave blank to use the brainstorm slug as draft):
If blank, use {slug} as the identifier.
Update brainstorm state ($WORK_DIR/{slug}/state.json):
{
"status": "promoted",
"promoted_to": "{ticket-id}",
"updated_at": "{ISO_TIMESTAMP}"
}
Merge these fields into the existing JSON (preserve all other fields).
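One way to sketch the merge with jq, run here against a throwaway state file (the temp directory, slug, and ticket id are illustrative placeholders):

```shell
# Sketch: merge the promote fields into state.json without dropping other keys.
WORK_DIR="$(mktemp -d)"; SLUG="user-data-export"; TICKET_ID="PROJ-123"
mkdir -p "$WORK_DIR/$SLUG"
printf '{"identifier":"%s","status":"in_progress","type":"brainstorm"}\n' "$SLUG" \
  > "$WORK_DIR/$SLUG/state.json"

STATE="$WORK_DIR/$SLUG/state.json"
TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# '. + {...}' overlays the new fields onto the existing object, preserving the rest
jq --arg t "$TICKET_ID" --arg ts "$TS" \
   '. + {status: "promoted", promoted_to: $t, updated_at: $ts}' \
   "$STATE" > "$STATE.tmp" && mv "$STATE.tmp" "$STATE"
```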
Update manifest ($WORK_DIR/manifest.json) — find the entry where identifier == {slug}, update status to "promoted" and add promoted_to.
Announce:
Brainstorm '{slug}' promoted → {ticket-id}
Launching /create-requirements with brainstorm context pre-loaded...
Continue directly into Stage 1 of the create-requirements workflow with:
- the --from-brainstorm {slug} flag effectively active
- {identifier} pre-set to {ticket-id} (skip the identifier prompt in Stage 1.1)
To achieve this, output the following instruction and stop — do not run the full brainstorm phases:
Run: /create-requirements --from-brainstorm {slug} {ticket-id}
Then stop. The user will run this, or you may invoke the create-requirements workflow inline if the tool allows it.
If $ARGUMENTS begins with --light, strip the flag and enable lightweight mode for the rest of the workflow.
This reduces cost for exploratory brainstorming where deep reasoning is less critical than in requirements or implementation.
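A small sketch of the flag handling (the ARGUMENTS value is illustrative):

```shell
# Sketch: detect and strip a leading --light flag from the raw arguments.
ARGUMENTS="--light add csv export"  # illustrative input
LIGHT_MODE=false
case "$ARGUMENTS" in
  "--light "*) LIGHT_MODE=true; ARGUMENTS="${ARGUMENTS#--light }" ;;  # flag plus topic
  "--light")   LIGHT_MODE=true; ARGUMENTS="" ;;                       # flag alone
esac
```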
Before starting, check whether an active brainstorm session already exists for this topic.
# Check manifest for active brainstorm sessions
if [[ -f "${WORK_DIR}/manifest.json" ]]; then
jq -r '.items[] | select(.type == "brainstorm" and .status != "completed") | "\(.identifier)\t\(.title)\t\(.current_phase)\t\(.updated_at)"' "${WORK_DIR}/manifest.json"
fi
If active sessions exist AND an argument was provided:
Check whether any session identifier or title fuzzy-matches $ARGUMENTS. If a match is found, present it:
Found active brainstorm: {title} ({identifier})
Status: {current_phase} — last updated {updated_at}
Resume this session? [y] Yes [n] No, start fresh [s] Show status
Use AskUserQuestion. On yes: load state from $WORK_DIR/{identifier}/state.json and resume from the last incomplete phase. On show status: display phase completion table, then ask again. On no: continue to Phase 1.
If no argument and active sessions exist: Skip this check — Phase 1 will ask for the feature description and can detect duplicates at that point.
If no active sessions: Proceed directly to Phase 1.
Take the feature description from $ARGUMENTS. If none was provided, ask:
What feature or change do you want to brainstorm?
Provide a brief description (1-3 sentences):
- What problem are you solving?
- What does the business want?
- Any key constraints?
Examples:
- "Users need to export their data to Excel"
- "Integrate SSO with Azure AD for authentication"
- "Add webhook notifications when orders complete"
Store as {feature_description}.
Use AskUserQuestion to understand context:
Questions:
1. What's the business driver?
- New customer requirement
- Compliance/regulatory need
- Performance issue
- User experience improvement
- Technical debt reduction
2. What's the urgency?
- Critical (blocking customers)
- High (planned for next sprint)
- Medium (on roadmap)
- Low (nice to have)
3. Any known constraints?
- Must use specific technology
- Budget limitations
- Timeline restrictions
- Integration requirements
mkdir -p $WORK_DIR/{slug}/context
Where {slug} is the kebab-case version of the feature name (e.g., "user-data-export").
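A possible slugification sketch (the feature text is one of the examples above):

```shell
# Sketch: derive a kebab-case slug from a feature description.
FEATURE="Users need to export their data to Excel"  # illustrative input
SLUG="$(printf '%s' "$FEATURE" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')"   # trim any leading/trailing dash
```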
Initialize state file $WORK_DIR/{slug}/state.json:
{
"schema_version": 1,
"identifier": "{slug}",
"type": "brainstorm",
"title": "{feature_description_summary}",
"status": "in_progress",
"created_at": "{ISO_TIMESTAMP}",
"updated_at": "{ISO_TIMESTAMP}",
"selected_approach": null,
"phases": {
"exploration": {"status": "pending"},
"approaches": {"status": "pending"},
"refinement": {"status": "pending"},
"quality_guard": {"status": "pending"},
"work_breakdown": {"status": "pending"}
},
"outputs": {
"exploration": "context/exploration.md",
"business_context": "context/business-context.md",
"approaches": "context/approaches.md",
"architecture_validation": "context/architecture-validation.md",
"implementation_picture": "implementation-picture.md",
"work_breakdown": "work-breakdown.md",
"summary": "brainstorm-summary.md"
},
"updates": []
}
Register active session for the optional auto-context.sh PostToolUse hook (no-op when CLAUDE_SESSION_ID is unset):
if [ -n "${CLAUDE_SESSION_ID:-}" ] && command -v jq >/dev/null 2>&1; then
mkdir -p "$WORK_DIR"
touch "$WORK_DIR/.active-sessions.lock"
(
flock -x -w 2 200 || exit 0
[ -s "$WORK_DIR/.active-sessions" ] || echo '{}' > "$WORK_DIR/.active-sessions"
jq --arg s "$CLAUDE_SESSION_ID" --arg w "{slug}" \
'. + {($s): $w}' "$WORK_DIR/.active-sessions" \
> "$WORK_DIR/.active-sessions.tmp.$$" \
&& mv "$WORK_DIR/.active-sessions.tmp.$$" "$WORK_DIR/.active-sessions" \
|| rm -f "$WORK_DIR/.active-sessions.tmp.$$"
) 200>"$WORK_DIR/.active-sessions.lock"
fi
Goal: Understand what exists and what's needed.
Use Task tool with subagent_type: "Explore". Read references/agent-prompts.md (Phase 2.1 section) for the prompt template.
Save output to $WORK_DIR/{slug}/context/exploration.md. Update state: phases.exploration = completed.
Use Task tool with subagent_type: "business-analyst". Read references/agent-prompts.md (Phase 2.2 section) for the prompt template.
Save output to $WORK_DIR/{slug}/context/business-context.md. Update state: phases.exploration = completed (covers both exploration agents).
Before launching parallel agents, define non-overlapping scopes. Each agent should own one domain of knowledge with no shared territory. Split by system/component boundary, not by feature keyword. Example:
Do NOT include supporting context from one agent's domain in the other's prompt.
Run both agents in parallel.
Goal: Present 2-3 different ways to implement this feature.
Use Task tool with subagent_type: "Plan". Read references/agent-prompts.md (Phase 3.1 section) for the prompt template, including the architectural-distinction and trade-off rules.
Save output to $WORK_DIR/{slug}/context/approaches.md. Update state: phases.approaches = in_progress.
Run in PARALLEL with 3.1 — the architect works from exploration context, not from Plan's approaches.
Use Task tool with subagent_type: "architect":
Prompt: Analyze the project's architectural constraints and patterns relevant to this feature.
Feature: {feature_description}
Codebase patterns: {from exploration.md}
Assess:
1. Architecture style in use (layered, hexagonal, modular, MVC) and its constraints
2. Established patterns that any implementation MUST follow
3. Integration points and their architectural boundaries
4. Known technical debt or fragile areas to avoid
5. Scalability constraints relevant to this feature
Provide:
- A list of architectural constraints any approach must satisfy
- Patterns that must be followed (with file path examples)
- Risk areas to avoid
- A feasibility checklist for evaluating approaches
Save output to $WORK_DIR/{slug}/context/architecture-validation.md.
IMPORTANT: Wait for both 3.1 (Plan agent) and 3.1b (architect) to complete before proceeding. After both complete: Annotate each approach from 3.1 with architect constraints from 3.1b. Flag any approach that violates identified constraints. Add feasibility rating: Recommended / Feasible / Risky / Not Recommended.
Display the approaches using the format in references/display-templates.md (Phase 3.2 section).
Use AskUserQuestion:
Which approach interests you most?
1. {Approach 1 name}
2. {Approach 2 name}
3. {Approach 3 name}
4. Combination of approaches
5. None - need different options
Or provide specific feedback on what you like/dislike.
Update state with selected approach: "selected_approach": "{approach_name}", "phases.approaches": "completed".
Goal: Refine the chosen approach based on feedback.
Based on user selection, use Task tool with subagent_type: "Plan". Read references/agent-prompts.md (Phase 4.1 section) for the refinement prompt covering component breakdown, data flow, database changes, API design, security, and testing strategy.
Save to $WORK_DIR/{slug}/implementation-picture.md. Update state: phases.refinement = completed.
Use Task tool with subagent_type: "architect". Read references/agent-prompts.md (Phase 4.2 section) for the architecture-validation prompt.
Run architect AFTER Plan refinement completes — architect needs the refined implementation picture from 4.1 to validate effectively.
Show the detailed implementation picture using the format in references/display-templates.md (Phase 4.3 section).
Use AskUserQuestion:
Is this implementation picture clear?
1. Yes, I understand the approach
2. Need more detail on specific area (tell me which)
3. Want to explore a different approach
4. Ready to outline work items
If user wants more detail, repeat refinement on specific areas.
Goal: Independently challenge the implementation picture before committing to the work breakdown.
Use Task tool with subagent_type: "quality-guard". Read references/agent-prompts.md (Phase 4.5 section) for the challenge prompt, which lists the context files to read and the 6 review questions. The verdict is APPROVED / CONDITIONAL / REJECTED.
Process the verdict:
- APPROVED: set phases.quality_guard = completed/approved. Proceed to Phase 5.
- CONDITIONAL: set phases.quality_guard = completed/conditional. Proceed to Phase 5.
Save quality-guard output to $WORK_DIR/{slug}/context/quality-guard.md. Update state: phases.quality_guard = completed.
Goal: Outline tickets/tasks needed.
Based on the implementation picture, create logical work items:
## Work Items
### 1. Database Schema
**Type:** Database
**Description:** Create migrations for new entities
**Files affected:**
- migrations/Version{timestamp}.php
- Entity/{Entity1}.php
- Entity/{Entity2}.php
**Dependencies:** None
**Estimate:** Small (< 1 day)
---
### 2. Service Layer
**Type:** Backend
**Description:** Implement core business logic
**Files affected:**
- Service/{Feature}/{ServiceName}.php
- Tests/Service/{Feature}/{ServiceName}Test.php
**Dependencies:** #1 (Database Schema)
**Estimate:** Medium (1-2 days)
---
### 3. API Endpoints
**Type:** Backend
**Description:** Create REST endpoints
**Files affected:**
- Controller/{Feature}/{HTTPMethod}Controller.php
- Model/{Feature}/{HTTPMethod}Request.php
- Model/{Feature}/{HTTPMethod}Response.php
**Dependencies:** #2 (Service Layer)
**Estimate:** Medium (1-2 days)
---
{Additional work items...}
Save to $WORK_DIR/{slug}/work-breakdown.md. Update state: phases.work_breakdown = completed.
Generate ASCII diagram showing relationships:
Work Item Flow:
[1] Database Schema
↓
[2] Service Layer
↓
[3] API Endpoints
↓
[4] Frontend (if applicable)
Parallel work:
- [5] External API Integration (independent)
- [6] Documentation (can start anytime)
Estimated total: {X} days/weeks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Work Breakdown: {feature}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Items: {N}
Estimated Effort: {X} days/weeks
{work_items_summary}
{visual_flow_diagram}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Files Created:
$WORK_DIR/{slug}/
├── state.json
├── context/
│ ├── exploration.md
│ ├── business-context.md
│ ├── approaches.md
│ ├── architecture-validation.md
│ └── quality-guard.md
├── implementation-picture.md
├── work-breakdown.md
└── brainstorm-summary.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Next Steps:
1. Review the work breakdown
2. Create detailed requirements: /create-requirements --from-brainstorm {slug}
3. Or break into epic: /epic "{feature}"
4. Or start implementing first item directly
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
After saving all brainstorm outputs, update the brainstorms manifest.
Read or initialize ${WORK_DIR}/manifest.json (see docs/manifest-system.md):
MANIFEST="${WORK_DIR}/manifest.json"
# Initialize an empty envelope if missing (full shape per docs/manifest-system.md)
if [[ ! -f "$MANIFEST" ]]; then
  printf '{"artifact_type":"work","items":[],"total_items":0}\n' > "$MANIFEST"
fi
Upsert item using identifier (the slug) as unique key:
{
"identifier": "{slug}",
"title": "{feature_description_summary}",
"type": "brainstorm",
"status": "completed",
"created_at": "{ISO_TIMESTAMP}",
"updated_at": "{ISO_TIMESTAMP}",
"current_phase": "completed",
"progress": "Brainstorm complete",
"branch": null,
"tags": [],
"path": "{slug}/"
}
Update last_updated and total_items in the envelope.
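A sketch of the upsert with jq, run against a throwaway manifest (the envelope shape shown is a minimal assumption; see docs/manifest-system.md for the authoritative schema, and the item fields here are trimmed for brevity):

```shell
# Sketch: upsert a manifest item keyed by identifier, then refresh envelope fields.
WORK_DIR="$(mktemp -d)"; SLUG="user-data-export"  # illustrative placeholders
MANIFEST="$WORK_DIR/manifest.json"
printf '{"artifact_type":"work","items":[],"total_items":0}\n' > "$MANIFEST"

TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# Remove any existing item with this identifier, append the new one,
# then recompute total_items and last_updated in the envelope.
jq --arg id "$SLUG" --arg ts "$TS" '
  .items = ([.items[] | select(.identifier != $id)]
            + [{identifier: $id, type: "brainstorm", status: "completed",
                updated_at: $ts, path: ($id + "/")}])
  | .total_items = (.items | length)
  | .last_updated = $ts
' "$MANIFEST" > "$MANIFEST.tmp" && mv "$MANIFEST.tmp" "$MANIFEST"
```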
Write a comprehensive summary document:
$WORK_DIR/{slug}/brainstorm-summary.md:
# Brainstorm Summary: {feature}
**Date:** {timestamp}
**Status:** Completed
## Business Context
{summary_from_phase_2}
## Approaches Considered
### Approach 1: {name}
{brief_description}
**Outcome:** {Selected | Rejected - why}
### Approach 2: {name}
{brief_description}
**Outcome:** {Selected | Rejected - why}
## Selected Approach: {name}
### Why This Approach?
{rationale}
### Implementation Picture
**Components:**
{list}
**Data Flow:**
{steps}
**Database:**
{changes}
**APIs:**
{endpoints}
### Work Breakdown
{work_items_summary}
**Total Effort:** {estimate}
## Risks & Considerations
- Risk 1: {description}
- Mitigation: {how to address}
- Risk 2: {description}
- Mitigation: {how to address}
## Next Steps
1. {action 1}
2. {action 2}
3. {action 3}
## Decision Log
- **{Date}:** Selected {approach_name} because {reason}
- **{Date}:** Decided to {decision} based on {rationale}
## References
- Codebase examples: {file_paths}
- Related features: {links}
- External docs: {urls if any}
Update state: "status": "completed", "updated_at": "{ISO_TIMESTAMP}".
- /create-requirements
- /epic for large efforts

# Clear auto-context sentinel on completion
if [ -n "${CLAUDE_SESSION_ID:-}" ] \
&& [ -f "$WORK_DIR/.active-sessions" ] \
&& command -v jq >/dev/null 2>&1; then
(
flock -x -w 2 200 || exit 0
jq --arg s "$CLAUDE_SESSION_ID" 'del(.[$s])' "$WORK_DIR/.active-sessions" \
> "$WORK_DIR/.active-sessions.tmp.$$" \
&& mv "$WORK_DIR/.active-sessions.tmp.$$" "$WORK_DIR/.active-sessions" \
|| rm -f "$WORK_DIR/.active-sessions.tmp.$$"
) 200>"$WORK_DIR/.active-sessions.lock"
fi
Read references/error-handling.md for error-scenario message templates (no feature description, feature too vague, all approaches rejected).
$WORK_DIR/ for reference only

Business Request
↓
/brainstorm ← [You are here]
↓
Decision: What next?
↓
├─→ /create-requirements (single feature)
├─→ /epic (large initiative, multiple tickets)
├─→ /create-proposal (formal proposal needed)
└─→ Direct implementation (simple, well-understood)