From nexus
Decompose large initiatives into dependency-mapped, wave-sequenced tickets with per-ticket requirements. Use when a feature is too large for a single /create-requirements run — typically 5+ tickets with complex interdependencies.
Install: `npx claudepluginhub nexus-a1/claude-skills --plugin nexus`
Break down large initiatives (epics) into sequential, implementable tickets with proper dependencies.
Current directory: !pwd
Git status: !git status --short 2>/dev/null || echo "Not a git repository"
Arguments: $ARGUMENTS
Read .claude/configuration.yml for project-specific paths. If the file doesn't exist or a key is missing, use defaults:
| Config Key | Default | Purpose |
|---|---|---|
| storage.artifacts.work | location: local, subdir: work | Work state and context |
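The default above corresponds to a configuration.yml like the following sketch (the nesting is inferred from the dotted key and may differ from your project's actual schema):

```yaml
storage:
  artifacts:
    work:
      location: local   # store under the project-local work directory
      subdir: work      # subdirectory name resolved by resolve_artifact
```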
```bash
# Source resolve-config: marketplace installs get ${CLAUDE_PLUGIN_ROOT} substituted
# inline before bash runs; ./install.sh users fall back to ~/.claude. If neither
# path resolves, fail loudly rather than letting resolve_artifact be undefined.
if [ -f "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh" ]; then
  source "${CLAUDE_PLUGIN_ROOT}/shared/resolve-config.sh"
elif [ -f "$HOME/.claude/shared/resolve-config.sh" ]; then
  source "$HOME/.claude/shared/resolve-config.sh"
else
  echo "ERROR: resolve-config.sh not found. Install via marketplace or run ./install.sh" >&2
  exit 1
fi
```
WORK_DIR=$(resolve_artifact work work)
Important: all path references in this workflow MUST use $WORK_DIR. Never use hardcoded .claude/work/ paths.
Transform a large initiative into a structured set of implementable tickets with dependencies.
IMPORTANT: Complete all steps using parallel tool calls where possible.
From $ARGUMENTS:
If empty or insufficient:
Error: Epic description required.
Usage:
/epic "Implement user authentication system"
/epic "Add payment processing with Stripe"
/epic "Migrate to microservices architecture"
Extract:
- `{epic-ticket}`: the parent epic ticket ID (e.g., EPIC-42, PROJ-100) — ask via AskUserQuestion if not provided. Must match `[A-Z]+-[0-9]+`. Store as `{epic-ticket}`.
- `{slug}`: a kebab-case descriptor matching `[a-z0-9-]+`. Confirm with the user before proceeding.
- `{epic-id}` = `{epic-ticket}-{slug}` (per the Work Directory Naming Convention in CLAUDE.md). This is the full composite identifier used throughout the rest of this skill for paths, manifest entries, and resume handles.
- Per-ticket naming: each generated ticket has `{ticket-id}` = `{ticket-number}-{ticket-slug}`, where `{ticket-number}` is the Jira/issue identifier the user will assign (or a placeholder like `{epic-ticket}-001` until real ticket IDs are created) and `{ticket-slug}` is a kebab-case descriptor derived from the ticket's title. The composed `{ticket-id}` is the value used in paths and references throughout this skill.
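The naming rules can be sketched in shell; the slug pipeline and sample values here are illustrative, not part of the skill itself:

```shell
# Sketch: validate {epic-ticket} and compose {epic-id}.
epic_ticket="PROJ-100"            # illustrative value
description="User Auth System"    # illustrative value

if ! printf '%s' "$epic_ticket" | grep -Eq '^[A-Z]+-[0-9]+$'; then
  echo "ERROR: epic ticket must match [A-Z]+-[0-9]+ (e.g., PROJ-100)" >&2
  exit 1
fi

# Kebab-case slug: lowercase, non-alphanumeric runs to hyphens, trim edges.
slug=$(printf '%s' "$description" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')

epic_id="${epic_ticket}-${slug}"
echo "$epic_id"   # PROJ-100-user-auth-system
```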
Read references/agent-prompts.md for the business-analyst and architect prompt templates. Fill in {description} and, for architect, {from business-analyst}.
Run both agents in parallel.
Goal: Based on Phase 2 findings, run specialist agents to gather deeper context for areas that will significantly impact the epic breakdown.
IMPORTANT: This phase is conditional. Only run agents when the scope warrants it. Skip entirely if the initiative is straightforward and doesn't touch databases, external APIs, cloud infrastructure, or security-sensitive areas.
Review the combined output from business-analyst and architect in Phase 2. Check for signals that warrant specialist agents:
| Signal in Phase 2 Findings | Agent | Purpose |
|---|---|---|
| New tables, schema modifications, migrations, entity changes | data-modeler | Analyze schema impacts, migration complexity, relationship constraints |
| Third-party API integrations, webhooks, external service calls | integration-analyst | Map API contracts, auth requirements, error handling patterns |
| New AWS/cloud resources, infrastructure changes, IaC modifications | aws-architect | Assess infrastructure needs, IAM requirements, cost implications |
| Authentication, authorization, PII handling, payments, compliance | security-requirements | Identify security constraints, compliance needs, audit requirements |
If no signals are detected: Skip to Phase 3 immediately.
If one or more signals are detected: Proceed to 2.5.2.
Execute all applicable agents in a single message with multiple Task tool calls. Only run the agents whose Phase 2 signals matched.
Read references/agent-prompts.md (Phase 2.5.2 section) for the full prompt templates for data-modeler, integration-analyst, aws-architect, and security-requirements. Fill in {description}, {from business-analyst}, and {from architect} in each applicable prompt.
Feed specialist agent outputs into the subsequent phases.
Based on analysis from Phase 2 and specialist findings from Phase 2.5 (if any), break the epic into tickets:
Ticket sizing rules:
Ticket structure:
{ticket-id}: {Title} # e.g., PROJ-101-db-schema
Description: {What this ticket accomplishes}
Components: {Files/areas affected}
Dependencies: {Which other tickets must complete first}
Estimate: Small | Medium | Large
Type: Database | Backend | Frontend | Infrastructure | Integration
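A filled-in instance of the structure above (ticket names and values are illustrative):

```
PROJ-101-db-schema: Add credentials schema
Description: Create users and credentials tables with migrations
Components: db/migrations, persistence layer
Dependencies: None
Estimate: Small
Type: Database
```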
Common decomposition patterns:
For full-stack features:
For migrations:
For integrations:
Analyze ticket dependencies:
Dependency Analysis (use each ticket's `{ticket-id}`; example uses `PROJ-1NN` placeholders):
PROJ-101-db-schema: Database schema
Blocks: PROJ-102-entity-layer, PROJ-104-...
Blocked by: None
PROJ-102-entity-layer: Entity layer
Blocks: PROJ-105-..., PROJ-106-...
Blocked by: PROJ-101-db-schema
PROJ-103-external-api-client: External API client
Blocks: PROJ-105-...
Blocked by: None
(Can run in parallel with PROJ-101, PROJ-102)
...
Implementation Waves (use full {ticket-id} — same identifiers used in Blocks/Blocked-by above):
Wave 1 (parallel): PROJ-101-db-schema, PROJ-103-external-api-client
Wave 2: PROJ-102-entity-layer
Wave 3: PROJ-104-..., PROJ-105-...
Wave 4: PROJ-106-...
Identify opportunities for parallel work.
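One cheap sanity check of the Blocks/Blocked-by graph before assigning waves is coreutils `tsort`: it prints one valid sequential order and exits non-zero on a cycle (wave grouping still has to be derived separately). Ticket IDs below are illustrative placeholders:

```shell
# Each line is "<blocker> <blocked>"; tsort fails if the graph has a cycle.
tsort <<'EOF'
PROJ-101-db-schema PROJ-102-entity-layer
PROJ-102-entity-layer PROJ-105-service-api
PROJ-103-external-api-client PROJ-105-service-api
EOF
```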
For each ticket, create a lightweight spec.md — Spec-Driven Development at the ticket level. The epic-level EPIC_PLAN.md plays the role of shared plan.md; each ticket emits only its product-facing spec. The full triad (plan + tasks) materializes when /implement picks up the ticket and derives them from the spec + EPIC_PLAN.md context.
Layer boundary — strict at ticket level too: spec.md contains NO file paths, class names, or library choices. HOW lives in the parent EPIC_PLAN.md and per-ticket plan.md (derived later by /implement).
Use the Task tool with context-builder agent to gather context for each ticket area (used to populate the ticket's context/ directory, not to put HOW into the spec).
For each ticket, generate:
# {ticket-id}: {Title} <!-- e.g., PROJ-101-db-schema -->
> **Layer: SPEC** — WHAT & WHY for this ticket. Part of epic `{epic-id}`.
## Summary
{One paragraph: what this ticket delivers and why it matters in the context of the epic.}
## User Stories
- **US-{ticket-number}.1**: As a {role}, I want {capability}, so that {outcome}.
- **US-{ticket-number}.2**: ...
## Acceptance Criteria
Stable IDs scoped to the ticket: `AC-{ticket-number}.{n}` (so two tickets in the same epic don't collide).
- **AC-{ticket-number}.1**
- Given {precondition}
- When {action}
- Then {observable outcome}
- **AC-{ticket-number}.2**
- Given ...
- When ...
- Then ...
## Security & Compliance Criteria
(If `security-requirements` ran in Phase 2.5, include AC-SEC-{ticket-number}.{n} entries here. Otherwise omit.)
## Scope
**In scope:**
- {specific behavior delivered by this ticket}
**Out of scope:**
- {explicit exclusions — typically deferred to other epic tickets, link by {ticket-id}}
## Dependencies
- Blocked by: {ticket-id} (must complete first)
- Blocks: {ticket-id} (others waiting on this)
## Estimate
{Small: <1 day | Medium: 1-2 days | Large: 2-3 days}
## Epic Context
- Epic: `{epic-id}` — {epic title}
- Wave: {N}
- Shared technical context: see `../EPIC_PLAN.md`
Save each to: $WORK_DIR/{epic-id}/{ticket-id}/spec.md
Note: No plan.md or tasks.md is generated here. /implement bootstraps both from this spec + parent EPIC_PLAN.md when the ticket is picked up.
Goal: Validate the ticket breakdown before saving the epic structure.
Use Task tool with subagent_type: "quality-guard":
Prompt: Challenge this epic breakdown for '{epic_description}'.
Epic plan context (from Phase 2-4):
- Business analyst findings: {business_analyst_output}
- Architect validation: {architect_output}
- Tickets generated: {ticket_list_summary}
Review:
1. Is every ticket independently testable and deliverable? Flag any that are too broad or too vague.
2. Are the dependencies complete and correct? Are there hidden dependencies not captured?
3. Are the wave assignments valid — can Wave 1 tickets actually start in parallel?
4. Are there missing tickets (e.g., infrastructure setup, migration rollback, documentation)?
5. Is the scope appropriate — are any tickets doing too much (should be split) or too little (should be merged)?
6. Does the epic cover the full feature, or are there gaps between tickets?
7. **Spec-layer hygiene** — does each ticket's `spec.md` contain HOW-leakage (file paths, class names, library choices)? Those belong in `EPIC_PLAN.md` or in the per-ticket `plan.md` derived later by `/implement`, never in the ticket spec.
8. **AC scoping** — do AC IDs use the `AC-{ticket-number}.{n}` format so they don't collide across tickets in the same epic?
9. **AC coverage** — does every user story have at least one Given/When/Then AC? AC must be observable/testable.
Return: APPROVED / CONDITIONAL (list specific issues) / REJECTED (fundamental restructuring needed).
Process the verdict:
Create directory structure:
mkdir -p $WORK_DIR/{epic-id}
For each ticket:
mkdir -p $WORK_DIR/{epic-id}/{ticket-id}
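The two mkdir steps can be combined into one loop; a sketch with illustrative epic and ticket IDs:

```shell
# Sketch: create the epic directory plus one subdirectory per ticket.
WORK_DIR="${WORK_DIR:-$(mktemp -d)}"
epic_id="PROJ-100-user-auth-system"   # illustrative

for ticket_id in PROJ-101-db-schema PROJ-102-entity-layer; do
  mkdir -p "$WORK_DIR/$epic_id/$ticket_id"
done
```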
Save files:
- $WORK_DIR/{epic-id}/EPIC_PLAN.md - Shared technical plan (epic-level HOW context for all tickets)
- $WORK_DIR/{epic-id}/state.json - Epic tracking
- $WORK_DIR/{epic-id}/{ticket-id}/spec.md - Each ticket's product spec (WHAT/WHY only)

Register the active session for the optional auto-context.sh PostToolUse hook (no-op when CLAUDE_SESSION_ID is unset):
```bash
if [ -n "${CLAUDE_SESSION_ID:-}" ] && command -v jq >/dev/null 2>&1; then
  mkdir -p "$WORK_DIR"
  touch "$WORK_DIR/.active-sessions.lock"
  (
    flock -x -w 2 200 || exit 0
    [ -s "$WORK_DIR/.active-sessions" ] || echo '{}' > "$WORK_DIR/.active-sessions"
    jq --arg s "$CLAUDE_SESSION_ID" --arg w "{epic-id}" \
      '. + {($s): $w}' "$WORK_DIR/.active-sessions" \
      > "$WORK_DIR/.active-sessions.tmp.$$" \
      && mv "$WORK_DIR/.active-sessions.tmp.$$" "$WORK_DIR/.active-sessions" \
      || rm -f "$WORK_DIR/.active-sessions.tmp.$$"
  ) 200>"$WORK_DIR/.active-sessions.lock"
fi
```
Read references/epic-plan-template.md for the full EPIC_PLAN.md markdown template (overview, business context, tickets, implementation waves, progress tracking).
Read references/state-schema.md for the complete state.json schema (schema_version, tickets, waves, progress).
After saving all epic files, upsert into ${WORK_DIR}/manifest.json (see docs/manifest-system.md).
Read or initialize the manifest, then upsert the item using `identifier` (the full `{epic-id}`) as the unique key:
{
"identifier": "{epic-id}",
"title": "{Epic Title}",
"type": "epic",
"status": "in_progress",
"created_at": "{ISO_TIMESTAMP}",
"updated_at": "{ISO_TIMESTAMP}",
"current_phase": "planning",
"progress": "0/{total_tickets} tickets",
"branch": null,
"tags": [],
"path": "{epic-id}/"
}
Update last_updated and total_items in the envelope.
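The upsert can be sketched with jq. The envelope shape here ({last_updated, total_items, items: []}) is an assumption (treat docs/manifest-system.md as authoritative), and the entry values are illustrative:

```shell
# Hypothetical manifest upsert; envelope fields assumed, see docs/manifest-system.md.
WORK_DIR="${WORK_DIR:-$(mktemp -d)}"
MANIFEST="$WORK_DIR/manifest.json"
mkdir -p "$WORK_DIR"
[ -s "$MANIFEST" ] || printf '%s\n' '{"last_updated":null,"total_items":0,"items":[]}' > "$MANIFEST"

now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
jq --arg id "PROJ-100-user-auth-system" --arg now "$now" \
   --argjson entry '{"identifier":"PROJ-100-user-auth-system","title":"User Auth System","type":"epic","status":"in_progress","current_phase":"planning","progress":"0/6 tickets","branch":null,"tags":[],"path":"PROJ-100-user-auth-system/"}' \
   '.items = ([.items[] | select(.identifier != $id)] + [$entry + {created_at: $now, updated_at: $now}])
    | .last_updated = $now
    | .total_items = (.items | length)' \
   "$MANIFEST" > "$MANIFEST.tmp" && mv "$MANIFEST.tmp" "$MANIFEST"
```

Replacing any existing entry with the same identifier before appending keeps the key unique, and recomputing total_items from the array keeps the envelope consistent.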
Show the user a clear summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Epic Created: {epic-id}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{Epic Title}
{Brief description}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Breakdown: {N} tickets
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Wave 1 (Start first):
✓ {ticket-id}: {Title} [{Type}, {Estimate}]
✓ {ticket-id}: {Title} [{Type}, {Estimate}] (parallel)
Wave 2 (After Wave 1):
→ {ticket-id}: {Title} [{Type}, {Estimate}]
→ {ticket-id}: {Title} [{Type}, {Estimate}]
Wave 3 (After Wave 2):
→ {ticket-id}: {Title} [{Type}, {Estimate}]
...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Agents Used
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✓ business-analyst (initiative analysis)
✓ architect (technical feasibility)
{✓ data-modeler - if used}
{✓ integration-analyst - if used} [PARALLEL]
{✓ aws-architect - if used}
{✓ security-requirements - if used}
✓ context-builder (ticket context)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Files Created
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
$WORK_DIR/{epic-id}/
├── EPIC_PLAN.md # Shared HOW context for all tickets
├── state.json
├── {ticket-id}/ # e.g., PROJ-101-db-schema/
│ └── spec.md # Product spec (WHAT/WHY) — plan.md + tasks.md derived by /implement
├── {ticket-id}/ # e.g., PROJ-102-entity-layer/
│ └── spec.md
└── ...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Next Steps
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Review the epic plan:
cat $WORK_DIR/{epic-id}/EPIC_PLAN.md
2. Start implementing first ticket:
/implement {ticket-id} # e.g., /implement PROJ-101-db-schema
3. Track progress:
/resume-work {epic-id}
4. View ticket details:
cat $WORK_DIR/{epic-id}/{ticket-id}/spec.md
```bash
# Clear auto-context sentinel on completion
if [ -n "${CLAUDE_SESSION_ID:-}" ] \
  && [ -f "$WORK_DIR/.active-sessions" ] \
  && command -v jq >/dev/null 2>&1; then
  (
    flock -x -w 2 200 || exit 0
    jq --arg s "$CLAUDE_SESSION_ID" 'del(.[$s])' "$WORK_DIR/.active-sessions" \
      > "$WORK_DIR/.active-sessions.tmp.$$" \
      && mv "$WORK_DIR/.active-sessions.tmp.$$" "$WORK_DIR/.active-sessions" \
      || rm -f "$WORK_DIR/.active-sessions.tmp.$$"
  ) 200>"$WORK_DIR/.active-sessions.lock"
fi
```
Read references/error-handling.md for error-scenario message templates (no description provided, epic too small, epic already exists).
Naming conventions:
- {epic-id}: {epic-ticket}-{slug} (e.g., "PROJ-100-user-auth-system") — see Work Directory Naming Convention in CLAUDE.md
- {ticket-id}: {ticket-number}-{slug} (e.g., "PROJ-101-db-schema"), or use placeholders like {epic-ticket}-001 until real ticket IDs are assigned
- Resumable via /resume-work

After epic creation:
- /implement {ticket-id} (e.g., /implement PROJ-101-db-schema)
- Progress is tracked in state.json
- /resume-work {epic-id} shows epic status and suggests the next ticket (e.g., /resume-work PROJ-100-user-auth-system)