This skill should be used when the user asks to "implement this plan", "generate execution prompts", "super implement", "create handoff from plan", or wants to transform a large plan into self-contained execution-ready prompt artifacts.
Transform a plan into execution-ready prompt artifacts — either a single self-contained handoff prompt, or a chain of prompt files plus an intelligent orchestration prompt that executes them in a controlled sequence. Capture all relevant context and information from the plan and from the conversation that produced it (if available).
Before generating prompts, analyze which work benefits from shared context, then choose a format: a single prompt, parallel prompts (independent `/super-agent` instances, no sequencing needed), or a chain.

| Signal | Format |
|---|---|
| Single focused task, <50 lines actionable content | Single prompt |
| Touches <=3 files, one clear goal | Single prompt |
| Multiple distinct file groups or components | Chain |
| Discovery + implementation + verification pattern | Chain |
| Multiple sequential dependencies | Chain |
| Would exceed ~20 files in scope | Chain |
For chains, these plan signals suggest phases:

| Signal in Plan | Suggests Phase |
|---|---|
| "Read/understand/map" language | Discovery |
| Multiple distinct file groups | Separate implementation phases |
| "Wire up / connect / integrate" | Integration |
| Test commands, verification steps | Verification |
Aim for 2-4 phases; don't force a fixed structure.
Scan for phrases that introduce optionality. Resolve using context before asking the user:
| Pattern in Source | Resolution |
|---|---|
| "Optionally (but recommended)" | Include — recommended means do it |
| "optional... off by default" or "behind a flag" | Skip — low priority, not core |
| "If X, then Y" | Check if X is true in context |
| "Consider adding... for [benefit]" | Include if benefit aligns with stated goals |
| Detailed spec follows the "optional" mention | Include — effort was spent specifying it |
| Mentioned in Definition of Done | Include — it's a success criterion |
Only ask the user if genuinely unresolvable after checking the source. Document all scope decisions.
Before writing, extract for verbatim preservation: code snippets and skeletons, interface definitions, exact build/test commands, and any precise specifications.
These MUST appear verbatim in the relevant prompt, never summarized.
Compare detail levels across sections. If one area (e.g., backend) has significantly more detail than another (e.g., frontend), flag it:
"Note: [Area A] has detailed specs; [Area B] section is sparse. The generated prompts will reflect this imbalance."
This catches lopsided plans before they become lopsided prompts.
Every prompt — standalone or phase file — follows this structure. Phase files may be leaner (reference prior phases, don't repeat).
<!-- EXECUTION DIRECTIVE: This is a pre-validated implementation prompt. Execute immediately without entering plan mode or invoking /reflect. -->
## 1. Task
One paragraph: what the agent must do and why.
## 2. Context & Constraints
- Specifications, goals, success criteria
- Constraints, key assumptions, decisions already made
### Prerequisites (chain only)
- [ ] Phase [X] completed (if applicable)
### Shared Interfaces (multi-prompt only, CRITICAL)
[Exact interface definitions — copy, don't summarize]
## 3. Inputs & Resources
### Files to Create/Modify
- Absolute paths
### Files to Reference (Read-Only)
- Absolute paths
### Key Code Patterns
- Inline code snippets (VERBATIM from plan)
### Build & Test Commands
- Exact commands
## 4. Execution Guidelines
- Numbered implementation steps
- Style and code standards
### Edge Cases
- Boundary conditions and handling
## 5. Assumptions
| Assumption | Reasoning |
| ---------- | --------- |
## 6. Verification Plan
Launch verification sub-agents in parallel:
1. **Test Sub-Agent**: Run `[test command]`. All tests must pass.
2. **Behavioral Sub-Agent**: Prove it works end-to-end.
3. **Code Quality Sub-Agent**: Review for complexity, duplication, security.
If any fails, fix and re-run. Retry up to 3 times.
## 7. Definition of Done
- [ ] Implementation matches spec
- [ ] All verification steps passed
- [ ] Assumptions documented (if any)
- [ ] No debug code or comments left behind
Use sub-agents liberally for parallel work within each prompt.
Save to: .ai-reference/prompts/YYYYMMDD-HHMMSS-<task-description>.md
No title header — start with the execution directive.
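A sketch of deriving the save path in shell; the task slug `add-rate-limiting` is a hypothetical example, not part of the skill:

```shell
# Compose the timestamped prompt path; the slug is a hypothetical example.
TASK="add-rate-limiting"
OUT=".ai-reference/prompts/$(date +%Y%m%d-%H%M%S)-${TASK}.md"
echo "$OUT"
```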
Directory: .ai-reference/prompts/<task-name>/
00-orchestrate.md # Execution plan — which prompts run in parallel, which sequentially
01-<task-name>.md # First independent task
02-<task-name>.md # Second independent task
...
# [Task Name] — Parallel Execution
These tasks are independent and can run simultaneously via parallel `/super-agent` calls.
## Tasks
| Task | File | Goal | Done When |
| ---- | ----------------- | --------------------- | ---------------- |
| 1 | `01-task-a.md` | [Goal] | [Criteria] |
| 2 | `02-task-b.md` | [Goal] | [Criteria] |
## Shared Interfaces (if any)
[Exact interface definitions shared across tasks — copy-pasted identically in each prompt]
## Execution
Launch all tasks in parallel, piping each prompt file directly:
\`\`\`bash
PROMPT_FILE=.ai-reference/prompts/<task-name>/01-task-a.md \
.claude/skills/super-agent/scripts/super-agent &
PROMPT_FILE=.ai-reference/prompts/<task-name>/02-task-b.md \
.claude/skills/super-agent/scripts/super-agent &
wait
\`\`\`
After all complete, verify integration points between tasks.
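Note that a bare `wait` returns zero even when a background task failed. A minimal sketch of a launcher that surfaces per-task failures; the `run_parallel` name and the `SUPER_AGENT` override (defaulting to the runner path above) are assumptions for illustration:

```shell
# run_parallel launches each prompt file in the background via the
# super-agent runner, then waits on each PID individually so a
# failing task is detected instead of being swallowed by bare `wait`.
run_parallel() {
  local runner="${SUPER_AGENT:-.claude/skills/super-agent/scripts/super-agent}"
  local pids=() f pid status=0
  for f in "$@"; do
    PROMPT_FILE="$f" "$runner" &
    pids+=($!)
  done
  for pid in "${pids[@]}"; do
    wait "$pid" || status=1   # record any failure without aborting the rest
  done
  return "$status"
}
```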
When parallel prompts produce/consume shared interfaces, copy the exact definitions identically into every prompt that touches them — never paraphrase.
Directory: .ai-reference/prompts/<task-name>-chain/
00-orchestrate.md # Entry point — hand this to the runner
01-<phase-name>.md # e.g., 01-discovery.md
02-<phase-name>.md # e.g., 02-implement-core.md
03-<phase-name>.md # e.g., 03-verify.md
The orchestration prompt is an executable prompt — it IS the project manager. When handed to a super-agent, it autonomously runs each phase, verifies completeness, synthesizes inter-phase context, and adapts. Template:
<!-- EXECUTION DIRECTIVE: This is a pre-validated orchestration prompt. Execute immediately without entering plan mode or invoking /reflect. You are the project manager for a phased implementation. -->
# [Task Name] — Orchestrator
You are the orchestrator for a [N]-phase implementation. Your job is to execute each phase sequentially via super-agent, verify completeness between phases, synthesize findings, and adapt the next phase's prompt if needed.
## What You're Building
[One paragraph: what and why]
## Phase Sequence
| Phase | File | Goal | Done When |
| ----- | ----------------- | --------------------- | ---------------- |
| 1 | `01-discovery.md` | Map code, create plan | Plan output |
| 2 | `02-implement.md` | Build it | Typecheck passes |
| 3 | `03-verify.md` | Prove correctness | All tests pass |
## Global Constraints
[Constraints preserved from source plan]
## Execution Protocol
For each phase (1 through N), execute this loop:
### Step 1: Run the phase
\`\`\`bash
PROMPT_FILE=.ai-reference/prompts/<task-name>-chain/{phase-file} \
.claude/skills/super-agent/scripts/super-agent
\`\`\`
If NOT the first phase, prepend inter-phase context via stdin:
\`\`\`bash
{ echo "{context from Step 2 of previous phase}"; echo; echo "---"; echo; \
cat .ai-reference/prompts/<task-name>-chain/{phase-file}; } \
| .claude/skills/super-agent/scripts/super-agent
\`\`\`
### Step 2: Verify and synthesize
After the phase agent completes, launch a **verification sub-agent** (Task tool, Explore type) to inspect code on disk:
1. **Check "Done When" criteria** from the phase table
2. **Identify deviations** from the phase prompt
3. **Synthesize context** for the next phase:
- What changed (files + semantic description)
- Deviations from plan (if any)
- Test results
- Anything the next phase needs to know
### Step 3: Gate decision
- **All checks pass** → Proceed with synthesized context prepended
- **Minor issues** → Fix directly, re-verify, proceed
- **Major issues** → Re-run phase with corrective preamble. Max 2 retries before escalating to user.
### Step 4: Repeat for next phase.
## Phase Files
\`\`\`
[file listing with one-line descriptions]
\`\`\`
## Final Report
After all phases complete, summarize: total files modified, key metrics, deviations, remaining debt.
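The Execution Protocol's gate loop can be sketched as shell pseudologic; `run_phase` and `verify_phase` are hypothetical stand-ins for the super-agent invocation and the verification sub-agent:

```shell
# Per-phase gate: run, verify, retry up to 2 times, then escalate.
# run_phase and verify_phase are hypothetical stand-ins.
run_with_gate() {
  local phase_file=$1 attempt
  for attempt in 1 2 3; do          # initial run plus max 2 retries
    run_phase "$phase_file"
    if verify_phase "$phase_file"; then
      return 0                      # gate passed; proceed to next phase
    fi
  done
  echo "Escalating to user: $phase_file failed after 2 retries" >&2
  return 1
}
```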
After generating all phase files, analyze each phase for sub-agent parallelization. Spawn parallel sub-agents — one per phase — each reading the actual source files referenced in its phase prompt to understand real file-level dependencies. Each sub-agent is tasked with classifying its phase's work into blocking, parallel, and sequential groups based on those dependencies.
Integrate results into each phase prompt's Execution Guidelines (§4) as explicit sub-agent spawn instructions:
## Sub-Agent Strategy
### Blocking (run first)
- [Task that must complete before fan-out]
### Parallel (after blocker completes)
Spawn these sub-agents in parallel:
1. **[Agent name]**: "[Specific task scope and files]"
2. **[Agent name]**: "[Specific task scope and files]"
3. **[Agent name]**: "[Specific task scope and files]"
### Sequential (after parallel completes)
- [Integration or wiring work that depends on parallel results]
Phases with no parallelization opportunities (e.g., a small verification phase) get no sub-agent strategy — don't force it.
Distribute source-plan content into the chain as follows:

| Source Content | Target |
|---|---|
| Task description, context, "why" | Orchestrate overview + Phase 1 |
| Files to read/reference | Discovery or relevant implementation phase |
| Files to create/modify | Implementation phase(s) with explicit boundaries |
| Code snippets/skeletons | Relevant implementation phase — VERBATIM |
| Edge cases / exceptions | Relevant implementation phase |
| Sub-agent strategy | Spawn instructions in relevant phases |
| Test commands | Final phase or per-phase |
| Definition of done | Orchestrate "Done When" + final phase |
Split implementation phases by whichever minimizes cross-phase dependencies: by component, by operation, or by risk.
When phases produce/consume shared interfaces, copy the exact definitions verbatim into every phase prompt that touches them.
After generating, output:
Created [single prompt / parallel prompts / prompt chain] in `.ai-reference/prompts/<path>`:
[file listing]
## To Execute
[single prompt]: PROMPT_FILE=<path> .claude/skills/super-agent/scripts/super-agent
[parallel]: See 00-orchestrate.md — launch N instances simultaneously via PROMPT_FILE.
[chain]: PROMPT_FILE=.ai-reference/prompts/<task-name>-chain/00-orchestrate.md .claude/skills/super-agent/scripts/super-agent
(The orchestrator agent autonomously runs phases, verifies, and synthesizes context.)
Scope decisions: [any ambiguity resolutions]
Delegation strategies: [phases with sub-agent parallelization, if any]
Write directly to the new agent (use "you"). Do not mention the planning conversation. Include all domain knowledge, technical background, and design rationale.