Create agent-team orchestrated prompt bundles (orchestrator + sub-prompts) and store them through prompt-manager.
Creates orchestrated prompt sets with a main coordinator and specialized sub-prompts for complex multi-step tasks.
/plugin marketplace add cruzanstx/daplug
/plugin install daplug@cruzanstx

This skill is limited to using the following tools:
Create an orchestrated prompt set for complex work:
Use this skill when the user asks for /create-at-prompt.
(For a single prompt, use /create-prompt.) Create prompts in ./prompts/ (never completed/).

Resolve the plugin paths:

PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
PROMPT_MANAGER="$PLUGIN_ROOT/skills/prompt-manager/scripts/manager.py"
CONFIG_READER="$PLUGIN_ROOT/skills/config-reader/scripts/config.py"
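The jq query above can be sanity-checked against a mock manifest. A minimal sketch — the JSON shape below is an assumption mirrored from the query; verify it against your actual ~/.claude/plugins/installed_plugins.json:

```shell
# Write a mock manifest mirroring the assumed shape of installed_plugins.json.
cat > /tmp/mock_installed_plugins.json <<'EOF'
{
  "plugins": {
    "daplug@cruzanstx": [
      { "installPath": "/home/user/.claude/plugins/daplug" }
    ]
  }
}
EOF

# Same query as above, pointed at the mock file.
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' /tmp/mock_installed_plugins.json)
echo "$PLUGIN_ROOT"   # /home/user/.claude/plugins/daplug
```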
Use prompt-manager for all prompt writes/reads. Do not create files manually.
Break the task into sub-prompts only when at least one of these is true:
Keep sub-prompts single-purpose and executable in isolation.
Create sub-prompts first so orchestrator delegation can reference exact prompt IDs.
For each sub-task:
python3 "$PROMPT_MANAGER" create "<subtask-name>" --folder "$FOLDER" --content "$SUB_PROMPT_CONTENT" --json
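A hedged sketch of one such call. The task name, folder, and body are placeholders; the ID extraction assumes manager.py's --json output includes an "id" field — verify against your installed version:

```shell
# Hypothetical sub-task: name, folder, and body are placeholders.
FOLDER="my-feature"
SUB_PROMPT_CONTENT=$(cat <<'EOF'
<objective>Implement the backend API for the feature.</objective>
<scope>Only files under src/api/; do not touch the frontend.</scope>
<output>src/api/feature.py plus matching unit tests.</output>
<verification>All new tests pass; no existing tests break.</verification>
EOF
)

# Create the sub-prompt and capture its ID for the orchestrator's delegations.
# Assumes the --json response contains an "id" field.
RESULT=$(python3 "$PROMPT_MANAGER" create "my-feature-backend" \
  --folder "$FOLDER" --content "$SUB_PROMPT_CONTENT" --json)
SUB_ID=$(printf '%s' "$RESULT" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
```

Capturing the ID immediately avoids re-listing prompts when wiring up the orchestrator's delegate entries.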
Each sub-prompt must include:
- <objective>
- <scope>
- <output> files
- <verification> criteria

After sub-prompts exist, create one orchestrator prompt that references them.
Use this template structure:
<objective>
Coordinate multi-agent execution for the parent task using existing sub-prompts.
</objective>
<orchestration>
<phase name="plan">
<!-- Claude native planning and risk check -->
- Confirm assumptions, dependencies, and execution strategy.
- Draft exact Task() orchestration with explicit escalation paths.
</phase>
<phase name="execute" strategy="parallel|sequential">
<!-- Delegation to sub-prompts via /run-prompt -->
<delegate prompt="228a" model="opencode" flags="--worktree" />
<delegate prompt="228b" model="codex" flags="--worktree --loop" />
</phase>
<phase name="validate">
<!-- Claude native integration + merge criteria -->
- Verify outputs are consistent and conflict-free.
- Resolve overlaps before final handoff.
</phase>
</orchestration>
<merge_criteria>
- All sub-prompts completed or explicitly triaged.
- No unresolved file conflicts.
- Validation checks pass.
</merge_criteria>
<output>
- Final integrated summary
- Recommended /run-at-prompt group syntax
</output>
The orchestrator body should include explicit Task() delegations so it is executable:
Task(
subagent_type: "at-monitor",
model: "haiku",
run_in_background: true,
prompt: "Launch /run-prompt 228a --model opencode --worktree and return Execution Report format."
)
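For a parallel execute phase, each <delegate> entry gets its own backgrounded Task(). A sketch of the companion delegation for the second sub-prompt — the IDs and flags mirror the template above and are placeholders:

```
Task(
  subagent_type: "at-monitor",
  model: "haiku",
  run_in_background: true,
  prompt: "Launch /run-prompt 228b --model codex --worktree --loop and return Execution Report format."
)
```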
Create orchestrator prompt:
python3 "$PROMPT_MANAGER" create "<task-name>-orchestrator" --folder "$FOLDER" --content "$ORCHESTRATOR_CONTENT" --json
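The $ORCHESTRATOR_CONTENT above can be assembled with a quoted heredoc so the template's quotes and angle brackets survive unexpanded. A minimal sketch — the sub-prompt IDs are placeholders:

```shell
# Quoted heredoc ('EOF') prevents shell expansion inside the template body.
ORCHESTRATOR_CONTENT=$(cat <<'EOF'
<objective>
Coordinate multi-agent execution for the parent task using existing sub-prompts.
</objective>
<orchestration>
  <phase name="execute" strategy="parallel">
    <delegate prompt="228a" model="opencode" flags="--worktree" />
    <delegate prompt="228b" model="codex" flags="--worktree --loop" />
  </phase>
</orchestration>
EOF
)
```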
After prompt creation, present:
/run-prompt <orchestrator-id> --model claude
/run-at-prompt "220,221 -> 222" --model codex --worktree
/run-at-prompt "220 221 222" --auto-deps --dry-run
Recommend --worktree for any parallel execution. Recommend --loop for high-risk prompts.
Use /run-prompt only; no ambiguous free-form delegation.

Naming convention:
- <task>-orchestrator
- <task>-backend
- <task>-frontend
- <task>-tests
- <task>-docs

Activates when the user asks about AI prompts, needs prompt templates, wants to search for prompts, or mentions prompts.chat. Use for discovering, retrieving, and improving prompts.