From develop
Analysis-only planning — classify and scope a task without writing code; outputs a structured plan to .plans/active/.
npx claudepluginhub borda/ai-rig --plugin develop

This skill is limited to using the following tools:
<objective>
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Analysis-only mode that produces a structured plan without writing any code. Use this to understand scope, risks, and effort before committing to a full /develop:feature, /develop:fix, or /develop:refactor.
NOT for: writing code or tests (use the appropriate develop mode); .claude/ config changes (use /manage).
Foundry plugin check: run `ls ~/.claude/plugins/cache/ 2>/dev/null | grep -q foundry` (exit 0 = installed). If the check fails or you are uncertain, proceed as if foundry is available — it is the common case; only fall back if an agent dispatch explicitly fails.
When foundry is not installed, substitute foundry:X references with general-purpose and prepend the role description plus model: <model> to the spawn call:
| foundry agent | Fallback | Model | Role description prefix |
|---|---|---|---|
| foundry:sw-engineer | general-purpose | opus | You are a senior Python software engineer. Write production-quality, type-safe code following SOLID principles. |
| foundry:qa-specialist | general-purpose | opus | You are a QA specialist. Write deterministic, parametrized pytest tests covering edge cases and regressions. |
| foundry:linting-expert | general-purpose | haiku | You are a static analysis specialist. Fix ruff/mypy violations, add missing type annotations, configure pre-commit hooks. |
Skills with --team mode: team spawning with fallback agents still works but produces lower-quality output.
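The substitution above can be mechanized; a minimal sketch in shell, where the function name and the `agent=.../model=...` output format are illustrative (the table is the source of truth, not this code):

```shell
# Sketch: map a foundry agent to its fallback spawn parameters.
# Function name and output layout are illustrative, not part of the plugin.
fallback_spawn() {
  agent="$1"; task="$2"
  case "$agent" in
    foundry:sw-engineer)
      model=opus
      prefix='You are a senior Python software engineer. Write production-quality, type-safe code following SOLID principles.' ;;
    foundry:qa-specialist)
      model=opus
      prefix='You are a QA specialist. Write deterministic, parametrized pytest tests covering edge cases and regressions.' ;;
    foundry:linting-expert)
      model=haiku
      prefix='You are a static analysis specialist. Fix ruff/mypy violations, add missing type annotations, configure pre-commit hooks.' ;;
    *)
      model=opus; prefix='' ;;
  esac
  # general-purpose replaces the foundry:X agent type; the role prefix is
  # prepended to the task text, per the fallback rule above.
  printf 'agent=general-purpose model=%s\n%s\n%s\n' "$model" "$prefix" "$task"
}
```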
Task hygiene: before creating tasks, call TaskList. For each found task, mark it:
- completed if the work is clearly done
- deleted if orphaned / no longer relevant
- in_progress only if genuinely continuing

Task tracking: immediately after Step 1 (scope is known), create TaskCreate entries for all steps of this workflow before doing any other work. Mark each step in_progress when starting it, completed when done.
Determine the task type and affected surface.
Structural context (codemap, if installed) — soft PATH check, silently skip if scan-query not found:
PROJ=$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null) || PROJ=$(basename "$PWD")
if command -v scan-query >/dev/null 2>&1 && [ -f ".cache/scan/${PROJ}.json" ]; then
scan-query central --top 5
fi
If results are returned: prepend a ## Structural Context (codemap) block to the foundry:sw-engineer spawn prompt with the hotspot JSON. This gives the agent an immediate complexity picture for sizing effort and identifying risky modules before exploring the codebase. If scan-query is not found or index is missing: proceed silently — do not mention codemap to the user.
Spawn a foundry:sw-engineer agent with the full goal text from $ARGUMENTS. The agent should determine the affected surface and classify the task as feature, fix, or refactor. The agent returns its findings inline (no file handoff needed — output is short).
Derive a filename slug from the goal: take the first 4-5 meaningful words, lowercase, hyphen-separated (e.g. "improve caching in data loader" -> plan_improve-caching-data-loader.md). Write the plan to .plans/active/<slug> (create or overwrite). Store the full path as PLAN_FILE — used in Steps 3 and Final output.
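The slug derivation can be sketched in POSIX shell; the stopword list below is an assumption (the text only requires "meaningful words"), and the function name is illustrative:

```shell
# Sketch: derive the plan filename slug from the goal text.
# The stopword list is an assumption; the skill only says "meaningful words".
slug_from_goal() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '\n' \
    | grep -Ev '^(a|an|the|in|on|of|to|for|and|with)$' \
    | head -5 \
    | paste -sd- -
}

slug_from_goal 'improve caching in data loader'
# -> improve-caching-data-loader  (plan file: plan_improve-caching-data-loader.md)
```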
# Plan: <goal>
## Brief
*[Generated after agent review — see below]*
---
## Full Plan
**Classification**: feature | fix | refactor
**Complexity**: small | medium | large
**Date**: <YYYY-MM-DD>
### Goal
<One-paragraph restatement of the goal in concrete terms — what changes, what doesn't.>
### Affected files
- `path/to/file.py` — reason
- `path/to/other.py` — reason
### Risks
- <risk 1>
- <risk 2>
### Suggested approach
1. <Step 1>
2. <Step 2>
3. <Step 3>
...
### Follow-up command
/develop <classification> <original goal text>
Spawn the execution agents for this classification in parallel. Each reads <PLAN_FILE> and returns only a compact JSON verdict — no prose, no analysis:
Each agent receives only the plan file path and their role — no conversation history, no unrelated context. Prompt (substitute <ROLE> and <PLAN_FILE>):
"Read <PLAN_FILE>. Review the plan from your perspective as <ROLE>. Flag any domain-specific concerns, risks, or blockers you see. Can you execute your part autonomously without further user input? Return only: {\"a\":\"<ROLE>\",\"ok\":true|false,\"blockers\":[\"...\"],\"q\":[\"...\"],\"concerns\":[\"...\"]}"
Agents return inline (verdicts are ~150 bytes — no file handoff needed). Collect all results:
- ok: true with empty blockers, q, and concerns -> note ✓ agents ready in the final output and proceed
- ok: false, or non-empty blockers or q -> enter the internal resolution loop below before surfacing anything to the user
- concerns with ok: true -> surface as advisory notes in the final output (not blockers, but domain-specific flags the user should know before starting)

For each blocker or open question:
- Dispatch a fresh agent to investigate: it reads <PLAN_FILE>, resolves the item where possible, and marks the item resolved, returning {"a":"<ROLE>","resolved":"<item>","answer":"<resolution>"}.
- If the agent returns ok: true -> resolved; remove from the blockers list.
- When every agent returns ok: true -> ✓ agents ready.

Escalate to the user only what cannot be resolved autonomously — a blocker requires user input when: it depends on a business decision, an undocumented external constraint, a missing credential/secret, or a genuine ambiguity in the goal that has two equally valid interpretations.
For each escalated item, present the issue in one sentence, the alternatives, and a recommendation (the same format used in the final output).
Do not escalate: items resolvable by reading the codebase, items that are risks (not blockers), or items already addressed in the plan.
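The ready/not-ready gate over the collected verdicts can be sketched as follows; the file layout and helper name are illustrative, and the pattern match assumes the compact key:value serialization shown in the prompt above:

```shell
# Sketch: gate on the collected verdicts (one JSON object per line).
# File name and helper are illustrative, not part of the skill.
agents_ready() {
  # Not ready if any verdict has ok:false or a non-empty blockers/q array;
  # concerns alone do not block (they become advisory notes).
  ! grep -Eq '"ok":false|"blockers":\["|"q":\["' "$1"
}

printf '%s\n' '{"a":"qa-specialist","ok":true,"blockers":[],"q":[],"concerns":[]}' > /tmp/verdicts.jsonl
agents_ready /tmp/verdicts.jsonl && echo '✓ agents ready'
```

A real implementation would parse the JSON properly; the substring match here only works because the verdict format is fixed and compact.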
Compose the brief — this is the compact human-readable summary of the plan after all agent input has been incorporated:
<One-sentence summary of what the plan achieves and the main approach.>
Classification : <feature|fix|refactor>
Complexity : <small|medium|large>
Affected files : N files across M modules
Key risks : <one-liner or "none">
Agent review : ✓ agents ready (<N> corrections incorporated) | ⚠ see below
<Steps table — use the format that best fits the complexity:>
- Simple: | # | Step |
- Staged/large: | # | Stage | What changes | Stop condition |
- Fix: | # | Action | Target | Verification |
Advisory notes from agents (omit table if none):
| Agent | Note |
|-------|------|
| <role> | <concern> |
Co-review corrections applied (<N> agents, omit table if none):
| Agent | Location | Change |
|-------|----------|--------|
| <agent> | <file or step> | <what changed> |
Write the brief into <PLAN_FILE>: replace the *[Generated after agent review — see below]* placeholder in ## Brief with the composed brief content. The file now contains both the brief and the full plan.
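Splicing the brief over the placeholder can be sketched with awk; the helper name is illustrative, and the matched placeholder text comes from the plan template:

```shell
# Sketch: replace the Brief placeholder line in the plan file with the composed brief.
# Helper name is illustrative; the placeholder text comes from the plan template.
write_brief() {
  plan="$1"; brief="$2"
  awk -v b="$brief" '
    /\*\[Generated after agent review/ { print b; next }  # swap placeholder for brief
    { print }
  ' "$plan" > "${plan}.tmp" && mv "${plan}.tmp" "$plan"
}
```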
Print to terminal:
Plan -> <PLAN_FILE>
<brief content exactly as written to the file>
-> /develop <classification> <goal> when ready
If unresolved items were escalated, print each after the brief:
⚠ Issue: <one sentence>
Alternatives: (a) ... (b) ... (c) ...
Recommendation: <option> — <reason>
Wait for user input before printing -> /develop ....
No quality stack, no Codex pre-pass, no review loop. Exit after printing the summary.