From rkstack
Dispatch 2+ independent tasks to parallel subagents. Use when facing tasks that can be worked on without shared state or sequential dependencies. Coordinates results and handles conflicts.
```shell
npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack
```

This skill is limited to using the following tools:
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```shell
# === RKstack Preamble (dispatching-parallel-agents) ===
# Read detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
Use the detection cache and preamble output to adapt your behavior:

- `detection.flowType` (`web` or `default`). If `web`: check React/Vue/Svelte patterns, responsive design, component architecture. If `default`: CLI tools, MCP servers, backend scripts.
- Prefer `just` commands over raw shell when the project provides them.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:

1. State the current branch (use the `_BRANCH` value from the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
2. RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work.
3. Options A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
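The exact schema of the detection cache is project-generated; a hypothetical `.rkstack/settings.json` illustrating the fields referenced above (all values here are made up) might look like:

```json
{
  "detection": {
    "flowType": "web",
    "stack": ["typescript", "react", "supabase"],
    "stats": { "files": 312, "linesOfCode": 48000, "complexity": "medium" },
    "repoMode": "solo",
    "services": ["supabase"]
  }
}
```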
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
When completing a skill workflow, report an explicit STATUS line (the escalation format below shows the BLOCKED and NEEDS_CONTEXT cases).
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:

```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
Delegate independent tasks to focused subagents running concurrently. Each agent gets isolated context and a precise scope. You coordinate, they execute.
Core principle: One agent per independent problem domain. Complete context in, verified results out.
```
Multiple tasks? ──no──► Do them yourself sequentially
      │yes
      ▼
Are they independent? ──no──► Single agent or sequential execution
      │yes
      ▼
Would they touch the same files? ──yes──► Sequential execution
      │no
      ▼
Parallel dispatch
```
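The same-files check in the flowchart can be sketched in plain shell (the file lists here are hypothetical; in practice they come from your task decomposition):

```shell
# Files each planned task would touch. A path appearing in more than one
# list means the tasks overlap and must run sequentially.
task_a="src/auth/login.ts src/auth/session.ts"
task_b="src/payments/charge.ts src/auth/session.ts"

# sort | uniq -d prints only duplicated lines, i.e. the shared files.
overlap=$(printf '%s\n' $task_a $task_b | sort | uniq -d)
if [ -n "$overlap" ]; then
  echo "overlap -> run sequentially: $overlap"
else
  echo "independent -> parallel dispatch"
fi
```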
Use when:

- You have 2+ tasks that can proceed without shared state or sequential dependencies
- Each task touches a distinct set of files

Do NOT use when:

- Tasks depend on each other's output
- Tasks would modify the same files
Group work by problem domain. Each domain must be fully independent.
Each agent prompt must be self-contained:
```
[Clear task title]

[What to do — specific scope, specific files]

Context:
- [Error messages, test names, file paths]
- [Relevant background the agent needs]

Constraints:
- Only modify files in [scope]
- Do NOT change [out-of-scope areas]

Return: Summary of root cause and what you changed.
```
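A hypothetical filled-in instance of the template (file names and error text are illustrative, not from a real project):

```
Fix auth module test failures

Fix the two failing tests in src/auth/login.test.ts. Do not touch other suites.

Context:
- npm test fails with "TypeError: cannot read 'token' of undefined" in login.test.ts
- A recent session refactor changed the shape of the auth response

Constraints:
- Only modify files in src/auth/
- Do NOT change the public API in src/auth/index.ts

Return: Summary of root cause and what you changed.
```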
Good prompts are focused (one problem domain), self-contained (all needed context included), and specific about output (what the agent should return).
Send all Agent tool calls in the same message so they run concurrently:
```
Agent("Fix auth module test failures in src/auth/...")
Agent("Fix payment processing tests in src/payments/...")
Agent("Update API documentation for new endpoints")
```
All agents run in parallel. You wait for all to return.
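The dispatch-and-wait pattern mirrors plain shell job control; as a rough analogue (the tasks here are stand-ins for Agent calls):

```shell
run_task() {
  # Each "agent" works only inside its own scope directory.
  scope="$1"
  mkdir -p "out/$scope"
  echo "done: $scope" > "out/$scope/result.txt"
}

# Dispatch all independent tasks at once...
run_task auth &
run_task payments &
run_task docs &

# ...then block until every one returns before coordinating results.
wait
cat out/*/result.txt
```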
When agents return:

- Read `git diff` to see actual changes, not just agent claims

If agents touched overlapping files (despite your precautions):
Conflicts mean your task decomposition was wrong. Next time, tighten the boundaries.
| Mistake | Fix |
|---|---|
| Too broad: "Fix all the tests" | Specific: "Fix auth tests in src/auth/login.test.ts" |
| No context: "Fix the race condition" | Context: paste error messages and test names |
| No constraints: agent refactors everything | Constraints: "Only modify files in src/auth/" |
| Vague output: "Fix it" | Specific: "Return summary of root cause and changes" |
| Trusting self-reports | Verify: run tests, read diffs yourself |
After merging all agent results, verify before claiming completion:
Use the verification-before-completion skill if available. No completion claims without fresh evidence.
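One way to sketch the scope part of verification in shell (the changed-file arguments below are illustrative; in a real repo they would come from `git diff --name-only`):

```shell
# Flag any changed file that falls outside an agent's declared scope.
check_scope() {
  scope="$1"; shift
  for f in "$@"; do
    case "$f" in
      "$scope"/*) ;;                        # inside scope: fine
      *) echo "OUT OF SCOPE: $f"; return 1 ;;
    esac
  done
  echo "all changes within $scope"
}

check_scope src/auth src/auth/login.ts src/auth/session.ts
```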
After all results are collected and verified, report the outcome along with the verification evidence.