Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
Dispatches parallel agents to handle multiple independent tasks simultaneously for faster problem resolution.
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
Use when:
- You have 2+ unrelated failures (different test files, different subsystems, different bugs)
- Each failure can be investigated without shared state or results from the others

Don't use when:
- Related failures: fixing one might fix others; investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing the same files, using the same resources)
Group failures by what's broken. For example:
- Authentication: failures in tests/test_auth.py
- Validation: failures in tests/test_validation.py
- API endpoints: failures in tests/test_api.py
Each domain is independent — fixing authentication doesn't affect API endpoint tests.
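A minimal sketch of this grouping step in Python, assuming pytest-style `file::test` failure identifiers (the sample failures and the parsing are illustrative, not output from a real run):

```python
from collections import defaultdict

# Failure identifiers as reported by a pytest run (illustrative sample)
failures = [
    "tests/test_auth.py::test_login_with_expired_token",
    "tests/test_auth.py::test_refresh_token_rotation",
    "tests/test_validation.py::test_email_format",
    "tests/test_api.py::test_get_user_returns_404",
]

# Group by test file: each file becomes one independent problem domain
domains = defaultdict(list)
for failure in failures:
    test_file, _, test_name = failure.partition("::")
    domains[test_file].append(test_name)

for test_file, tests in sorted(domains.items()):
    print(f"{test_file}: {len(tests)} failing test(s) -> one agent")
```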
Each agent gets:
- A focused scope: one test file or subsystem
- The specific failures: error messages and test names
- Explicit constraints on what it may and may not change
- A required output: what it found and what it fixed
```python
# Using Claude Code's Task tool
Task("Fix test_auth.py failures")        # Agent 1
Task("Fix test_validation.py failures")  # Agent 2
Task("Fix test_api.py failures")         # Agent 3
# All three run concurrently
```
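Combining the two steps, a sketch in the same pseudocode register (`Task` stands for the tool call above, driven by the `domains` grouping from the earlier sketch; the prompt template is illustrative):

```python
# One dispatch per problem domain; issue all Task calls in a single
# message so the agents run concurrently rather than back-to-back
for test_file, tests in domains.items():
    prompt = (
        f"Fix the {len(tests)} failing tests in {test_file}:\n"
        + "\n".join(f"- {name}" for name in tests)
        + "\nDo NOT change tests in other files."
        + "\nReturn: summary of root cause and changes."
    )
    Task(prompt)
```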
Alternative: For review tasks, consider using Agent Teams (TeamCreate), which provide built-in cross-validation and shared task lists.
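In the same pseudocode register, a sketch of that alternative; the TeamCreate parameters shown here are assumptions for illustration, not a documented signature:

```python
# Hypothetical sketch: a team whose members review each other's fixes
# through a shared task list (parameter names are illustrative assumptions)
TeamCreate(
    name="test-fix-review",
    tasks=[
        "Review auth fixes in tests/test_auth.py",
        "Review validation fixes in tests/test_validation.py",
        "Review API fixes in tests/test_api.py",
    ],
)
```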
Good agent prompts are specific, give context, set constraints, and define the expected output. For example:
```
Fix the 3 failing tests in tests/test_auth.py:

1. "test_login_with_expired_token" - expects 401, gets 200
2. "test_refresh_token_rotation" - token not rotated after use
3. "test_concurrent_sessions" - expects 3 sessions, gets 0

These may be timing or state issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by addressing root cause (not just increasing timeouts)

Do NOT change tests in other files.
Return: Summary of what you found and what you fixed.
```
- Too broad: "Fix all the tests" (agent gets lost). Specific: "Fix tests/test_auth.py" (focused scope).
- No context: "Fix the race condition" (agent doesn't know where). Context: paste the error messages and test names.
- No constraints: the agent might refactor everything. Constraints: "Do NOT change production code" or "Fix tests only".
- Vague output: "Fix it" (you don't know what changed). Specific: "Return summary of root cause and changes".
After agents return:
- Read each summary and confirm the fix addresses a root cause, not a symptom
- Run the full test suite once to check that the fixes don't conflict (see the sketch below)
- Re-dispatch a focused agent for anything that still fails
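A small verification sketch, assuming a pytest project (this is the standard pytest invocation, nothing project-specific):

```python
import subprocess

# After all agents report back, run the full suite once:
# this catches fixes that conflict with or regress each other
result = subprocess.run(["python", "-m", "pytest", "--tb=short"])
if result.returncode == 0:
    print("All tests pass; agent fixes are compatible")
else:
    print("Failures remain; review agent summaries and re-dispatch")
```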