Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
Use when:
Don't use when:
Group failures by what's broken:
Each domain is independent — fixing authentication doesn't affect API endpoint tests.
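Grouping by domain can be sketched in a few lines of Python. The failure strings below are hypothetical pytest-style test IDs; the point is that the failing file is a reasonable proxy for the problem domain:

```python
# Sketch: group failing tests by file so each group can go to one agent.
# The failure IDs are illustrative examples, not real output.
from collections import defaultdict

failures = [
    "tests/test_auth.py::test_login_with_expired_token",
    "tests/test_auth.py::test_refresh_token_rotation",
    "tests/test_validation.py::test_schema_rejects_nulls",
    "tests/test_api.py::test_list_endpoint_pagination",
]

groups = defaultdict(list)
for failure in failures:
    test_file = failure.split("::")[0]   # domain = the failing test file
    groups[test_file].append(failure)

for test_file, tests in sorted(groups.items()):
    print(f"{test_file}: {len(tests)} failure(s)")
```

Each key in `groups` becomes one agent's scope.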
Each agent gets:
# Using Claude Code's Task tool
Task("Fix test_auth.py failures") # Agent 1
Task("Fix test_validation.py failures") # Agent 2
Task("Fix test_api.py failures") # Agent 3
# All three run concurrently
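The same fan-out shape can be sketched in plain Python, independent of any agent framework. Here `investigate` is a hypothetical stand-in for dispatching one agent; the structure to notice is that each task runs concurrently and results are collected at the end:

```python
# Sketch: fan out independent investigations, collect results in order.
from concurrent.futures import ThreadPoolExecutor

def investigate(task):
    # Stand-in for one agent's work on one independent problem domain.
    return f"done: {task}"

tasks = [
    "Fix test_auth.py failures",
    "Fix test_validation.py failures",
    "Fix test_api.py failures",
]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(investigate, tasks))  # all run concurrently
```

`pool.map` preserves input order, so each result can be matched back to the task that produced it.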
Alternative: For review tasks, consider using Agent Teams (TeamCreate) which provide built-in cross-validation and shared task lists.
When agents return:
Good agent prompts are:
Fix the 3 failing tests in tests/test_auth.py:
1. "test_login_with_expired_token" - expects 401, gets 200
2. "test_refresh_token_rotation" - token not rotated after use
3. "test_concurrent_sessions" - expects 3 sessions, gets 0
These may be timing or state issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by addressing root cause (not just increasing timeouts)
Do NOT change tests in other files.
Return: Summary of what you found and what you fixed.
Too broad: "Fix all the tests" (agent gets lost)
Specific: "Fix tests/test_auth.py" (focused scope)
No context: "Fix the race condition" (agent doesn't know where)
Context: Paste the error messages and test names
No constraints: Agent might refactor everything
Constraints: "Do NOT change production code" or "Fix tests only"
Vague output: "Fix it" (you don't know what changed)
Specific: "Return summary of root cause and changes"
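The four ingredients above (scope, context, constraints, expected output) can be assembled mechanically. A minimal sketch; the function and field names are illustrative, not part of any real API:

```python
# Sketch: build an agent prompt from the four ingredients named above.
def build_agent_prompt(scope, context, constraints, output_spec):
    return "\n".join([
        f"Task: {scope}",
        f"Context:\n{context}",
        f"Constraints: {constraints}",
        f"Return: {output_spec}",
    ])

prompt = build_agent_prompt(
    scope="Fix the 3 failing tests in tests/test_auth.py",
    context="test_login_with_expired_token - expects 401, gets 200",
    constraints="Do NOT change tests in other files.",
    output_spec="Summary of root cause and changes.",
)
```

If any field is hard to fill in, that is usually a sign the task is not yet scoped tightly enough to hand to an agent.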
Related failures: Fixing one might fix others; investigate together first
Need full context: Understanding requires seeing the entire system
Exploratory debugging: You don't know what's broken yet
Shared state: Agents would interfere (editing the same files, using the same resources)
After agents return:
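One check worth doing at this point is confirming the domains really were independent: if two agents touched the same file, their fixes may conflict. A minimal sketch, with hypothetical report shapes:

```python
# Sketch: flag files touched by more than one agent, which would suggest
# the problem domains were not truly independent. Report shapes are
# illustrative, not any real agent API.
agent_reports = [
    {"agent": 1, "files_changed": ["src/auth.py"]},
    {"agent": 2, "files_changed": ["src/validation.py"]},
    {"agent": 3, "files_changed": ["src/api.py", "src/auth.py"]},
]

seen, overlaps = {}, []
for report in agent_reports:
    for path in report["files_changed"]:
        if path in seen:
            overlaps.append((path, seen[path], report["agent"]))
        seen[path] = report["agent"]

# overlaps now lists (file, first agent, later agent) collisions
```

Any entry in `overlaps` is worth a manual review before merging the agents' changes.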