Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
Dispatches multiple agents to investigate independent failures in parallel.
/plugin marketplace add brmatola/gremlins
/plugin install brmatola-gremlins@brmatola/gremlins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
Use when:
- You have 2+ unrelated failures (different test files, different subsystems, different bugs)
- Each investigation is independent, with no shared state and no sequential dependencies

Don't use when:
- The failures look related, you need whole-system context, you're still exploring what's broken, or the agents would share state (covered in more detail below)
Group failures by what's broken:
- Tool abort behavior (agent-tool-abort.test.ts)
- Batch completion behavior (batch-completion-behavior.test.ts)
- Tool approval race conditions (tool-approval-race-conditions.test.ts)
Each domain is independent - fixing tool approval doesn't affect abort tests.
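A minimal sketch of how failures could be grouped into domains before dispatch; the `Failure` shape and its field names are assumptions for illustration, not part of any real test-runner API:

```typescript
// Hypothetical shape of a failing-test record; field names are assumed.
interface Failure {
  file: string;      // e.g. "src/agents/agent-tool-abort.test.ts"
  testName: string;
  message: string;
}

// Group failures by file: each file becomes one independent domain,
// so one agent can own it without touching the others.
function groupByDomain(failures: Failure[]): Map<string, Failure[]> {
  const domains = new Map<string, Failure[]>();
  for (const failure of failures) {
    const group = domains.get(failure.file) ?? [];
    group.push(failure);
    domains.set(failure.file, group);
  }
  return domains;
}
```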
Each agent gets:
- A specific scope (one domain, usually one test file)
- Full context (the failing test names and error messages)
- Constraints (what it must not change)
- An expected output (a summary of root cause and changes)
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
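A hedged sketch of how each dispatch could carry that scope, context, and constraints. `Task` stands in for whatever agent-dispatch call the environment provides, `Failure` is the record type from the grouping sketch above, and the prompt wording is illustrative:

```typescript
// `Task` is assumed to accept a single prompt string, as in the example above.
declare function Task(prompt: string): void;

// One focused prompt per domain: scope (the file), context (the errors),
// constraints, and the expected output.
function dispatchAgents(domains: Map<string, Failure[]>): void {
  for (const [file, failures] of domains) {
    const errors = failures
      .map((f) => `- "${f.testName}": ${f.message}`)
      .join("\n");

    Task(
      `Fix the ${failures.length} failing tests in ${file}:\n${errors}\n` +
        `Do NOT change production code unless you find a real bug.\n` +
        `Return: summary of root cause and changes.`
    );
  }
}
```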
When agents return:
- Read each agent's summary of root cause and changes
- Check that the independent fixes don't conflict with one another
Good agent prompts are specific, include the relevant error context, set constraints, and say exactly what to return. For example:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
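To illustrate "event-based waiting instead of arbitrary timeouts": a test can await the signal it cares about rather than sleeping for a guessed duration. The `"toolAborted"` event name and the emitter-based agent are assumptions about this codebase, not its real API:

```typescript
import { EventEmitter, once } from "node:events";

// Fragile: sleep for a fixed 500 ms and hope the abort has landed by then.
//   await new Promise((resolve) => setTimeout(resolve, 500));

// Event-based: resolve exactly when the abort is observed, however long it takes.
// "toolAborted" is a hypothetical event name, not the project's real one.
async function waitForAbort(agent: EventEmitter): Promise<void> {
  await once(agent, "toolAborted");
}
```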
- Too broad: "Fix all the tests" - the agent gets lost. Specific: "Fix agent-tool-abort.test.ts" - focused scope.
- No context: "Fix the race condition" - the agent doesn't know where to look. Context: paste the error messages and test names.
- No constraints: the agent might refactor everything. Constraints: "Do NOT change production code" or "Fix tests only".
- Vague output: "Fix it" - you don't know what changed. Specific: "Return a summary of root cause and changes."
Don't dispatch parallel agents when:
- Related failures: fixing one might fix the others - investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing the same files, using the same resources)
After agents return:
- Review the combined changes from all agents
- Run the full test suite to confirm the independent fixes work together
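A minimal sketch of that verification step, assuming each agent returns a plain-text summary and the project's suite runs with `npm test` (substitute the real command):

```typescript
import { execSync } from "node:child_process";

// Print each agent's report, then re-run the whole suite once to confirm
// that the independently developed fixes still pass together.
function verifyCombinedFixes(summaries: string[]): void {
  for (const summary of summaries) {
    console.log(summary);
  }
  execSync("npm test", { stdio: "inherit" }); // assumed runner command
}
```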