From superhackers
Use when facing 2+ independent security tasks that can be worked on without shared state or sequential dependencies — parallel scans across targets, simultaneous testing of unrelated attack surfaces, or concurrent investigation of independent findings
```
npx claudepluginhub narlyseorg/superhackers --plugin superhackers
```

This skill uses the workspace's default tool permissions.
> This is a workflow coordination skill. It requires no external security tools — it orchestrates how the AI dispatches and manages parallel tasks.
MANDATORY: All parallel dispatch commands MUST follow this protocol:
Agent dispatch with validation
```
# When dispatching agents, validate each dispatch succeeds
# Agent tool handles this internally, but verify responses
```
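A minimal Python sketch of this check, assuming a hypothetical `dispatch_agent` wrapper around the Agent tool; the response shape is an assumption, not the tool's real API:

```python
# Hypothetical sketch: `dispatch_agent` stands in for the real Agent tool,
# and its response shape ({"ok": ..., "agent_id": ...}) is an assumption.
def dispatch_agent(prompt: str) -> dict:
    # Placeholder: a real implementation would call the Agent tool here.
    return {"ok": True, "agent_id": abs(hash(prompt)) % 1000, "prompt": prompt}

def dispatch_all(prompts: list[str]) -> list[dict]:
    """Dispatch every prompt, keeping only validated handles."""
    handles = []
    for prompt in prompts:
        response = dispatch_agent(prompt)
        if not response.get("ok"):
            raise RuntimeError(f"dispatch failed for: {prompt!r}")
        handles.append(response)
    return handles

handles = dispatch_all([
    "Enumerate and test webapp at 10.10.10.50:443",
    "Fuzz and test API endpoints at 10.10.10.50:8443",
])
```

Failing fast on a bad dispatch keeps a silently dropped agent from turning into a missing section of the final report.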
Output collection with timeout
```
# Set reasonable timeout for parallel operations
# Default: 30 minutes for complex scans, 5-10 minutes for focused tests
```
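One way to sketch timeout-bounded collection, using the standard library's `concurrent.futures`. The worker function and short timeout are stand-ins; a real run would use the 30-minute or 5-10-minute defaults above:

```python
import concurrent.futures

def run_task(name: str) -> str:
    # Stand-in for a real agent task; returns a findings summary.
    return f"{name}: done"

def collect_with_timeout(tasks: list[str], timeout_s: float) -> dict[str, str]:
    """Run tasks concurrently and stop waiting once timeout_s expires."""
    results: dict[str, str] = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(run_task, t): t for t in tasks}
        # as_completed raises TimeoutError if results are still pending
        # when the deadline passes; results collected so far are kept.
        for fut in concurrent.futures.as_completed(futures, timeout=timeout_s):
            results[futures[fut]] = fut.result()
    return results

results = collect_with_timeout(["webapp", "api", "infra"], timeout_s=5.0)
```

The deadline applies to the whole batch, so one slow scan cannot hold the collection phase open indefinitely.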
Agent failure handling
```
# If an agent fails or times out:
# 1. Log the failure with agent ID and task description
# 2. Continue collecting results from other agents
# 3. Retry failed agent if critical (max 3 attempts)
# 4. Report partial results if retry fails
```
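The four steps above could be sketched in Python like this; the `critical` flag and the task callables are hypothetical:

```python
import logging

MAX_RETRIES = 3  # step 3: retry critical agents at most 3 times

def run_with_retry(agent_id: str, task, critical: bool = False):
    """Run a task, logging failures; retry only if the agent is critical."""
    attempts = MAX_RETRIES if critical else 1
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:  # step 1: log failure with agent ID
            logging.warning("agent %s failed (attempt %d/%d): %s",
                            agent_id, attempt, attempts, exc)
    return None  # step 4: the caller reports partial results

# Step 2: a failure in one agent does not block collection from the others.
partial_results = {}
for agent_id, task in {"webapp": lambda: "2 findings"}.items():
    outcome = run_with_retry(agent_id, task, critical=True)
    if outcome is not None:
        partial_results[agent_id] = outcome
```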
Result validation
```
# When agents return, validate their output:
# - Check for findings summary
# - Verify scope compliance
# - Confirm evidence was collected
# If output is missing or malformed, request clarification
```
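A sketch of those checks as a small validator; the required keys are an assumption about what a well-formed agent report contains:

```python
REQUIRED_KEYS = {"findings", "scope", "evidence"}  # assumed report fields

def validate_output(output: dict) -> list[str]:
    """Return a list of problems; an empty list means the output is usable."""
    problems = [f"missing '{key}'"
                for key in sorted(REQUIRED_KEYS - output.keys())]
    if not problems and not isinstance(output["findings"], list):
        problems.append("'findings' is not a list")
    return problems

# A malformed report triggers a clarification request rather than a crash.
issues = validate_output({"scope": "10.10.10.50 only"})
if issues:
    clarification = f"Please re-send your report; problems: {', '.join(issues)}"
```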
When you have multiple independent security tasks (different targets, different attack vectors, different findings to investigate), running them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
```dot
digraph when_to_use {
  "Multiple targets/tasks?" [shape=diamond];
  "Are they independent?" [shape=diamond];
  "Single agent handles all" [shape=box];
  "One agent per target/vector" [shape=box];
  "Can they work in parallel?" [shape=diamond];
  "Sequential agents" [shape=box];
  "Parallel dispatch" [shape=box];
  "Multiple targets/tasks?" -> "Are they independent?" [label="yes"];
  "Are they independent?" -> "Single agent handles all" [label="no - related"];
  "Are they independent?" -> "Can they work in parallel?" [label="yes"];
  "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
  "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
Use when:
Don't use when:
Group tasks by what's being tested:
Each domain is independent — testing the webapp doesn't affect API testing or network service enumeration.
Each agent gets:
```
# Security engagement with parallel agents
Task("Enumerate and test webapp at 10.10.10.50:443")
Task("Fuzz and test API endpoints at 10.10.10.50:8443")
Task("Scan and enumerate network services on 10.10.10.0/24")
# All three run concurrently
```
When agents return:
Good agent prompts are:
```
Enumerate and test the web application at https://10.10.10.50:
1. Run directory fuzzing with ffuf against common wordlists
2. Run nuclei with web-specific templates
3. Test for OWASP Top 10 — focus on injection and auth bypass
4. Check for sensitive file exposure (.git, .env, backup files)

Scope constraints:
- Target ONLY 10.10.10.50 ports 80 and 443
- Do NOT attempt DoS or brute-force attacks
- Do NOT pivot to other hosts

Return: List of verified findings with severity, evidence (request/response),
and reproduction steps. Use superhackers:writing-security-reports finding format.
```
❌ Too broad: "Test the entire network" — agent gets overwhelmed
✅ Specific: "Enumerate services on 10.10.10.0/24 subnet" — focused scope

❌ No context: "Find the SQL injection" — agent doesn't know where
✅ Context: Provide the target URL, parameter names, and observed behavior

❌ No constraints: Agent might scan out of scope or trigger alerts
✅ Constraints: "Target ONLY this IP range" or "Do NOT run active exploits"

❌ Vague output: "Report what you find" — inconsistent findings format
✅ Specific: "Return findings using superhackers:writing-security-reports format with evidence"
- Chained vulnerabilities: One finding feeds the next — investigate together first
- Shared target rate limiting: Multiple agents hitting the same host triggers WAF/IDS
- Exploratory recon: You don't know the attack surface yet — enumerate first
- Shared state: Agents would interfere (same authenticated session, same target host under rate limits)
Scenario: Full security assessment of a corporate environment with 3 distinct attack surfaces
Targets discovered during recon:
Decision: Independent domains — webapp testing doesn't affect API testing or internal network assessment
Dispatch:
```
Agent 1 → webapp-pentesting on portal.target.com (nuclei + ffuf + manual OWASP testing)
Agent 2 → api-pentesting on api.target.com (endpoint fuzzing + auth bypass + BOLA testing)
Agent 3 → infra-pentesting on 10.10.10.0/24 (nmap + service enumeration + SMB testing)
```
Results:
Integration: All findings independent, no scope conflicts, consolidated into unified report
Time saved: 3 attack surfaces assessed in parallel vs sequentially
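The consolidation step might look like this; the severity names and finding shape here are assumptions for illustration, not the superhackers report format:

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def consolidate(per_agent: dict[str, list[dict]]) -> list[dict]:
    """Merge independent agents' findings and sort by severity."""
    merged = [
        {**finding, "source_agent": agent}
        for agent, findings in per_agent.items()
        for finding in findings
    ]
    return sorted(merged, key=lambda f: SEVERITY_ORDER[f["severity"]])

report = consolidate({
    "webapp": [{"title": "SQLi in login form", "severity": "high"}],
    "api": [{"title": "BOLA on /users/{id}", "severity": "critical"}],
    "infra": [{"title": "SMB signing disabled", "severity": "medium"}],
})
```

Tagging each finding with its source agent preserves the per-surface context while the unified report is sorted by impact.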
After agents return:
Called by:
Pairs with:
From security engagement: