Multi-gate report quality validation - checks asset accuracy, scope compliance, dedup, proof, CVSS rationale, exclusion list, and submission readiness
From `greyhatcc` — install with `npx claudepluginhub overtimepog/greyhatcc --plugin greyhatcc`. This skill uses the workspace's default tool permissions.
`/greyhatcc:validate <report_file or finding_id> [program_name]`
`{{ARGUMENTS}}` is parsed automatically — no format specification is needed; detect and proceed.
Runs every quality gate on a report before it gets submitted to HackerOne. This is the last line of defense against rejected reports.
Before executing this skill, load workspace state:

- `.greyhatcc/scope.json` — verify the target is in scope; note exclusions
- `.greyhatcc/hunt-state.json` — check the active phase; resume context
- `findings_log.md`, `tested.json`, `gadgets.json` — avoid duplicating work

Top rejection causes, in order:

1. **Asset accuracy** — the #1 cause of rejected reports. The **Asset:** field is missing or inexact. Fix: replace the asset name with the exact string from the program's scope table.
2. **Scope compliance** — the #2 cause of rejected reports.
3. **Proof of exploitation** — the #3 cause: "not reproducible" or "theoretical." A bare `alert(1)` is not proof.

Also verify the title follows the `[Type] in [Asset] allows [Impact]` format and that every command carries the research header (`X-HackerOne-Research: overtimedev`).

## Report Validation: <report_file>
| Gate | Status | Details |
|------|--------|---------|
| 1. Asset Accuracy | PASS/FAIL | [details] |
| 2. Scope Compliance | PASS/FAIL | [details] |
| 3. Exclusion List | PASS/FAIL | [details] |
| 4. Duplicate Check | PASS/FAIL | [details] |
| 5. Proof of Exploitation | PASS/FAIL | [details] |
| 6. CVSS Integrity | PASS/FAIL | [details] |
| 7. Report Completeness | PASS/FAIL | [details] |
| 8. Program Rules | PASS/FAIL | [details] |
### Overall: [READY TO SUBMIT / NEEDS FIXES / DO NOT SUBMIT]
### Required Fixes (if any):
1. [specific fix needed]
2. [specific fix needed]
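The overall verdict line can be derived mechanically from the eight gate results. A minimal Python sketch — the gate names come from the table above, but the choice of which failures are hard stops is an assumption, not something this skill specifies:

```python
def overall_verdict(gates: dict) -> str:
    """Collapse per-gate PASS/FAIL results into the Overall line.
    ASSUMPTION: scope, exclusion, and duplicate failures are hard stops;
    everything else is fixable before submission."""
    hard_stops = {"Scope Compliance", "Exclusion List", "Duplicate Check"}
    failed = {name for name, status in gates.items() if status != "PASS"}
    if not failed:
        return "READY TO SUBMIT"
    if failed & hard_stops:
        return "DO NOT SUBMIT"
    return "NEEDS FIXES"
```
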
### Gate 1: Asset Accuracy

PASS if ALL:
- Report contains "**Asset:**" field
- Asset string exactly matches an entry in scope.md authorized.assets
- Asset type matches (URL, Domain, Android App, etc.)
- URLs in Steps to Reproduce are on the declared asset
FAIL if ANY:
- No asset field in report
- Asset name is paraphrased ("the API" instead of "api-au.syfe.com")
- Asset uses wildcard when a specific subdomain was tested
- Report tested on UAT but asset field says production domain
AUTO-FIX: Replace asset name with exact string from scope.md asset list
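Gate 1's exact-match check is mechanical. A Python sketch, assuming scope data shaped like `{"authorized": {"assets": [{"name": ...}]}}` — the real `scope.json`/`scope.md` layout may differ:

```python
import re

def asset_gate(report_md: str, scope: dict):
    """Gate 1: the **Asset:** field must exactly match a scoped asset name.
    ASSUMPTION: asset names contain no spaces; a trailing '(Type)' is ignored."""
    m = re.search(r"\*\*Asset:\*\*\s*([^\s(]+)", report_md)
    if not m:
        return False, "no **Asset:** field in report"
    asset = m.group(1)
    authorized = {a["name"] for a in scope["authorized"]["assets"]}
    if asset in authorized:
        return True, f"'{asset}' matches scope exactly"
    return False, f"'{asset}' is not an exact scope entry"
```
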
### Gate 2: Scope Compliance

PASS if ALL:
- Asset is listed in scope.md in-scope table
- If wildcard match: specific subdomain is not in excluded list
- If UAT: program rules explicitly allow UAT findings
- No excluded domains appear in any curl command
FAIL if ANY:
- Asset not found in scope.md in-scope list
- Subdomain matches an excluded domain pattern
- UAT-only finding when program requires prod validation
- Steps include requests to out-of-scope domains
AUTO-FIX: Add HOLD notice at top of report if asset is questionable. Remove OOS domains from steps.
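The wildcard-with-exclusions logic of Gate 2 can be sketched with stdlib glob matching. `scope_gate` is a hypothetical helper; exclusions are checked first so they always win over a wildcard in-scope match:

```python
from fnmatch import fnmatchcase

def scope_gate(host: str, in_scope: list, excluded: list) -> bool:
    """Gate 2: host must match an in-scope pattern and no exclusion pattern."""
    if any(fnmatchcase(host, pat) for pat in excluded):
        return False  # an excluded subdomain fails even under a wildcard
    return any(fnmatchcase(host, pat) for pat in in_scope)
```
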
### Gate 3: Exclusion List

PASS if ALL:
- Vulnerability type is NOT on excluded.vulnTypes list
- OR: Report explicitly proves the exclusion does not apply (e.g., "CORS without exfil" excluded but report has working exfil PoC)
- OR: Finding is part of a chain that elevates it past the exclusion
FAIL if ANY:
- Vuln type matches excluded.vulnTypes exactly
- Report does not address why the exclusion doesn't apply
- No chain documented that overcomes the exclusion
AUTO-FIX: If chainable, add chain documentation section. If not, recommend DO NOT SUBMIT.
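Gate 3's decision tree is small enough to sketch directly; the function name and argument shapes here are illustrative only:

```python
def exclusion_gate(vuln_type: str, excluded_types: set,
                   has_overriding_poc: bool = False,
                   chained_with: tuple = ()) -> str:
    """Gate 3: excluded vuln types fail unless a PoC or chain overrides."""
    if vuln_type not in excluded_types:
        return "PASS"
    if has_overriding_poc:
        return "PASS"  # e.g. "CORS without exfil" excluded, but exfil PoC works
    if chained_with:
        return "PASS"  # a documented chain elevates it past the exclusion
    return "DO NOT SUBMIT"
```
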
### Gate 4: Duplicate Check

PASS if ALL:
- Finding ID not in submissions.json
- No identical (same endpoint + same vuln type) entry in findings_log.md with status "Reported"
- No report in reports/ directory covering the same vulnerability
- Hacktivity check returns LOW or CLEAR dupe risk
FAIL if ANY:
- Finding already in submissions.json
- Root-cause duplicate of another finding
- Hacktivity check returns HIGH dupe risk
AUTO-FIX: If root-cause dupe, suggest combining into the existing report. If hacktivity dupe, suggest DIFFERENTIATE approach.
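Gate 4's checks against `submissions.json` and `findings_log.md` reduce to set membership plus a root-cause comparison (same endpoint + same vuln type). A sketch, assuming each record is a dict with `id`, `endpoint`, `vuln_type`, and `status` keys — the real file schemas may differ:

```python
def duplicate_gate(finding: dict, submissions: list, findings_log: list):
    """Gate 4: reject already-submitted or root-cause duplicate findings."""
    if finding["id"] in {s["id"] for s in submissions}:
        return False, "already in submissions.json"
    for past in findings_log:
        same_root = (past["endpoint"], past["vuln_type"]) == \
                    (finding["endpoint"], finding["vuln_type"])
        if same_root and past.get("status") == "Reported":
            return False, "root-cause duplicate in findings_log.md"
    return True, "no duplicate found"
```
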
### Gate 5: Proof of Exploitation

PASS if ALL:
- Steps to Reproduce contain copy-pasteable curl commands or scripts
- Every command includes ALL required headers (research header, auth tokens)
- Expected output is shown after each step
- A working PoC exists (not pseudocode)
- Evidence files referenced in report actually exist on disk
- Vuln-specific proof meets bar:
* CORS: PoC HTML page that demonstrates actual cross-origin data read
* XSS: Payload fires in realistic context (not just alert(1) in self-XSS)
* SSRF: Proof of internal access beyond DNS callback
* IDOR: Cross-user data access (not own data with different ID format)
FAIL if ANY:
- No curl commands in Steps to Reproduce
- Missing required research headers in commands
- No expected output shown
- PoC is theoretical/pseudocode
- Evidence files referenced but don't exist
- CORS without exfil PoC, SSRF with only DNS callback, IDOR accessing own data
AUTO-FIX: Re-run proof-validator skill to capture fresh evidence. Add missing headers to commands. Create PoC script.
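Parts of Gate 5 are automatable: scanning for curl commands, the research header, and evidence files on disk. A sketch — the `evidence/...` path convention is an assumption, and vuln-specific proof bars still need a human or the proof-validator skill:

```python
import re
from pathlib import Path

def proof_gate(report_md: str) -> list:
    """Gate 5 (mechanical half): concrete commands, required headers,
    and on-disk evidence files. Returns a list of failures (empty = PASS)."""
    failures = []
    curls = re.findall(r"^\s*curl .+$", report_md, re.MULTILINE)
    if not curls:
        failures.append("no curl commands in Steps to Reproduce")
    for cmd in curls:
        if "X-HackerOne-Research" not in cmd:
            failures.append("missing research header: " + cmd.strip()[:50])
    for ref in re.findall(r"evidence/[\w./-]+", report_md):
        if not Path(ref).exists():
            failures.append("referenced evidence file not on disk: " + ref)
    return failures
```
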
### Gate 6: CVSS Integrity

PASS if ALL:
- CVSS vector string present and syntactically valid (CVSS:3.1/AV:../AC:..)
- Every metric has a written rationale (not just the value)
- Computed score matches the vector (no manual inflation)
- Conservative checks pass:
* AC:L → no preconditions needed
* PR:N → truly unauthenticated
* S:C → actually crosses trust boundary
* Score >= 9.0 → RCE, full ATO, or mass data breach evidence
* Score >= 7.0 → more than information disclosure
FAIL if ANY:
- No CVSS vector string
- Missing rationale for any metric
- Score doesn't match vector computation
- AC:L claimed but preconditions exist
- PR:N claimed but free account needed
- Inflated score without matching evidence
AUTO-FIX: Recalculate CVSS with conservative values. Add rationale template for each metric.
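The syntactic half of Gate 6 is a regex over the CVSS 3.1 base vector. This sketch assumes the canonical metric order and leaves rationale and score-consistency checks to manual review:

```python
import re

# CVSS 3.1 base vector in canonical metric order (an assumption;
# the spec also permits reordered metrics).
_CVSS31 = re.compile(
    r"^CVSS:3\.1/AV:[NALP]/AC:[LH]/PR:[NLH]/UI:[NR]/S:[UC]"
    r"/C:[NLH]/I:[NLH]/A:[NLH]$"
)

def cvss_gate(vector: str):
    """Gate 6 (syntax half): return parsed metrics if well-formed, else None."""
    if not _CVSS31.match(vector):
        return None
    return dict(part.split(":", 1) for part in vector.split("/")[1:])
```
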
### Gate 7: Report Completeness

PASS if ALL:
- Title follows "[Type] in [Asset] allows [Impact]" format
- Title is under 100 characters
- TLDR exists (3 sentences max)
- CWE classification present
- Impact names specific data types / user actions
- Remediation has actionable steps
- Chain table populated if related findings exist
- References include OWASP/CWE links
FAIL if ANY:
- Title is generic ("XSS vulnerability")
- No TLDR
- No CWE
- Impact is vague ("an attacker could compromise the system")
- Remediation says "fix the bug" without specifics
- Related findings exist but no chain table
AUTO-FIX: Generate proper title from finding details. Add CWE lookup. Expand impact with specific data types. Add chain table from gadgets.json.
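Gate 7's title checks can be sketched as below; the regex is a loose approximation of the `[Type] in [Asset] allows [Impact]` shape, not an exact grammar:

```python
import re

def title_gate(title: str) -> list:
    """Gate 7 (title checks only): format and length. Empty list = PASS."""
    problems = []
    if len(title) >= 100:
        problems.append("title is not under 100 characters")
    if not re.search(r"\S.* in \S.* allows ", title, re.IGNORECASE):
        problems.append('title does not follow "[Type] in [Asset] allows [Impact]"')
    return problems
```
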
### Gate 8: Program Rules

PASS if ALL:
- Required research headers in every curl command
- Correct test account used (if program provides test accounts)
- No prohibited methods used (no DoS, no social engineering)
- Testing hours respected (if program has testing windows)
FAIL if ANY:
- Missing required headers in any curl command
- Wrong test account or unauthorized account
- Report mentions prohibited testing methods
- Testing conducted outside allowed hours
AUTO-FIX: Add missing headers to all curl commands. Note correct test account.
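Gate 8's AUTO-FIX for missing headers can be sketched as a string rewrite. `fix_curl` is a hypothetical helper and is naive about shell quoting; review the rewritten command before use:

```python
RESEARCH_HEADER = "X-HackerOne-Research: overtimedev"

def fix_curl(cmd: str, header: str = RESEARCH_HEADER) -> str:
    """AUTO-FIX: inject the research header into a curl command that
    lacks it; commands already carrying it pass through unchanged."""
    if header.split(":")[0] in cmd:
        return cmd
    return cmd.replace("curl", 'curl -H "' + header + '"', 1)
```
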
### Failure Examples

| Failure | Bad Example | Good Example |
|---|---|---|
| Generic title | "CORS vulnerability" | "CORS Misconfiguration in api-au.syfe.com Allows Authenticated Data Theft via Origin Reflection" |
| Missing asset | "Found on the API" | "Asset: api-au.syfe.com (URL)" |
| Vague impact | "Attacker can access data" | "Attacker can read victim's financial portfolio, bank account numbers, and transaction history" |
| No research header | `curl -sk https://target/` | `curl -sk -H "X-HackerOne-Research: overtimedev" https://target/` |
| Inflated CVSS | AC:L when subdomain takeover needed | AC:H with rationale: "Requires claiming dangling subdomain" |
| Pseudocode PoC | "Send a request to the endpoint" | `curl -sk -H "Origin: https://evil.com" https://api.target.com/endpoint` |
Run `/greyhatcc:h1-report` after report generation. Gate checks run on the `report-quality-gate` agent (haiku). When delegating to agents via Task(), ALWAYS:
After completing this skill:
- `tested.json` — record what was tested (asset + vuln class)
- `gadgets.json` — add any informational findings with provides/requires tags for chaining
- `findings_log.md` — log any confirmed findings with severity