Format security findings into HackerOne-ready vulnerability reports with automatic scope/asset/evidence injection, CVSS rationale, vulnerability chaining, and program-specific context
/greyhatcc:h1-report <finding_id or description> [program_name]
`{{ARGUMENTS}}` is parsed automatically: a finding ID or a free-text description, optionally followed by a program name. No format specification is needed; detect the input type and proceed.
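Typical invocations might look like the following (the finding ID and program name are illustrative):

```
/greyhatcc:h1-report F-042 syfe
/greyhatcc:h1-report "CORS misconfiguration on api-au.syfe.com"
```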
Before executing this skill:
- `.greyhatcc/scope.json` — verify target is in scope, note exclusions
- `.greyhatcc/hunt-state.json` — check active phase, resume context
- `findings_log.md`, `tested.json`, `gadgets.json` — avoid duplicating work

Every report MUST begin by loading context. Never write from memory alone.
Determine the program from the finding or argument. The program directory is bug_bounty/<program>_bug_bounty/.
Required reads — do NOT skip any:
1. bug_bounty/<program>_bug_bounty/scope.md → Scope, assets, exclusions, rules, bounty tiers
2. bug_bounty/<program>_bug_bounty/findings_log.md → All findings (for chaining + dedup)
3. bug_bounty/<program>_bug_bounty/reports/*.md → Existing reports (for cross-references)
4. evidence/<finding_id>/* → All evidence files for this finding
5. .greyhatcc/scope.json → Machine-readable scope (if exists)
If any file is missing, note it but continue — do not halt.
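The "note it but continue" rule can be sketched as a small preflight check (the `acme` program name and the scaffolding lines are illustrative; real runs point at the actual program directory):

```shell
#!/bin/sh
# Demo scaffold so the check has something to inspect; real runs skip this.
base="bug_bounty/acme_bug_bounty"
mkdir -p "$base"
touch "$base/scope.md"

# Check each required file; warn on missing ones, but never halt.
for f in scope.md findings_log.md; do
  if [ -f "$base/$f" ]; then
    echo "OK      $base/$f"
  else
    echo "MISSING $base/$f (continuing)"
  fi
done
```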
From the loaded files, extract and hold:
- Exact asset names as listed in scope (e.g. `api-au.syfe.com`, not just "the API")

BEFORE writing the report, confirm the scope status of the finding. If the asset is only questionably in scope, add a `> **HOLD** ...` notice at the top of the report.

The report file is saved to: `bug_bounty/<program>_bug_bounty/reports/<number>_<short_name>.md`
Use sequential numbering matching the findings log.
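The sequential-numbering rule can be sketched as a small helper (the `acme` program and the demo report files are illustrative):

```shell
#!/bin/sh
reports="bug_bounty/acme_bug_bounty/reports"
mkdir -p "$reports"
touch "$reports/001_cors_misconfig.md" "$reports/002_idor_profile.md"  # demo data

# Highest existing numeric prefix, with leading zeros stripped so the
# arithmetic below is never parsed as octal; defaults to 0 when empty.
last=$(ls "$reports" 2>/dev/null | sed -n 's/^\([0-9][0-9]*\)_.*/\1/p' \
       | sort -n | tail -1 | sed 's/^0*//')
next=$(printf '%03d' $(( ${last:-0} + 1 )))
echo "$next"
```

Cross-check the result against `findings_log.md` before saving, since the log is the source of truth for numbering.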
# [Vulnerability Type] in [Exact Asset Name] Allows [Specific Impact to Users/Business]
**Researcher:** overtimedev
**Date:** YYYY-MM-DD
**Asset:** <exact asset name from scope> (<asset type>)
**Program:** <program name> (<HackerOne URL>)
---
## Severity
**<SEVERITY_WORD>**
CVSS v3.1 Vector: `CVSS:3.1/AV:../AC:../PR:../UI:../S:../C:../I:../A:..`
**Score: X.X**
| Metric | Value | Rationale |
|---|---|---|
| Attack Vector | Network/Adjacent/Local/Physical | Why this value |
| Attack Complexity | Low/High | What preconditions exist |
| Privileges Required | None/Low/High | What auth is needed |
| User Interaction | None/Required | Does victim need to act |
| Scope | Unchanged/Changed | Does it cross trust boundaries |
| Confidentiality | None/Low/High | What data is exposed |
| Integrity | None/Low/High | What can be modified |
| Availability | None/Low/High | What is disrupted |
**Every CVSS metric MUST have a written rationale.** Do not just pick values — justify each one with specifics about this vulnerability. Common mistakes to avoid:
- AC:L when a precondition exists (subdomain takeover, specific config, race window)
- S:U when the vuln crosses a trust boundary (e.g., API vuln exploited from different origin)
- PR:N when you actually need a low-priv account
- Inflating C/I/A beyond what the PoC demonstrates
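As a worked example, an unauthenticated IDOR that exposes only low-sensitivity account metadata might score as follows (the vector and values are illustrative, not a template to copy):

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N  ->  5.3 (Medium)
```

AV:N because the endpoint is reachable from the internet; PR:N because no account is needed; C:L (not H) because the PoC only demonstrates reading non-sensitive metadata; I:N and A:N because nothing is modified or disrupted.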
## Vulnerability Type
- **CWE-XXX** — Description (primary)
- **CWE-YYY** — Description (secondary, if applicable)
## TLDR
3 sentences maximum:
1. What is vulnerable and where (exact URL/endpoint)
2. What an attacker can do (specific actions, not generic "compromise")
3. What is the real-world impact (user count, data type, financial exposure, regulatory)
## Description
Detailed technical description. Include:
- The root cause mechanism (not just symptoms)
- Why the vulnerability exists (misconfiguration, logic flaw, missing validation)
- What the attacker controls and what they don't
- Boundary analysis: what works, what doesn't, what's correctly handled
- If this is a chain: explain each link and how they connect
### [Subsection per chain component if applicable]
## Steps to Reproduce
Numbered steps with EXACT commands. Every step must be copy-pasteable.
1. Each step has a specific command (curl, browser action, script)
2. Include ALL required headers (especially program-required research headers)
3. Show expected output after each step
4. Include a test matrix if multiple endpoints/origins/methods were tested
```bash
# Include the program's required research header
curl -sk \
  -H "X-HackerOne-Research: overtimedev" \
  -H "Origin: https://example.com" \
  -D - -o /dev/null \
  "https://target.example.com/endpoint"
```

Observed response:

```
HTTP/2 200
[relevant headers]
```
## Proof of Concept

Full working PoC code — not pseudocode.
## Impact

Structure impact by the CIA triad categories that apply:

### Confidentiality
Specific data that can be read. Name the data types, estimate scope.

### Integrity
Specific mutations that can be performed. Name the endpoints/actions.

### Availability
What can be disrupted. Quantify if possible.
## Chained Findings

Include this section if ANY other findings relate to this one.
| Finding | Relationship | Combined Impact |
|---|---|---|
| #XXX — [Title] (Report #XXX) | Prerequisite / Amplifier / Parallel path | How severity changes when combined |
Explicitly answer: "Does bug A produce input for bug B?"
## Remediation

Provide specific, actionable fixes (code/config changes, not just "fix the bug"). Include code examples in the target's tech stack when possible (Spring Boot for Java APIs, Express for Node, etc.).
## Evidence

Include verbatim HTTP request/response pairs. Reference evidence files:

- See: `evidence/<finding_id>/request.txt`
- See: `evidence/<finding_id>/response.txt`
- See: `evidence/<finding_id>/screenshot.png`
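The per-finding evidence layout can be sketched as follows (the finding ID `F-042` and file contents are illustrative; in practice each file holds the real captured traffic):

```shell
#!/bin/sh
fid="F-042"
mkdir -p "evidence/$fid"

# One directory per finding; the report references these paths verbatim.
printf 'GET /endpoint HTTP/2\nHost: target.example.com\n' > "evidence/$fid/request.txt"
printf 'HTTP/2 200\n[relevant headers]\n' > "evidence/$fid/response.txt"

ls "evidence/$fid"
```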
| Property | Value |
|---|---|
| Affected URL | https://exact-url.com/path |
| Method | GET/POST/etc |
| Auth Required | Yes (session cookie) / No |
| Backend Framework | Identified tech |
| CDN/WAF | Identified protection |
| CWE | CWE-XXX |
| CVSS v3.1 | Vector — Score Severity |
---
## Auto-Validation (Runs Automatically)
After writing the report file, the **report-validator hook** automatically runs and checks:
- Asset name matches scope
- Vuln type not on exclusion list
- Steps to Reproduce exist with curl commands
- Required research headers present
- CVSS score has rationale
- Not a duplicate of existing submission
If the hook reports errors, **fix them before proceeding**. For a full 8-gate validation, run `/greyhatcc:validate <report_file>`.
## Quality Checklist (Verify Before Saving)
Run through this before finalizing:
- [ ] **Title** follows `[Type] in [Asset] allows [Impact]` — is it under 100 chars?
- [ ] **Asset name** matches EXACTLY what's in the program scope
- [ ] **Finding is NOT on the exclusion list** — or you've proven the exclusion doesn't apply
- [ ] **CVSS rationale** exists for every metric — no unjustified values
- [ ] **Steps to Reproduce** are copy-pasteable — includes ALL headers, exact URLs
- [ ] **Program research header** is in every curl command (e.g., `X-HackerOne-Research: overtimedev`)
- [ ] **Impact is specific** — names data types, user actions, not "an attacker could compromise the system"
- [ ] **Evidence files exist** and are referenced
- [ ] **Chain table populated** if other findings relate
- [ ] **Remediation is actionable** — not just "fix the bug" but specific code/config changes
- [ ] **No false positives** — every claim is backed by deterministic proof in the evidence
- [ ] **Scope hold notice** added at top if the asset is questionably in-scope
## Common Rejection Reasons (Avoid These)
| Rejection Reason | How to Prevent |
|---|---|
| "Out of scope" | Verify asset is listed in scope.md. Add HOLD notice if uncertain |
| "Informational" / "N/A" | Prove exploitable impact, not just theoretical. Include working PoC |
| "Duplicate" | Check findings_log.md and existing reports for overlap. If similar, chain instead |
| "Won't fix" / "Accepted risk" | Focus on business impact and regulatory exposure, not just technical severity |
| "Not reproducible" | Steps must be copy-pasteable. Test your own steps before writing |
| "CORS without data exfil" | Always include a PoC page showing actual data read cross-origin |
| "Open redirect without impact" | Chain with OAuth token theft or phishing escalation |
| "Missing cookie flags" | Almost always out of scope. Don't submit unless you chain it |
| "Severity inflated" | Justify every CVSS metric. Triage teams downgrade aggressive scoring |
## Delegation
- Standard reports → `report-writer` (sonnet) with this skill as instruction
- Executive/complex chain reports → `report-writer-high` (opus)
- Quick finding notes → `report-writer-low` (haiku) with findings-log skill instead
**The report-writer agent MUST read all context files listed in Step 2 before writing.**
## Agent Dispatch Protocol
When delegating to agents via Task(), ALWAYS:
1. **Prepend worker preamble**: "[WORKER] Execute directly. No sub-agents. Output ≤500 words. Save findings to disk. 3 failures = stop and report."
2. **Set max_turns**: haiku=10, sonnet=25, opus=40
3. **Pass full context**: scope, exclusions, existing findings, recon data
4. **Route by complexity**: Quick checks → haiku agents (-low). Standard work → sonnet agents. Deep analysis/exploitation → opus agents.
## State Updates
After completing this skill:
1. Update `tested.json` — record what was tested (asset + vuln class)
2. Update `gadgets.json` — add any informational findings with provides/requires tags for chaining
3. Update `findings_log.md` — log any confirmed findings with severity
4. Update `hunt-state.json` if in an active hunt — set the `lastActivity` timestamp
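A minimal sketch of step 1, recording a tested asset and vuln class (the field names and the scaffold line are assumptions; match the schema your existing `tested.json` already uses):

```shell
#!/bin/sh
echo '[]' > tested.json   # demo scaffold; real runs start from the existing file

# Append one entry without disturbing the rest of the file.
python3 - <<'EOF'
import json
entries = json.load(open("tested.json"))
entries.append({"asset": "api.example.com", "vulnClass": "cors"})
json.dump(entries, open("tested.json", "w"), indent=2)
EOF

cat tested.json
```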