End-to-end bug bounty workflow from program research through HackerOne-ready report submission
From greyhatcc. Install: npx claudepluginhub overtimepog/greyhatcc --plugin greyhatcc. This skill uses the workspace's default tool permissions.
/greyhatcc:bounty <program_name or HackerOne URL>
{{ARGUMENTS}} is parsed automatically:
- Program handle (e.g., security) → used directly with the H1 API

No format specification is needed — the skill detects the format and proceeds.
Before executing this skill:
- .greyhatcc/scope.json — verify target is in scope, note exclusions
- .greyhatcc/hunt-state.json — check active phase, resume context
- findings_log.md, tested.json, gadgets.json — avoid duplicating work

For autonomous bug bounty hunting, use /greyhatcc:hunt <program> instead. Hunt mode uses the v7 event-driven priority-queue architecture that replaces the manual phase workflow below.
Hunt mode advantages over manual workflow:
The manual workflow below is still valid for when you want fine-grained control over each phase.
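As a rough illustration, the pre-flight check against the state files above can be sketched in Python. The JSON schemas shown (in_scope/exclusions keys, phase field) are assumptions for the sketch, not the plugin's documented formats:

```python
import json
from pathlib import Path

def read_state(path: str, default):
    """Load a JSON state file, falling back to a default when absent."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else default

def target_in_scope(target: str, scope: dict) -> bool:
    """Naive check: target listed in in_scope and not excluded.
    The scope.json keys used here are hypothetical."""
    return (target in scope.get("in_scope", [])
            and target not in scope.get("exclusions", []))

# Missing files yield safe defaults so a fresh workspace still runs.
scope = read_state(".greyhatcc/scope.json", {"in_scope": [], "exclusions": []})
state = read_state(".greyhatcc/hunt-state.json", {"phase": None})
```

A real implementation would also honor wildcard scope entries (e.g., *.example.com), which this exact-match sketch does not.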
Use /greyhatcc:program <URL> to automate this entire phase. It uses Playwright browser automation to extract scope from JS-rendered HackerOne pages, Perplexity for supplementary intel, and the HackerOne API when configured.
- /greyhatcc:guides <vuln_type> for attack vectors matching the target tech stack
- bug_bounty/<program>_bug_bounty/{recon,findings,reports,evidence,scripts,notes} workspace structure
- gadgets.json, tested.json, submissions.json state files
- attack_plan.md with prioritized targets

Delegate to the recon skill with parallel agents:
- /greyhatcc:takeover on results
- /greyhatcc:js on all discovered web assets
- /greyhatcc:cloud for bucket/CDN origin discovery
- Update tested.json and gadgets.json with all recon findings

Focus on business logic first (automation handles CVEs):
- /greyhatcc:auth for dedicated OAuth/JWT testing
- /greyhatcc:api for dedicated REST/GraphQL testing

Before testing each endpoint, check tested.json to avoid redundant work.
After each finding: update findings_log.md, gadgets.json, and tested.json.
Check exclusions: verify every finding against the program's non-qualifying list from scope.md.
Delegate to webapp-tester agent for systematic testing. Pass full context (scope, exclusions, existing findings, recon data) per context-loader protocol.
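The check-before-test / update-after-finding loop can be sketched like this. The tested.json shape used here ({asset: [vuln_class, ...]}) is an assumption for illustration:

```python
import json
from pathlib import Path

def already_tested(tested: dict, asset: str, vuln_class: str) -> bool:
    """Return True if this asset/vuln-class pair is already covered.
    Assumed tested.json shape: {"asset": ["vuln_class", ...]}."""
    return vuln_class in tested.get(asset, [])

def record_test(tested: dict, asset: str, vuln_class: str, path: str) -> None:
    """Mark an asset/vuln-class pair as tested and persist to disk."""
    tested.setdefault(asset, [])
    if vuln_class not in tested[asset]:
        tested[asset].append(vuln_class)
    Path(path).write_text(json.dumps(tested, indent=2))
```

Persisting after every test keeps the state files authoritative even if a session is interrupted mid-phase.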
- /greyhatcc:gadgets chain — identify all chaining opportunities before writing reports
- /greyhatcc:dedup on each finding — verify it hasn't been reported or submitted before
- /greyhatcc:findings
- /greyhatcc:h1-report (which auto-loads scope, evidence, chain context)
- Update submissions.json when reports are submitted to HackerOne

STOP recon and START testing when:
- All in-scope domains have been enumerated and resolved
- Tech stack identified for all primary assets
- JS bundles analyzed for at least the top 3 assets
- WAF/CDN identified for all web assets
- attack_plan.md has been written with prioritized targets
- 60-70% of allocated time has been spent on recon
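The recon exit criteria above can be expressed as a single predicate. Field names are hypothetical, and the 70% figure is taken as the upper bound of the stated 60-70% time budget:

```python
def recon_complete(stats: dict) -> bool:
    """True when every recon exit criterion holds, or the time budget is spent."""
    criteria_met = (
        stats.get("domains_enumerated", False)        # all in-scope domains resolved
        and stats.get("tech_stack_identified", False)  # primary assets fingerprinted
        and stats.get("js_bundles_analyzed", 0) >= 3   # top 3 assets' bundles reviewed
        and stats.get("waf_cdn_identified", False)     # WAF/CDN known per web asset
        and stats.get("attack_plan_written", False)    # attack_plan.md exists
    )
    return criteria_met or stats.get("time_spent_pct", 0) >= 70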
REPORT immediately if:
- Finding is HIGH or CRITICAL standalone
- Finding has working PoC with clear impact
- Finding is not on the exclusion list
CHAIN FIRST if:
- Finding is LOW or MEDIUM standalone
- Finding is on the exclusion list but has chain potential
- gadgets.json has a complementary gadget (provides/requires match)
- Classic chain pattern applies (self-XSS+CSRF, redirect+OAuth, SSRF+metadata)
DO NOT REPORT if:
- Finding is on the ALWAYS_REJECTED dupe list
- Finding cannot be chained
- Finding has no working PoC
- Finding is a duplicate (dedup check fails)
MOVE to next target when:
- All vuln classes from OWASP Top 10 tested
- All tech-stack-specific tests run (e.g., GraphQL tests for GraphQL endpoints)
- tested.json shows full coverage for this asset
- No more untested endpoints from recon data
- Diminishing returns: 3+ consecutive tests with no findings
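The report/chain/skip decision above can be sketched as one function. All field names (severity, has_poc, excluded, requires/provides) are assumptions, and the "classic chain pattern" rule is approximated by the gadget provides/requires match:

```python
def triage(finding: dict, gadgets: list) -> str:
    """Map a finding to 'report', 'chain', or 'skip' per the decision rules.
    Field names here are hypothetical, not the plugin's real schema."""
    # DO NOT REPORT: rejected-dupe list, duplicate, or no working PoC
    if (finding.get("always_rejected")
            or finding.get("duplicate")
            or not finding.get("has_poc")):
        return "skip"
    high = finding.get("severity") in ("HIGH", "CRITICAL")
    excluded = finding.get("excluded", False)
    # REPORT immediately: HIGH/CRITICAL standalone, not excluded
    if high and not excluded:
        return "report"
    # CHAIN FIRST: a complementary gadget satisfies a requirement
    needs = set(finding.get("requires", []))
    if any(needs & set(g.get("provides", [])) for g in gadgets):
        return "chain"
    # Excluded with no chain potential is dead; LOW/MEDIUM still tries chaining
    return "skip" if excluded else "chain"
```

Here "chain" means attempt chaining before deciding whether to report, matching the CHAIN FIRST rules above.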
When choosing between multiple programs:
ROI Score = (max_bounty * asset_density * tech_complexity) / (competition * program_age)
Factors:
- max_bounty: Critical tier maximum ($)
- asset_density: Number of in-scope assets (more = more surface area)
- tech_complexity: Score 1-5 based on tech stack complexity
* 5: GraphQL + OAuth + microservices + mobile + cloud
* 4: REST API + JWT + cloud services
* 3: Standard web app + API
* 2: Simple web app, minimal API
* 1: Static site, minimal attack surface
- competition: Estimated researcher count (from hacktivity volume)
* New program (< 4 weeks): competition = 1 (LOW — best ROI)
* Active program (4-12 weeks): competition = 2
* Mature program (> 12 weeks): competition = 3
- program_age: Weeks since launch (newer = more low-hanging fruit)
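The scoring formula translates directly to code. The 1.0 floor on the denominator is an added guard for week-zero programs, not part of the stated formula:

```python
def roi_score(max_bounty: float, asset_density: int, tech_complexity: int,
              competition: int, program_age_weeks: float) -> float:
    """ROI = (max_bounty * asset_density * tech_complexity)
           / (competition * program_age)."""
    denom = max(competition * program_age_weeks, 1.0)  # avoid divide-by-zero
    return (max_bounty * asset_density * tech_complexity) / denom
```

For example, a new program (competition 1, 2 weeks old) with a $10,000 critical tier, 20 assets, and complexity 4 scores far higher than the same program after 20 weeks at competition 3.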
After completing this skill:
- tested.json — record what was tested (asset + vuln class)
- gadgets.json — add any informational findings with provides/requires tags for chaining
- findings_log.md — log any confirmed findings with severity