Check if a discovered bug has been previously found, reported, or submitted — prevents duplicate submissions and wasted effort
This skill uses the workspace's default tool permissions.
/greyhatcc:dedup <finding description or vuln type + endpoint>
{{ARGUMENTS}} is parsed automatically: no format specification is needed; detect the format and proceed.
Examples:
/greyhatcc:dedup "CORS misconfiguration on api.example.com"
/greyhatcc:dedup "IDOR on /api/v2/users/{id}"
/greyhatcc:dedup "exposed actuator on api-au.syfe.com"
Before checking, follow the context-loader protocol:
Read bug_bounty/<program>_bug_bounty/findings_log.md and check:
Output:
LOCAL FINDINGS CHECK:
- Exact match: [YES/NO] — [finding ID if found]
- Root cause match: [YES/NO] — [finding ID if found]
- Related findings: [list any related findings with IDs]
Read all files in bug_bounty/<program>_bug_bounty/reports/*.md (just titles/headers, not full content) and check:
Output:
EXISTING REPORTS CHECK:
- Covered by report: [YES/NO] — [report filename if found]
- Same endpoint in another report: [YES/NO] — [report filename]
- Part of existing chain: [YES/NO] — [chain description if found]
Read bug_bounty/<program>_bug_bounty/submissions.json (if exists) and check:
Output:
SUBMISSION HISTORY CHECK:
- Previously submitted: [YES/NO]
- H1 Report ID: [ID if submitted]
- Status: [status]
- Date submitted: [date]
- Notes: [any notes about triage response]
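The submission-history check above can be sketched as a small helper. This is an illustrative sketch, not the skill's actual implementation: the function names are hypothetical, and the object shape is taken from the submissions.json example shown later in this document.

```javascript
// Hypothetical helper for the submission-history check: given the parsed
// contents of submissions.json, find any prior submission covering the
// same asset and vuln type. Names here are illustrative, not the skill's API.
function findPriorSubmission(submissionsDoc, asset, vulnType) {
  const subs = (submissionsDoc && submissionsDoc.submissions) || [];
  return subs.find((s) => s.asset === asset && s.vuln_type === vulnType) || null;
}

// Summarize the result in the same shape the skill's output block uses.
function submissionHistoryCheck(submissionsDoc, asset, vulnType) {
  const prior = findPriorSubmission(submissionsDoc, asset, vulnType);
  return {
    previouslySubmitted: prior !== null,
    h1ReportId: prior ? prior.h1_report_id : null,
    status: prior ? prior.status : null,
    dateSubmitted: prior ? prior.date_submitted : null,
    notes: prior ? prior.notes : null,
  };
}
```

In the skill itself this data would come from reading submissions.json off disk; the in-memory shape is all that matters for the check.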
Read scope.md exclusion list and check:
Output:
EXCLUSION CHECK:
- Vuln type excluded: [YES/NO] — [which exclusion rule]
- Exclusion overcome: [YES/NO] — [how: e.g., "has working PoC with data exfil"]
- Safe to submit: [YES/NO]
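One way the exclusion check could work is sketched below. The rule format (keyword lists with an `overridableWithPoC` flag) is an assumption for illustration; the real skill reads free-form exclusion text from scope.md.

```javascript
// Illustrative sketch of the exclusion check. The rule shape (keyword
// substrings plus an overridableWithPoC flag) is assumed, not the skill's
// real scope.md parser.
function matchExclusion(exclusionRules, vulnDescription) {
  const desc = vulnDescription.toLowerCase();
  for (const rule of exclusionRules) {
    if (rule.keywords.some((k) => desc.includes(k))) {
      return rule; // first matching exclusion rule wins
    }
  }
  return null;
}

function exclusionCheck(exclusionRules, vulnDescription, hasWorkingPoC) {
  const rule = matchExclusion(exclusionRules, vulnDescription);
  const overcome = rule !== null && hasWorkingPoC && rule.overridableWithPoC === true;
  return {
    excluded: rule !== null,
    rule: rule ? rule.name : null,
    overcome, // e.g. a working PoC with data exfil can overcome some exclusions
    safeToSubmit: rule === null || overcome,
  };
}
```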
Check against known patterns that programs frequently mark as duplicates:
| Pattern | Check |
|---|---|
| Same root cause, different endpoint | "We consider all instances of [vuln] on [service] as one report" |
| Informational + exploitable | If an informational version was already submitted, submitting the exploitable version may be marked as a dupe |
| Cascading findings | If A causes B causes C, only the root cause gets the bounty |
| Same finding, different severity argument | Re-submitting a downgraded finding with a stronger impact argument is still a dupe |
| Known issues | Check if program has public disclosure or known issues list |
| Recently fixed | If the program recently resolved a similar finding, yours might be the same |
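The first pattern in the table ("same root cause, different endpoint") can be sketched as a grouping key. The normalization rule here (collapse to the host) is an assumption; programs differ in how broadly they group instances.

```javascript
// Hedged sketch of the "same root cause, different endpoint" pattern:
// collapse endpoints to a service-level key so two findings with the same
// vuln class on the same service are flagged as one report. The host-only
// normalization is an assumed heuristic, not a universal program policy.
function rootCauseKey(vulnType, endpoint) {
  const host = endpoint.replace(/^https?:\/\//, "").split("/")[0];
  return `${vulnType}@${host}`;
}

function sameRootCause(a, b) {
  return rootCauseKey(a.vulnType, a.endpoint) === rootCauseKey(b.vulnType, b.endpoint);
}
```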
This layer is now automated via /greyhatcc:hacktivity.
Run the hacktivity-check skill, which uses 3 methods:
site:hackerone.com/reports "program_name" "vulnerability_type"
The hacktivity-check skill returns a dupe risk assessment:
This layer is now automated via the dupes.mjs library, which checks the finding against 24+ patterns of commonly rejected findings:
If a finding matches an ALWAYS_REJECTED pattern, the recommendation is automatically DO NOT SUBMIT unless a chain is identified.
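An ALWAYS_REJECTED filter in the style described above might look like the sketch below. The three patterns shown are illustrative examples only; the actual dupes.mjs library reportedly checks 24+ patterns, and its real names and structure are not shown here.

```javascript
// Illustrative ALWAYS_REJECTED filter. These three patterns are examples,
// not the real dupes.mjs pattern list.
const ALWAYS_REJECTED = [
  /clickjacking/i,
  /missing (spf|dmarc|dkim)/i,
  /self[- ]xss/i,
];

// If a finding matches an always-rejected pattern, recommend against
// submitting unless a chain is identified.
function recommend(findingDescription, hasChain) {
  const rejected = ALWAYS_REJECTED.some((p) => p.test(findingDescription));
  if (rejected && !hasChain) return "DO NOT SUBMIT";
  if (rejected && hasChain) return "CHAIN FIRST";
  return "CONTINUE CHECKS";
}
```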
Use the dedicated MCP tool for the most reliable API-based duplicate detection:
Use: mcp__plugin_greyhatcc_hackerone__h1_dupe_check
Arguments: { handle: "<program>", vuln_type: "<type>", asset: "<asset>" }
This tool:
If the API is not configured, this layer is skipped silently.
After running all checks, output a clear recommendation:
## Dedup Check Result
### Finding: [description]
### Program: [program_name]
| Check | Result | Details |
|-------|--------|---------|
| Local findings log | CLEAR/DUPE | [details] |
| Existing reports | CLEAR/DUPE | [details] |
| Submission history | CLEAR/DUPE | [details] |
| Program exclusions | CLEAR/EXCLUDED | [details] |
| Common dupe patterns | CLEAR/RISK | [details] |
| Hacktivity (if checked) | CLEAR/LIKELY_DUPE | [details] |
### Recommendation: [SUBMIT / DO NOT SUBMIT / CHAIN FIRST / NEEDS MORE EVIDENCE]
Reasoning: [why]
| Recommendation | When |
|---|---|
| SUBMIT | All checks clear, finding is unique, has proof, not excluded |
| DO NOT SUBMIT | Finding is a confirmed dupe or is on the exclusion list with no override |
| CHAIN FIRST | Finding is too low alone or is excluded, but could be chained with another finding to overcome |
| NEEDS MORE EVIDENCE | Finding is unique but lacks deterministic proof (theoretical, no PoC) |
| ASK PROGRAM | Asset is questionably in-scope, or finding type is borderline excluded |
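The recommendation table above can be read as a priority order. Here is one hedged sketch of that decision logic; the field names are hypothetical and the ordering is an interpretation of the table, not the skill's actual code.

```javascript
// Hypothetical sketch deriving the final recommendation from the individual
// check results, in the priority order the table implies: confirmed dupes
// first, then exclusions, scope questions, and evidence quality.
function finalRecommendation(checks) {
  if (checks.confirmedDupe) return "DO NOT SUBMIT";
  if (checks.excluded && !checks.exclusionOvercome) {
    return checks.chainCandidate ? "CHAIN FIRST" : "DO NOT SUBMIT";
  }
  if (checks.scopeUncertain) return "ASK PROGRAM";
  if (!checks.hasDeterministicProof) return "NEEDS MORE EVIDENCE";
  return "SUBMIT";
}
```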
When a report IS submitted, update submissions.json:
{
"submissions": [
{
"id": "S-001",
"finding_id": "F-006",
"h1_report_id": null,
"program": "bumba",
"title": "Back-office Cognito pool exposed via unauth GraphQL",
"asset": "exchange-api.bumba.global",
"vuln_type": "CWE-200",
"severity": "HIGH",
"cvss": 7.5,
"date_submitted": "2026-02-24",
"status": "pending",
"triage_response": null,
"bounty": null,
"report_file": "reports/006_cognito_backoffice_exposure.md",
"notes": ""
}
]
}
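Appending an entry in that shape could be sketched as follows. File I/O is omitted for brevity; in the skill, this object would be read from and written back to submissions.json. The function name and input field names are illustrative.

```javascript
// Hedged sketch of recording a new submission: append an entry matching the
// submissions.json shape shown above. Sequential S-### ids are an assumed
// convention based on the example entry.
function recordSubmission(doc, finding) {
  const subs = doc.submissions || [];
  const nextId = "S-" + String(subs.length + 1).padStart(3, "0");
  subs.push({
    id: nextId,
    finding_id: finding.findingId,
    h1_report_id: null,          // filled in once HackerOne assigns one
    program: finding.program,
    title: finding.title,
    asset: finding.asset,
    vuln_type: finding.vulnType,
    severity: finding.severity,
    cvss: finding.cvss,
    date_submitted: finding.date,
    status: "pending",           // initial state before triage responds
    triage_response: null,
    bounty: null,
    report_file: finding.reportFile,
    notes: "",
  });
  return { ...doc, submissions: subs };
}
```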
Update the status field as the report progresses through triage:
- pending → submitted, waiting for triage
- triaged → accepted by triage team
- duplicate → marked as dupe (note original report ID)
- informational → downgraded to informational
- na → marked Not Applicable
- resolved → vulnerability fixed
- bounty → bounty awarded (note amount)

This skill is called automatically by:
Manual invocation is for when you want to quickly check before investing time in a full report.
osint-researcher-low (haiku) for fast web search

When delegating to agents via Task(), ALWAYS:
After completing this skill:
- tested.json — record what was tested (asset + vuln class)
- gadgets.json — add any informational findings with provides/requires tags for chaining
- findings_log.md — log any confirmed findings with severity