Install the skill:

npx claudepluginhub mukul975/anthropic-cybersecurity-skills --plugin cybersecurity-skills

This skill uses the workspace's default tool permissions.
Use this skill when you have collected raw OSINT data from multiple tools and sources and need to identify connections, contradictions, and patterns across them.
Prerequisites:
- Python 3 with the requests, json, and csv libraries
- Sherlock (pip install sherlock-project)
- theHarvester (pip install theHarvester)
- A running SpiderFoot instance (the passive scan step below assumes its web server on localhost:5001)
- curl and jq for the API steps

Create the working directory for all OSINT outputs:
mkdir -p /tmp/osint
Enumerate usernames across platforms with Sherlock. With --csv, Sherlock writes its results as <username>.csv; --folderoutput points it at the working directory:
sherlock "targetusername" --folderoutput /tmp/osint --csv
Harvest emails, subdomains, and hosts with theHarvester (-f saves XML and JSON copies using the given base name):
theHarvester -d targetdomain.com -b all -f /tmp/osint/harvester-results
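Optionally sanity-check that the export contains the arrays the normalizer below expects (key names can vary across theHarvester releases):
jq '{emails: (.emails | length), hosts: (.hosts | length)}' /tmp/osint/harvester-results.json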
Run a SpiderFoot passive scan through its embedded web server (the endpoint names below follow the open-source SpiderFoot web UI; verify the exact parameters against your version):
curl -s -X POST http://localhost:5001/startscan \
  -H "Accept: application/json" \
  -d "scanname=target-recon" \
  -d "scantarget=targetdomain.com" \
  -d "usecase=passive" \
  -d "modulelist=" \
  -d "typelist="
Export SpiderFoot results when the scan completes, using the scan id from the startscan response:
SCAN_ID="<scanid_from_startscan_response>"
curl -s "http://localhost:5001/scanexportjsonmulti?ids=${SCAN_ID}" \
  -o /tmp/osint/spiderfoot-results.json
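SpiderFoot scans run asynchronously; wait until the scan shows a FINISHED status in the scan list before exporting, or re-run the export until the result set stops growing.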
Query breach databases for email exposure (example with the HIBP v3 API; the account must be URL-encoded, and truncateResponse=false returns full breach details):
curl -s -H "hibp-api-key: ${HIBP_KEY}" \
  -H "User-Agent: OSINT-Correlation-Skill" \
  "https://haveibeenpwned.com/api/v3/breachedaccount/target%40example.com?truncateResponse=false" \
  -o /tmp/osint/breach-results.json
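HIBP returns HTTP 404 when the account appears in no known breaches, so an empty breach-results.json means no exposure was found, not an error.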
Normalize all collected data into a common schema. Create a unified JSON structure that tags each finding with its source, timestamp, and data type:
cat > /tmp/osint/normalize.py << 'EOF'
import json, csv, os
from datetime import datetime

findings = []

# Normalize Sherlock CSV results (with --csv, Sherlock writes <username>.csv)
sherlock_path = "/tmp/osint/targetusername.csv"
if os.path.exists(sherlock_path):
    with open(sherlock_path) as f:
        for row in csv.DictReader(f):
            findings.append({
                "source": "sherlock",
                "type": "social_profile",
                "platform": row.get("name", ""),
                "url": row.get("url_user", ""),
                "username": row.get("username", ""),
                "status": row.get("exists", ""),  # Sherlock's CSV column is "exists"
                "collected_at": datetime.utcnow().isoformat()
            })

# Normalize theHarvester JSON results
harvester_path = "/tmp/osint/harvester-results.json"
if os.path.exists(harvester_path):
    with open(harvester_path) as f:
        data = json.load(f)
    for email in data.get("emails", []):
        findings.append({
            "source": "theHarvester",
            "type": "email",
            "value": email,
            "collected_at": datetime.utcnow().isoformat()
        })
    for host in data.get("hosts", []):
        findings.append({
            "source": "theHarvester",
            "type": "hostname",
            "value": host,
            "collected_at": datetime.utcnow().isoformat()
        })

# Normalize SpiderFoot results (field names vary by SpiderFoot version;
# adjust the "type"/"data"/"module" keys to match your export)
sf_path = "/tmp/osint/spiderfoot-results.json"
if os.path.exists(sf_path):
    with open(sf_path) as f:
        for item in json.load(f):
            findings.append({
                "source": "spiderfoot",
                "type": item.get("type", "unknown"),
                "value": item.get("data", ""),
                "module": item.get("module", ""),
                "collected_at": datetime.utcnow().isoformat()
            })

with open("/tmp/osint/normalized-findings.json", "w") as f:
    json.dump(findings, f, indent=2)
print(f"Normalized {len(findings)} findings from {len(set(f['source'] for f in findings))} sources")
EOF
python3 /tmp/osint/normalize.py
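The normalizer covers Sherlock, theHarvester, and SpiderFoot; the HIBP results can be folded in the same way. A minimal sketch, assuming the HIBP v3 response is a JSON array of breach objects with a "Name" field ("BreachDate" is included when truncateResponse=false):
cat > /tmp/osint/normalize_breaches.py << 'EOF'
import json, os
from datetime import datetime

# Append HIBP breach findings to the normalized data set.
# Assumes breach-results.json is a JSON array of breach objects
# (HIBP v3 format) with "Name" and optionally "BreachDate" fields.
breach_path = "/tmp/osint/breach-results.json"
findings_path = "/tmp/osint/normalized-findings.json"
if os.path.exists(breach_path) and os.path.getsize(breach_path) > 0:
    with open(findings_path) as f:
        findings = json.load(f)
    with open(breach_path) as f:
        breaches = json.load(f)
    for breach in breaches:
        findings.append({
            "source": "hibp",
            "type": "breach_exposure",
            "value": breach.get("Name", ""),
            "breach_date": breach.get("BreachDate", ""),
            "collected_at": datetime.utcnow().isoformat()
        })
    with open(findings_path, "w") as f:
        json.dump(findings, f, indent=2)
    print(f"Added {len(breaches)} breach findings")
EOF
python3 /tmp/osint/normalize_breaches.py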
Send normalized findings to an LLM for cross-source correlation analysis:
cat > /tmp/osint/correlate.py << 'PYEOF'
import json, os
from openai import OpenAI  # or anthropic, ollama, etc.

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

with open("/tmp/osint/normalized-findings.json") as f:
    findings = json.load(f)

correlation_prompt = f"""You are an OSINT analyst. Analyze these findings collected
from multiple sources and produce a correlation report.
For each identity or entity you detect:
1. List all linked accounts/profiles with the evidence connecting them.
2. Assign a confidence score (0.0-1.0) for each linkage based on:
   - Exact username match across platforms (high)
   - Similar usernames with shared metadata (medium)
   - Same email in breach data and registration (high)
   - Co-occurring infrastructure (IP, domain) (medium)
   - Temporal correlation of account creation dates (low-medium)
3. Identify contradictions or potential false positives.
4. Flag high-risk exposures (breached credentials, PII leaks, infrastructure overlaps).
5. Produce a structured JSON report.
Raw findings:
{json.dumps(findings[:500], indent=2)}
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an expert OSINT analyst specializing in identity correlation and link analysis."},
        {"role": "user", "content": correlation_prompt}
    ],
    temperature=0.1,
    response_format={"type": "json_object"}
)

report = json.loads(response.choices[0].message.content)
with open("/tmp/osint/correlation-report.json", "w") as f:
    json.dump(report, f, indent=2)
print(json.dumps(report, indent=2))
PYEOF
python3 /tmp/osint/correlate.py
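Note that findings[:500] caps the prompt size to stay within the model's context window. For larger collections, run the correlation over chunks of findings and merge the per-chunk entity lists; the merge sketch after the next step shows one way to combine them.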
Review the entity resolution results, that is, the deduplicated and merged identities the LLM produced in the correlation report:
cat > /tmp/osint/resolve.py << 'PYEOF'
import json

with open("/tmp/osint/correlation-report.json") as f:
    report = json.load(f)

# Extract the resolved entities and summarize each one
entities = report.get("entities", [])
print(f"Identified {len(entities)} distinct entities")
for entity in entities:
    name = entity.get("identifier", "unknown")
    confidence = entity.get("confidence", 0)
    links = entity.get("linked_accounts", [])
    risk = entity.get("risk_level", "unknown")
    print(f"  [{confidence:.0%}] {name} — {len(links)} linked accounts — risk: {risk}")
PYEOF
python3 /tmp/osint/resolve.py
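The correlation step already groups accounts into entities, but duplicate identities can survive, especially if you correlated a large collection in chunks. A minimal merge sketch; keying on a case-normalized identifier is an assumption, so adapt the rule to your data:
cat > /tmp/osint/merge_entities.py << 'PYEOF'
import json

def merge_entities(entities):
    # Merge entities whose identifiers match after case-normalization
    # (an assumed merge rule, not part of the skill itself).
    merged = {}
    for entity in entities:
        key = entity.get("identifier", "").strip().lower()
        if key in merged:
            existing = merged[key]
            existing["linked_accounts"].extend(entity.get("linked_accounts", []))
            # Keep the stronger confidence when the same identity recurs
            existing["confidence"] = max(existing.get("confidence", 0), entity.get("confidence", 0))
            existing["flags"] = sorted(set(existing.get("flags", [])) | set(entity.get("flags", [])))
        else:
            merged[key] = {**entity, "linked_accounts": list(entity.get("linked_accounts", []))}
    return list(merged.values())

with open("/tmp/osint/correlation-report.json") as f:
    report = json.load(f)
report["entities"] = merge_entities(report.get("entities", []))
with open("/tmp/osint/correlation-report.json", "w") as f:
    json.dump(report, f, indent=2)
print(f"{len(report['entities'])} entities after merge")
PYEOF
python3 /tmp/osint/merge_entities.py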
Generate a final intelligence profile in Markdown:
cat > /tmp/osint/report.py << 'PYEOF'
import json
from datetime import datetime

with open("/tmp/osint/correlation-report.json") as f:
    report = json.load(f)

md = "# OSINT Correlation Report\n\n"
md += f"**Generated:** {datetime.utcnow().isoformat()}Z\n\n"
md += "## Entity Profiles\n\n"
for entity in report.get("entities", []):
    eid = entity.get("identifier", "Unknown")
    conf = entity.get("confidence", 0)
    md += f"### {eid} (Confidence: {conf:.0%})\n\n"
    md += "| Source | Platform | Evidence |\n|--------|----------|----------|\n"
    for link in entity.get("linked_accounts", []):
        md += f"| {link.get('source','')} | {link.get('platform','')} | {link.get('evidence','')} |\n"
    md += f"\n**Risk Level:** {entity.get('risk_level', 'N/A')}\n\n"
    for flag in entity.get("flags", []):
        md += f"- ⚠️ {flag}\n"
    md += "\n"

with open("/tmp/osint/intelligence-profile.md", "w") as f:
    f.write(md)
print("Report written to /tmp/osint/intelligence-profile.md")
PYEOF
python3 /tmp/osint/report.py
Optional — Import correlation graph into Maltego for visualization:
# Export entities as Maltego-compatible CSV for manual import
cat > /tmp/osint/maltego_export.py << 'PYEOF'
import json, csv

with open("/tmp/osint/correlation-report.json") as f:
    report = json.load(f)

with open("/tmp/osint/maltego-import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Entity Type", "Value", "Linked To", "Link Label", "Confidence"])
    for entity in report.get("entities", []):
        for link in entity.get("linked_accounts", []):
            writer.writerow([
                link.get("type", "Alias"),
                link.get("value", ""),
                entity.get("identifier", ""),
                link.get("evidence", ""),
                link.get("confidence", "")
            ])
print("Maltego CSV exported to /tmp/osint/maltego-import.csv")
PYEOF
python3 /tmp/osint/maltego_export.py
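In the Maltego desktop client, the table import wizard (Import Graph from Table, available in commercial editions) can map the Value and Linked To columns onto entities and the Link Label column onto the connecting edge; the exact menu path varies by Maltego version.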

Key concepts used in this workflow:

| Concept | Description |
|---|---|
| Cross-Source Correlation | Matching identifiers (usernames, emails, IPs) across independent OSINT sources to establish entity linkage |
| Confidence Scoring | Assigning probabilistic confidence (0.0–1.0) to each linkage based on evidence strength and corroboration |
| Entity Resolution | Deduplicating and merging records that refer to the same real-world entity across fragmented datasets |
| False Positive Detection | Using AI reasoning to identify coincidental matches versus genuine identity links |
| Multi-Vector Intelligence | Combining findings from social media, DNS, breach data, and infrastructure into a single threat picture |
| Link Analysis | Graph-based examination of relationships between entities, accounts, and infrastructure |
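
As one concrete reading of the Confidence Scoring row above, independent pieces of evidence can be combined with a noisy-OR; the weights here are illustrative assumptions, not values mandated by the skill:
python3 - << 'PYEOF'
# Noisy-OR combination: confidence = 1 - prod(1 - w_i)
# The per-evidence weights below are illustrative assumptions.
weights = {
    "exact_username_match": 0.8,          # high-strength evidence
    "shared_breach_email": 0.85,          # high
    "co_occurring_infrastructure": 0.5,   # medium
}
evidence = ["exact_username_match", "co_occurring_infrastructure"]
confidence = 1.0
for e in evidence:
    confidence *= (1.0 - weights[e])
confidence = 1.0 - confidence
print(f"combined confidence: {confidence:.2f}")  # 1 - (0.2 * 0.5) = 0.90
PYEOF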

Tools used in this workflow:

| Tool | Role in Workflow |
|---|---|
| Sherlock | Username enumeration across 400+ social platforms |
| theHarvester | Email, subdomain, and host discovery from public sources |
| SpiderFoot | Automated OSINT collection across 200+ modules |
| Maltego | Graph-based visualization of entity relationships |
| LLM API (GPT-4, Claude, Ollama) | Cross-source reasoning, pattern detection, and confidence scoring |
| HaveIBeenPwned | Breach exposure and credential leak detection |
The final outputs are a structured JSON correlation report and a Markdown intelligence profile. The JSON report looks like:
{
  "meta": {
    "target": "targetdomain.com",
    "sources_used": ["sherlock", "theHarvester", "spiderfoot", "hibp"],
    "total_findings": 247,
    "generated_at": "2025-01-15T14:30:00Z"
  },
  "entities": [
    {
      "identifier": "john.target",
      "confidence": 0.92,
      "linked_accounts": [
        {
          "source": "sherlock",
          "platform": "GitHub",
          "value": "john.target",
          "evidence": "Exact username match, bio references targetdomain.com",
          "confidence": 0.95
        }
      ],
      "risk_level": "high",
      "flags": [
        "Credentials exposed in 2 breaches (2022, 2023)",
        "Admin email for targetdomain.com found in public WHOIS"
      ]
    }
  ],
  "contradictions": [],
  "recommendations": []
}