Analyzes impact of fixing Orca security alerts: what other alerts/paths close, what production workflows break, post-fix environment. Use for blast radius or remediation consequence queries.
Install: `npx claudepluginhub orcasecurity/orca-skills --plugin orca-skills`

This skill uses the workspace's default tool permissions.
Answers two questions:
Given a single Orca alert, this skill analyzes BOTH sides of remediation impact — the security improvements AND the operational risk of applying the fix.
/orca-impact-analysis <alert-id>
/orca-impact-analysis orca-3380725
Or natural language: "What closes — and what breaks — if I fix orca-3380725?"
Fetch the alert with get_alert and determine the fix action and the affected asset.
Categorize the fix type:
| Fix Type | Examples | Cascade Pattern |
|---|---|---|
| Configuration fix | Enable MFA, restrict access | Closes same-asset alerts that depend on this config |
| Permission fix | Remove role, tighten policy | Breaks attack paths, closes privilege escalation chains |
| Exposure fix | Add auth, block port, restrict IP | Closes exposure alerts, breaks external attack paths |
| Patch/update | Apply CVE patch, upgrade version | Closes all CVE alerts for that package version |
| Removal | Delete unused resource | Closes ALL alerts on that asset |
Use get_asset_related_alerts_summary (with the asset UUID from the alert's Inventory.id) to find all alerts on the same asset.
For each related alert, determine if fixing the original alert would also resolve it:
Direct resolution — the related alert shares the same root cause, so the fix closes it outright.
Indirect improvement — the fix reduces risk but doesn't fully close the related alert.
No impact — an unrelated finding that happens to be on the same asset.
Classification logic:
IF related_alert.alert_type == original_alert.alert_type THEN
"Direct close" — same finding, duplicate
ELSE IF related_alert.remediation overlaps original_alert.remediation THEN
"Direct close" — same fix resolves both
ELSE IF related_alert.risk_findings references same config/permission/exposure THEN
"Risk reduction" — fix reduces exploitability or impact
ELSE IF related_alert is in attack path that includes original alert THEN
"Attack path broken" — fix breaks the chain
ELSE
"No impact" — independent finding
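The branching above can be sketched as a small Python function (a hypothetical sketch — the dict keys `alert_type`, `remediation`, and `risk_findings` mirror the alert fields named here, not a real SDK):

```python
def classify_related_alert(original: dict, related: dict,
                           attack_path_alert_ids: set) -> str:
    """Classify how fixing `original` affects `related` (illustrative alert dicts)."""
    if related["alert_type"] == original["alert_type"]:
        return "Direct close"        # same finding, duplicate
    if set(related["remediation"]) & set(original["remediation"]):
        return "Direct close"        # same fix resolves both
    if set(related["risk_findings"]) & set(original["risk_findings"]):
        return "Risk reduction"      # fix reduces exploitability or impact
    if related["id"] in attack_path_alert_ids:
        return "Attack path broken"  # fix breaks the chain
    return "No impact"               # independent finding
```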
Use get_alert_attack_path_data (with the alert_id) to find attack paths that include this alert.
For each attack path, determine this alert's role: is it a critical node or a contributing factor?
Also use get_asset_related_attack_paths_summary for the asset to get the full picture.
Use get_alerts_with_similar_alert_type (with the alert_id and alert_type) to find the same issue across other assets.
This answers: "If I fix this pattern everywhere (not just this one asset), how many alerts close?"
Group results by account and severity.
From the original alert's RelatedCompliances and RuleId:
- get_control_test_alerts with the RuleId finds all alerts triggered by the same compliance control

Use these tools to get actual compliance scores before and after the fix:
get_enabled_compliance_frameworks — returns all enabled frameworks with their current avg_score_percent and test_results (PASS/FAIL counts). This gives the org-wide baseline.
get_compliance_framework_stats_for_asset — returns per-asset compliance stats for a specific framework. Parameters: framework_id (e.g., "pci_dss_v4.0.1") and group_unique_id (from the alert's GroupUniqueId). Returns:
- score.test_results.PASS / FAIL — test counts for this asset
- score.avg_score_percent — current score for this asset in this framework
- alerts.total_count — number of alerts from this asset in this framework

Calculate the delta: for each framework where this asset has FAIL > 0:
current_score = avg_score_percent
total = PASS + FAIL
after_fix_score = round(((PASS + 1) / total) * 100)
delta = after_fix_score - current_score
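As a sketch, the delta calculation assuming the fix converts exactly one failing test to passing (here the current score is recomputed from PASS/total for self-containment; in practice use the reported avg_score_percent):

```python
def compliance_delta(pass_count: int, fail_count: int) -> dict:
    """Projected score change for one framework if one FAIL flips to PASS."""
    total = pass_count + fail_count
    current = round(pass_count / total * 100)
    after_fix = round((pass_count + 1) / total * 100)
    return {"current": current, "after_fix": after_fix, "delta": after_fix - current}
```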
Present the score change table — ALWAYS show this when data is available:
Framework Current After Fix Change
─────────────────────────────────────────────────────
PCI DSS 4.0.1 99% → 100% +1%
DSPM Best Practices 98% → 100% +2%
SOC 2 94% → 95% +1%
Execution: In Phase 2, call get_enabled_compliance_frameworks to get framework IDs. Then for the top 5-8 most relevant frameworks (from the alert's RelatedCompliances that overlap with enabled frameworks), call get_compliance_framework_stats_for_asset in parallel.
If the compliance tools return no data, fall back to qualitative:
- The frameworks affected (from the alert's RelatedCompliances)
- The environment-wide alert count for the same rule (from get_control_test_alerts)

Always show the number of frameworks affected and the number of alerts that would close for the same rule across the environment — these are always available from the alert data.
This is the critical differentiator. Before the user applies a fix, show them what production workflows, automation, or services might break.
Use these MCP tools to understand what actively depends on the current (insecure) state:
search_cdr_events — Search CloudTrail/audit log events filtered by the affected identity or resource. This shows what the asset is actively DOING in production.
Useful filters:
- actors (the identity being modified)
- targets (the resource being locked down)
- services to see which AWS/Azure/GCP services are in use

get_cdr_events_grouped_by_event_name — Aggregate events by action name. This reveals usage patterns (e.g., regular intervals that indicate automation vs. business-hours activity that indicates humans).
get_aws_effective_permissions_policy_on_asset — For IAM assets, compare current permissions vs. recommended least-privilege policy. The DELTA between current and recommended is exactly what would be removed — and any of those permissions actively used in CloudTrail are what would break.
Linked CloudTrail events — From get_asset_related_alerts_summary, related alerts often embed CloudTrail events with user-agents, source IPs, and action details that reveal who/what is using the asset.
get_linked_entities_data — Get detailed linked entities (e.g., roles assumed by this identity, instances accessed, buckets touched).
| Fix Type | What Breaks | How to Detect |
|---|---|---|
| Enable MFA | Automated processes using the identity (can't respond to MFA prompt). API key access is NOT affected by MFA. | CDR events with service user-agents (boto3, aws-cli, terraform, sdk), API key usage in CloudTrail |
| Remove role/permission | Anything relying on the removed permissions. Services, Lambda functions, CI/CD pipelines. | Compare effective permissions vs. CDR events — any action in CDR that uses a removed permission will fail. |
| Block port / add auth | Clients connecting without auth. Health checks, monitoring agents, partner integrations. | CDR network events, access logs, linked entities showing connections. |
| Patch / upgrade | API breaking changes, dependency incompatibilities, config format changes. | Check release notes for breaking changes. Look at linked services and their version requirements. |
| Delete resource | Everything that references this resource. DNS, load balancers, application configs. | Linked entities count, CloudTrail events targeting this resource, dependent services. |
For each fix type, run the appropriate analysis:
For MFA enablement (identity alerts):
1. Query CDR events for the identity over last 30 days
2. Group by user-agent:
- "boto3", "aws-sdk", "terraform", "jenkins", "github-actions" → AUTOMATION (will break)
- "console.amazonaws.com", "Mozilla", "Chrome" → HUMAN (will get MFA prompt)
- "kali", unknown agents → SUSPICIOUS (should break — that's the point)
3. Group by source IP:
- Known CIDR ranges → internal/VPN (expected)
- Unknown IPs → external (investigate)
4. Group by time pattern:
- Regular intervals (hourly, daily) → CRON/AUTOMATION
- Business hours only → HUMAN
- 24/7 consistent → SERVICE
5. For each automation pattern found:
- Identify the service/workflow
- Assess if it uses access keys (NOT affected by MFA) or console (affected)
- Flag as "WILL BREAK" or "NOT AFFECTED"
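The user-agent bucketing in step 2 might look like this in Python (the agent lists are illustrative, not exhaustive):

```python
AUTOMATION_AGENTS = ("boto3", "aws-sdk", "terraform", "jenkins", "github-actions")
HUMAN_AGENTS = ("console.amazonaws.com", "mozilla", "chrome")

def classify_user_agent(user_agent: str) -> str:
    """Bucket a CDR event's user-agent per the heuristic above."""
    ua = user_agent.lower()
    if any(a in ua for a in AUTOMATION_AGENTS):
        return "AUTOMATION"   # will break if the fix blocks programmatic access
    if any(a in ua for a in HUMAN_AGENTS):
        return "HUMAN"        # will see an MFA prompt / new auth flow
    return "SUSPICIOUS"       # unknown agent — breaking it may be the point
```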
For permission removal (IAM alerts):
1. Get effective permissions policy (current vs recommended)
2. Compute the delta: permissions that would be REMOVED
3. Query CDR events for the identity
4. For each event action, check if it falls within the removed permissions
5. If action is in removed permissions AND has been used recently → "WILL BREAK"
6. If action is in removed permissions but NOT used → "SAFE TO REMOVE"
7. Present the breakdown: which specific permissions are actively used
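Steps 2-6 reduce to set arithmetic — a minimal sketch (permission names are illustrative):

```python
def permission_breakage(current: set, recommended: set, used_actions: set) -> dict:
    """Split removed permissions into will-break vs. safe-to-remove."""
    removed = current - recommended          # what the fix takes away
    return {
        "will_break": sorted(removed & used_actions),      # removed AND recently used
        "safe_to_remove": sorted(removed - used_actions),  # removed but never used
    }
```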
For exposure fixes (network/auth alerts):
1. Query CDR events targeting the resource
2. Identify all clients/IPs that connect
3. Classify: internal monitoring, health checks, user traffic, API consumers
4. For each client type:
- Can it authenticate after the fix? → "NEEDS UPDATE"
- Is it a health check that expects 200? → "WILL FAIL"
- Is it legitimate user traffic? → "WILL GET AUTH PROMPT"
Present findings in a dedicated section:
───────────────────────────────────────────────────────────────────
BREAKAGE RISK — What might stop working
───────────────────────────────────────────────────────────────────
RISK LEVEL: <LOW / MEDIUM / HIGH / CRITICAL>
ACTIVE USAGE DETECTED (last 30 days):
Total events: X
Unique actions: Y
Unique user-agents: Z
AUTOMATION / SERVICES THAT MAY BREAK:
[!] <service/workflow> — <why it breaks>
Evidence: <X events>, user-agent: <agent>, pattern: <hourly/daily>
Mitigation: <how to update the service to work with the fix>
[!] <service/workflow> — <why it breaks>
...
HUMAN WORKFLOWS AFFECTED:
[~] <workflow> — <what changes for humans>
Impact: <MFA prompt / new auth flow / permission denied>
Mitigation: <communicate change, update runbooks>
NOT AFFECTED:
[ok] <service> — <why it's safe>
Reason: <uses access keys (MFA doesn't apply) / doesn't use removed permissions>
SUSPICIOUS ACTIVITY (SHOULD break — that's the goal):
[x] <activity> — <why this is the threat we're fixing>
This is the malicious/risky activity. Breaking it is the POINT.
SAFE DEPLOYMENT CHECKLIST:
[ ] Notify <teams/owners> before applying fix
[ ] Update <automation/service> to handle new auth flow
[ ] Schedule change window: <recommended timing>
[ ] Prepare rollback: <how to revert if critical service fails>
[ ] Monitor after deployment: <what to watch for>
IF automation_events > 100 AND fix_affects_api_access THEN
"CRITICAL" — high-volume automation will break
ELSE IF automation_events > 0 AND fix_affects_api_access THEN
"HIGH" — some automation may break
ELSE IF human_events > 0 AND fix_changes_auth_flow THEN
"MEDIUM" — humans will need to adapt (MFA, new login)
ELSE IF only_suspicious_events THEN
"LOW" — only malicious activity breaks (ideal outcome)
ELSE IF no_events_found THEN
"LOW" — no active usage detected, safe to apply
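A direct Python transcription of these rules:

```python
def breakage_risk_level(automation_events: int, human_events: int, suspicious_events: int,
                        fix_affects_api_access: bool, fix_changes_auth_flow: bool) -> str:
    """Map observed usage onto the breakage risk levels above."""
    if automation_events > 100 and fix_affects_api_access:
        return "CRITICAL"   # high-volume automation will break
    if automation_events > 0 and fix_affects_api_access:
        return "HIGH"       # some automation may break
    if human_events > 0 and fix_changes_auth_flow:
        return "MEDIUM"     # humans will need to adapt (MFA, new login)
    # remaining cases: only suspicious activity breaks, or no usage at all
    return "LOW"
```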
Compile all findings into a single report. The report MUST start with the executive verdict and end with a clear fix/don't-fix recommendation.
═══════════════════════════════════════════════════════════════════
IMPACT ANALYSIS — <alert-id>
<Alert Title from alert data>
"If I <fix action>, what closes — and what breaks?"
═══════════════════════════════════════════════════════════════════
FIX ACTION: <what needs to be done>
ASSET: <asset name> (<asset type>) in <account>
┌─────────────────────────────────────────────────────────────────┐
│ VERDICT: <FIX NOW / FIX WITH CAUTION / PLAN FIX / DEFER> │
│ │
│ Security gain: <HIGH / MEDIUM / LOW> │
│ Breakage risk: <LOW / MEDIUM / HIGH / CRITICAL> │
│ Blast radius: <X alerts, Y attack paths, Z frameworks> │
│ Confidence: <X%> — <basis for confidence> │
│ │
│ <1-2 sentence executive summary — why fix or why defer> │
└─────────────────────────────────────────────────────────────────┘
───────────────────────────────────────────────────────────────────
REMEDIATION IMPACT SUMMARY
───────────────────────────────────────────────────────────────────
Alerts directly closed: X (including this one)
Risk reduction: X additional alerts improved
Attack paths broken: X
Compliance frameworks: X frameworks improved
Same issue elsewhere: X alerts across Y accounts
Total risk reduction: X critical, Y high, Z medium alerts
Breakage risk: LOW / MEDIUM / HIGH / CRITICAL
───────────────────────────────────────────────────────────────────
ALERTS THAT CLOSE WITH THIS FIX
───────────────────────────────────────────────────────────────────
DIRECT CLOSE (same root cause):
[x] <alert-id> — <title> (score: X.X)
[x] <alert-id> — <title> (score: X.X)
... reason: <why this closes>
RISK REDUCTION (partial improvement):
[~] <alert-id> — <title> (score: X.X)
... reason: <what improves and what remains>
───────────────────────────────────────────────────────────────────
ATTACK PATHS BROKEN
───────────────────────────────────────────────────────────────────
[x] Attack Path #1 (score: X.X)
Story: <attack path narrative>
Role of this alert: <critical node / contributing factor>
[x] Attack Path #2 (score: X.X)
...
───────────────────────────────────────────────────────────────────
COMPLIANCE IMPACT
───────────────────────────────────────────────────────────────────
Frameworks affected: X (list top 5-8 by name)
Rule: <rule-id> — <X alerts> across <Y accounts> would close
COMPLIANCE SCORE CHANGE (if data available):
Framework Current After Fix Change
─────────────────────────────────────────────────────
PCI DSS 4.0.1 87% → 89% +2%
NIST 800-53 91% → 93% +2%
SOC 2 94% → 95% +1%
(If exact scores unavailable, show qualitative impact:)
Resolves violations in X frameworks across Y environment-wide alerts.
───────────────────────────────────────────────────────────────────
SAME ISSUE ACROSS ENVIRONMENT
───────────────────────────────────────────────────────────────────
<alert-type> found on X other assets:
• <account-1>: Y assets (Z critical)
• <account-2>: Y assets (Z critical)
If fixed everywhere: X total alerts closed
───────────────────────────────────────────────────────────────────
BREAKAGE RISK — What might stop working
───────────────────────────────────────────────────────────────────
RISK LEVEL: <LOW / MEDIUM / HIGH / CRITICAL>
ACTIVE USAGE (last 30 days):
Total events: X | Unique actions: Y | User-agents: Z
AUTOMATION THAT MAY BREAK:
[!] <service> — <why> (X events, user-agent: <agent>, pattern: <freq>)
Mitigation: <how to fix>
HUMAN WORKFLOWS AFFECTED:
[~] <workflow> — <impact> | Mitigation: <action>
SUSPICIOUS ACTIVITY (SHOULD break):
[x] <activity> — this is what we're fixing
SAFE DEPLOYMENT CHECKLIST:
[ ] Notify affected teams
[ ] Update automation to handle new auth
[ ] Schedule change window
[ ] Prepare rollback procedure
[ ] Monitor post-deployment
═══════════════════════════════════════════════════════════════════
BOTTOM LINE
═══════════════════════════════════════════════════════════════════
<VERDICT repeated>: <FIX NOW / FIX WITH CAUTION / PLAN FIX / DEFER>
Security gain: <summary of what closes and what improves>
Breakage risk: <summary of what could break>
Net assessment: <clear 1-2 sentence recommendation>
<If FIX NOW>: This is a high-leverage, low-risk fix. Apply immediately.
<If CAUTION>: High security value but some production risk. Follow the
safe deployment checklist above.
<If PLAN FIX>: Worth fixing but needs coordination. Schedule with the
affected teams.
<If DEFER>: Low security gain or high breakage risk. Other fixes
have better ROI. Revisit in <timeframe>.
═══════════════════════════════════════════════════════════════════
| Tool | Purpose | Parameter |
|---|---|---|
| get_alert | Understand the alert and fix action | alert_id |
| get_asset_related_alerts_summary | All alerts on same asset | asset_id (UUID from Inventory.id) |
| get_alert_attack_path_data | Attack paths through this alert | alert_id |
| get_alerts_with_similar_alert_type | Same issue across environment | alert_id, alert_type |
| Tool | Purpose | Parameter |
|---|---|---|
| search_cdr_events | CloudTrail/audit events for the identity or resource | See CDR parameter reference below |
| get_cdr_events_grouped_by_event_name | Aggregate usage patterns by action | See CDR parameter reference below |
| get_aws_effective_permissions_policy_on_asset | Current vs. recommended permissions | asset_arn (ARN string) |
| Tool | Purpose | Parameter |
|---|---|---|
| get_asset_related_attack_paths_summary | All attack paths on the asset | asset_id (UUID) |
| get_control_test_alerts | Compliance control violations | rule_id (from alert's RuleId) |
| get_linked_entities_mapping | Connected entities count | asset_id, model_name |
| get_linked_entities_data | Detailed linked entity data | asset_id, linked_entity (object) |
| get_asset_crown_jewel_info | Crown jewel status | group_unique_id |
| discovery_search | Flexible search for related patterns | search_phrase |
| get_enabled_compliance_frameworks | All enabled frameworks with current scores | (no params) |
| get_compliance_framework_stats_for_asset | Per-asset compliance stats for a framework | framework_id (e.g., "pci_dss_v4.0.1"), group_unique_id |
Both search_cdr_events and get_cdr_events_grouped_by_event_name share these parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| time_range | enum string | Yes | "last_1_hour", "last_24_hours", "last_3_days", "last_7_days", "last_30_days" |
| actors | string array | No | Filter by actor ARNs (e.g., ["arn:aws:iam::123:root"]) |
| targets | string array | No | Filter by target resources (e.g., ["arn:aws:s3:::bucket"]) |
| services | string array | No | Filter by service (e.g., ["iam.amazonaws.com"]) |
| accounts | string array | No | Filter by cloud account IDs (e.g., ["506464807365"]) |
| actions | string array | No | Filter by event name (e.g., ["CreateUser", "ConsoleLogin"]) |
| source_ip_addresses | string array | No | Filter by source IPs |
| countries | string array | No | Filter by country names |
| log_types | string array | No | Filter by log type (["CloudTrail", "AzureActivityLog", "GCPAuditLog"]) |
| cloud_providers | string array | No | Filter by cloud provider (["aws", "azure", "gcp"]) |
| end_time | ISO 8601 string | No | End of time window (time_range counts backwards from this) |
| limit | integer (1-100) | No | Max events to return (default: 20, search_cdr_events only) |
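For example, an argument payload for search_cdr_events that respects these constraints (all values illustrative) might be:

```json
{
  "time_range": "last_30_days",
  "actors": ["arn:aws:iam::506464807365:root"],
  "accounts": ["506464807365"],
  "actions": ["ConsoleLogin"],
  "log_types": ["CloudTrail"],
  "limit": 100
}
```

Note that actors is an array even for a single ARN, and time_range is the exact enum string, not "30d".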
CDR event response structure (each event):
{
"eventid": "...",
"eventname": "PutObject",
"eventtimestamp": "2026-04-17T11:39:53",
"eventtype": "AwsApiCall",
"account": "506464807365",
"actor": "s3.amazonaws.com",
"target": "arn:aws:s3:::mybucket",
"sourceipaddress": "10.0.0.1",
"cloud_provider": "aws",
"service": "s3.amazonaws.com",
"log_type": "CloudTrail",
"action_type": "put",
"country_short": "US",
"assumed_role_user": "n/a"
}
Grouped response structure (each group):
{ "eventname": "PutObject", "cou": 4762000 }
Known CDR gotchas:
- time_range does NOT accept freeform strings like "30d" — must use the exact enum values
- Array parameters must be arrays (["root"], not "root")
- The actors filter matches the actor field in events — for root accounts this is the full ARN
- If actors returns 0 results, try the accounts filter instead (some events don't have actor-level granularity)
- CloudTrail events embedded in related alerts (get_asset_related_alerts_summary) are often richer than CDR module events — check both sources
- asset_id for asset tools requires UUID format (e.g., c46cb523-3c5d-...), found in the alert's Inventory.id field
- The asset_unique_id format (e.g., AwsUser_506464807365_...) does NOT work with most asset tools
- group_unique_id for the crown jewel check uses the GroupUniqueId field from alert data
- get_alerts_with_similar_alert_type requires both alert_id and alert_type (the alert type string)

If Orca MCP servers from .mcp.json are not loaded in the session, fall back to direct HTTP calls:
- Endpoint: https://api.orcasecurity.io/mcp
- Send Accept: application/json, text/event-stream
- Reuse the auth headers from .mcp.json
- Parse the data: line from the SSE response

Call tools in parallel where possible to minimize latency:
Phase 1 (parallel):
get_alert(alert_id)
Phase 2 (parallel, after Phase 1 — need asset UUID, alert_type, ARN):
# Security improvement queries
get_asset_related_alerts_summary(asset_uuid)
get_alert_attack_path_data(alert_id)
get_alerts_with_similar_alert_type(alert_id, alert_type)
get_asset_related_attack_paths_summary(asset_uuid)
get_control_test_alerts(rule_id) — if RuleId exists
get_asset_crown_jewel_info(group_unique_id) — if GroupUniqueId exists
# Breakage simulation queries (use correct enum values for time_range!)
search_cdr_events(actors=[asset_arn], accounts=[account_id], time_range="last_30_days", limit=100)
get_cdr_events_grouped_by_event_name(actors=[asset_arn], accounts=[account_id], time_range="last_30_days")
search_cdr_events(services=["iam.amazonaws.com"], accounts=[account_id], time_range="last_30_days", limit=50) — for IAM assets
get_aws_effective_permissions_policy_on_asset(asset_arn) — if AWS IAM asset
get_linked_entities_mapping(asset_id, model_name)
# NOTE: If actors filter returns 0 results, fall back to accounts filter.
# Some identities (especially root) may not appear as actors in CDR.
# Also check CloudTrail events embedded in related alerts from Phase 2.
Phase 3 (synthesis):
Classify each related alert (direct close / risk reduction / no impact)
Count attack paths broken
Summarize compliance impact
Analyze CDR events for automation vs human vs suspicious patterns
Compare effective permissions delta with actual usage
Calculate breakage risk level
Generate priority recommendation (balancing security gain vs breakage risk)
Two alerts share the same root cause if ANY of these match:
An alert is a critical node in an attack path if:
An alert is a contributing factor if:
The verdict is the single most important output — it tells the user what to do. Calculate it from the intersection of security gain and breakage risk.
Start: 0
+ Alert severity (critical=3, high=2, medium=1, low=0.5)
+ Alerts directly closed beyond this one (+1 per alert, max +3)
+ Attack paths broken (+1.5 per path, max +3)
+ Compliance frameworks affected (1-5 → +0.5, 6-20 → +1, 21+ → +2)
+ Environment-wide alerts for same rule (1-5 → +0.5, 6+ → +1)
+ Crown jewel asset (+1)
Cap at: 10
Start: 0
+ Automation events that would break (+3 if >100, +2 if >10, +1 if >0)
+ Human workflows disrupted (+1 per distinct workflow, max +3)
+ Fix affects API/programmatic access (+2)
+ Fix requires coordinated change across teams (+2)
+ No rollback path available (+2)
- Only suspicious activity breaks (-2, ideal outcome)
- No active usage detected (-1)
Floor at: 0, Cap at: 10
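Both rubrics are mechanical; a non-authoritative transcription of the point values above:

```python
def security_gain_score(severity: str, extra_alerts_closed: int, attack_paths_broken: int,
                        frameworks_affected: int, same_rule_alerts: int,
                        crown_jewel: bool) -> float:
    score = {"critical": 3, "high": 2, "medium": 1, "low": 0.5}[severity]
    score += min(extra_alerts_closed * 1.0, 3)   # +1 per extra alert closed, max +3
    score += min(attack_paths_broken * 1.5, 3)   # +1.5 per path broken, max +3
    score += 2 if frameworks_affected >= 21 else 1 if frameworks_affected >= 6 \
        else 0.5 if frameworks_affected >= 1 else 0
    score += 1 if same_rule_alerts >= 6 else 0.5 if same_rule_alerts >= 1 else 0
    if crown_jewel:
        score += 1
    return min(score, 10)

def breakage_risk_score(automation_events: int, human_workflows: int, affects_api: bool,
                        needs_coordination: bool, no_rollback: bool,
                        only_suspicious: bool, no_usage: bool) -> float:
    score = 3 if automation_events > 100 else 2 if automation_events > 10 \
        else 1 if automation_events > 0 else 0
    score += min(human_workflows, 3)                 # +1 per distinct workflow, max +3
    score += 2 * affects_api + 2 * needs_coordination + 2 * no_rollback
    score -= 2 * only_suspicious + 1 * no_usage      # ideal-outcome discounts
    return max(0, min(score, 10))
```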
| Security Gain ↓ / Breakage Risk → | LOW (0-3) | MEDIUM (4-6) | HIGH (7-10) |
|---|---|---|---|
| HIGH (7-10) | FIX NOW | FIX WITH CAUTION | FIX WITH CAUTION |
| MED (4-6) | FIX NOW | PLAN FIX | PLAN FIX |
| LOW (0-3) | FIX NOW (easy win) | DEFER | DEFER |
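The matrix can be encoded as a lookup (one possible encoding; band thresholds follow the 0-3 / 4-6 / 7-10 ranges above):

```python
def band(score: float) -> str:
    """Map a 0-10 score onto the matrix bands."""
    return "LOW" if score <= 3 else "MEDIUM" if score <= 6 else "HIGH"

VERDICTS = {  # (security gain band, breakage risk band) -> verdict
    ("HIGH", "LOW"): "FIX NOW",
    ("HIGH", "MEDIUM"): "FIX WITH CAUTION",
    ("HIGH", "HIGH"): "FIX WITH CAUTION",
    ("MEDIUM", "LOW"): "FIX NOW",
    ("MEDIUM", "MEDIUM"): "PLAN FIX",
    ("MEDIUM", "HIGH"): "PLAN FIX",
    ("LOW", "LOW"): "FIX NOW",   # easy win
    ("LOW", "MEDIUM"): "DEFER",
    ("LOW", "HIGH"): "DEFER",
}

def verdict(security_gain: float, breakage_risk: float) -> str:
    return VERDICTS[(band(security_gain), band(breakage_risk))]
```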
Verdict descriptions:
Confidence level (shown in verdict box):
90-100%: All data sources returned results, clear classification
70-89%: Most data available, minor ambiguity in breakage assessment
50-69%: CDR data sparse or effective permissions unavailable
30-49%: Limited data, verdict is best-effort
IF verdict == "FIX NOW" AND security_gain >= 7 THEN
"HIGH-LEVERAGE FIX — this is the type of fix security teams dream about"
ELSE IF verdict == "FIX NOW" AND security_gain < 4 THEN
"EASY WIN — low effort, low risk, just do it"
ELSE IF verdict == "FIX WITH CAUTION" THEN
"WORTH IT — follow the deployment checklist and monitor after"
ELSE IF verdict == "PLAN FIX" AND same_issue_count >= 10 THEN
"SYSTEMIC ISSUE — fix the pattern, not just this instance"
ELSE IF verdict == "DEFER" THEN
"DEPRIORITIZE — better ROI elsewhere. Revisit in <timeframe>"
IF crown_jewel THEN
Append: "Crown jewel asset — elevate priority regardless of cascade count"
If an alert has no related alerts, attack paths, or compliance data:
═══════════════════════════════════════════════════════════════════
IMPACT ANALYSIS — <alert-id>
═══════════════════════════════════════════════════════════════════
FIX ACTION: <what needs to be done>
REMEDIATION IMPACT: TARGETED FIX
This alert is an isolated finding — fixing it closes this single alert.
No related alerts, attack paths, or compliance controls are directly impacted.
Still worth fixing: <severity> alert with Orca Score <score>
═══════════════════════════════════════════════════════════════════
This skill complements the orca-alert-triage skill:
Users can chain them: triage first to understand the alert, then impact analysis to justify prioritization.