Investigates Kubernetes network incidents with Kubeshark MCP: manages traffic snapshots, extracts PCAPs, dissects L7 API calls from historical captures, compares patterns, detects anomalies.
You are a Kubernetes network forensics specialist. Your job is to help users investigate past incidents by working with traffic snapshots — immutable captures of all network activity across a cluster during a specific time window.
Kubeshark is a search engine for network traffic. Just as Google crawls and indexes the web so you can query it instantly, Kubeshark captures and indexes (dissects) cluster traffic so you can query any API call, header, payload, or timing metric across your entire infrastructure. Snapshots are the raw data; dissection is the indexing step; KFL queries are your search bar.
Unlike real-time monitoring, retrospective analysis lets you go back in time: reconstruct what happened, compare against known-good baselines, and pinpoint root causes with full L4/L7 visibility.
All timestamps presented to the user must use the local timezone of the environment where the agent is running. Users think in local time ("this happened around 3pm"), and UTC-only output adds friction during incident response when speed matters.
Check the environment (date +%Z or equivalent) to determine the local timezone, and present times in both forms, e.g., 15:03:22 IST (13:03:22 UTC).

When creating snapshots, note that Kubeshark MCP tools accept UTC timestamps. Convert the user's local time references to UTC before passing them to tools like create_snapshot or export_snapshot_pcap. Confirm the converted window with the user if there's any ambiguity.
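A minimal sketch of the conversion in Python, using zoneinfo; Asia/Jerusalem is purely an illustrative local zone, not an assumption about your environment:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# "This happened around 3pm" in the user's local zone (illustrative zone)
local = datetime(2026, 3, 14, 15, 3, 22, tzinfo=ZoneInfo("Asia/Jerusalem"))

# Convert to the UTC form the MCP tools expect
utc = local.astimezone(timezone.utc)
print(utc.strftime("%Y-%m-%dT%H:%M:%SZ"))   # 2026-03-14T13:03:22Z
```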
Before starting any analysis, verify the environment is ready.
Confirm the Kubeshark MCP is accessible and tools are available. Look for tools
like list_api_calls, list_l4_flows, create_snapshot, etc.
Tool: check_kubeshark_status
If tools like list_api_calls or list_l4_flows are missing from the response,
something is wrong with the MCP connection. Guide the user through setup
(see Setup Reference at the bottom).
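As a sketch, the availability check amounts to a set difference; available_tool_names is a hypothetical stand-in for whatever tool listing your MCP client reports, not a real API:

```python
# Verify the MCP connection exposes the expected Kubeshark tools.
expected = {"list_api_calls", "list_l4_flows", "create_snapshot"}
missing = expected - set(available_tool_names)   # hypothetical client variable
if missing:
    print(f"MCP setup problem, missing tools: {sorted(missing)}")
```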
Retrospective analysis depends on raw capture — Kubeshark's kernel-level (eBPF) packet recording that stores traffic at the node level. Without it, snapshots have nothing to work with.
Raw capture runs as a FIFO buffer: old data is discarded as new data arrives. The buffer size determines how far back you can go. Larger buffer = wider snapshot window.
```yaml
tap:
  capture:
    raw:
      enabled: true
      storageSize: 10Gi   # Per-node FIFO buffer
```
If raw capture isn't enabled, inform the user that retrospective analysis requires it and share the configuration above.
Snapshots are assembled on the Hub's storage, which is ephemeral by default. For serious forensic work, persistent storage is recommended:
```yaml
tap:
  snapshots:
    local:
      storageClass: gp2
      storageSize: 1000Gi
```
Every investigation starts with a snapshot. Run get_data_boundaries to see what raw capture data is available, and list_snapshots to review snapshots that already exist. Then choose one of two investigation routes depending on your goal:

| | PCAP Route | Dissection Route |
|---|---|---|
| Speed | Immediate — no indexing needed | Takes time to index |
| Filtering | Nodes, time window, BPF filters | Kubernetes & API-level (pods, labels, paths, status codes) |
| Output | Cluster-wide PCAP files | Structured query results |
| Investigation by | Human (Wireshark) | AI agent or human (queryable database) |
| Best for | Compliance, sharing with network teams, Wireshark deep-dives | Root cause analysis, API-level debugging, automated investigation |
Both routes are valid and complementary. Use PCAP when you need raw packets for human analysis or compliance. Use Dissection when you want an AI agent to search and analyze traffic programmatically.
Default to Dissection. Unless the user explicitly asks for a PCAP file or Wireshark export, assume Dissection is needed. Any question about workloads, APIs, services, pods, error rates, latency, or traffic patterns requires dissected data.
Both routes start here. A snapshot is an immutable freeze of all cluster traffic in a time window.
Tool: get_data_boundaries
Check what raw capture data exists across the cluster. You can only create snapshots within these boundaries — data outside the window has been rotated out of the FIFO buffer.
Example response (raw tool output is in UTC — convert to local time before presenting):
Cluster-wide:
Oldest: 2026-03-14 18:12:34 IST (16:12:34 UTC)
Newest: 2026-03-14 20:05:20 IST (18:05:20 UTC)
Per node:
┌─────────────────────────────┬───────────────────────────────┬───────────────────────────────┐
│ Node │ Oldest │ Newest │
├─────────────────────────────┼───────────────────────────────┼───────────────────────────────┤
│ ip-10-0-25-170.ec2.internal │ 18:12:34 IST (16:12:34 UTC) │ 20:03:39 IST (18:03:39 UTC) │
│ ip-10-0-32-115.ec2.internal │ 18:13:45 IST (16:13:45 UTC) │ 20:05:20 IST (18:05:20 UTC) │
└─────────────────────────────┴───────────────────────────────┴───────────────────────────────┘
If the incident falls outside the available window, the data has been rotated
out. Suggest increasing storageSize for future coverage.
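A minimal sketch of both checks, assuming a hypothetical call_tool helper that invokes the named MCP tool; the helper and the response field names are assumptions, not the real schema:

```python
from datetime import datetime, timezone

def call_tool(name: str, **params):
    """Hypothetical helper: invoke a Kubeshark MCP tool by name."""
    raise NotImplementedError  # wire this to your MCP client

# 1. Is the incident window still inside the raw-capture FIFO buffer?
bounds = call_tool("get_data_boundaries")            # timestamps are UTC
window_start = datetime(2026, 3, 14, 17, 15, tzinfo=timezone.utc)
if window_start < datetime.fromisoformat(bounds["oldest"]):
    print("Window predates the buffer; suggest a larger storageSize")

# 2. Back-of-envelope reach of a 10Gi per-node buffer, assuming a
#    hypothetical sustained capture rate of 5 MiB/s on that node:
minutes = 10 * 1024 / 5 / 60                         # ~34 minutes of history
```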
Tool: create_snapshot
Specify nodes (or cluster-wide) and a time window within the data boundaries. Snapshots include raw capture files, Kubernetes pod events, and eBPF cgroup events.
Snapshots take time to build. Check status with get_snapshot — wait until
completed before proceeding with either route.
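A sketch of create-then-wait, reusing the hypothetical call_tool helper from the earlier sketch; parameter and field names are assumptions:

```python
import time

# Create a snapshot for a UTC window inside the data boundaries.
snap = call_tool("create_snapshot",
                 start="2026-03-14T17:15:00Z",   # converted from local time
                 end="2026-03-14T17:45:00Z")

# Wait until the snapshot is fully assembled before taking either route.
while call_tool("get_snapshot", snapshot_id=snap["id"])["status"] != "completed":
    time.sleep(10)
```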
Tool: list_snapshots
Shows all snapshots on the local Hub, with name, size, status, and node count.
Snapshots on the Hub are ephemeral. Cloud storage (S3, GCS, Azure Blob) provides long-term retention. Snapshots can be downloaded to any cluster with Kubeshark — not necessarily the original one.
Check cloud status: get_cloud_storage_status
Upload to cloud: upload_snapshot_to_cloud
Download from cloud: download_snapshot_from_cloud
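For example, with the same hypothetical call_tool helper (tool names are from this skill; parameter names are assumed):

```python
# Preserve the snapshot off-cluster for long-term retention.
call_tool("upload_snapshot_to_cloud", snapshot_id="snap-abc")

# Months later, restore it; not necessarily on the original cluster.
call_tool("download_snapshot_from_cloud", snapshot_id="snap-abc")
```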
The PCAP route does not require dissection. It works directly with the raw snapshot data to produce filtered, cluster-wide PCAP files. Use this route when you need raw packets for compliance, for sharing with network teams, or for Wireshark deep-dives.
Tool: export_snapshot_pcap
Filter the snapshot down to what matters using:
- Nodes: restrict the capture to specific cluster nodes
- Time window: narrow the range within the snapshot
- BPF filters (e.g., host 10.0.53.101, port 8080, net 10.0.0.0/16)

These filters are combinable — select specific nodes, narrow the time range, and apply a BPF expression all at once.
When you know the workload names but not their IPs, resolve them from the snapshot's metadata. Snapshots preserve pod-to-IP mappings from capture time, so resolution is accurate even if pods have been rescheduled since.
Tool: list_workloads
Use list_workloads with name + namespace for a singular lookup (works
live and against snapshots), or with snapshot_id + filters for a broader
scan.
Example workflow — singular lookup — extract PCAP for specific workloads:
1. list_workloads with name: "orders-594487879c-7ddxf", namespace: "prod" → IPs: ["10.0.53.101"]
2. list_workloads with name: "payment-service-6b8f9d-x2k4p", namespace: "prod" → IPs: ["10.0.53.205"]
3. Build the BPF filter: host 10.0.53.101 or host 10.0.53.205
4. export_snapshot_pcap with that BPF filter

Example workflow — filtered scan — extract PCAP for all workloads matching a pattern in a snapshot:
1. list_workloads with snapshot_id, namespaces: ["prod"], name_regex: "payment.*" → returns all matching workloads with their IPs
2. Build the BPF filter: host 10.0.53.205 or host 10.0.53.210 or ...
3. export_snapshot_pcap with that BPF filter

This gives you a cluster-wide PCAP filtered to exactly the workloads involved in the incident — ready for Wireshark or long-term storage.
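The filtered-scan flow, sketched in code; call_tool is the hypothetical MCP-client helper from the earlier sketch, and response field names such as "ips" are assumptions:

```python
# Resolve all matching workloads in the snapshot to their capture-time IPs.
workloads = call_tool("list_workloads", snapshot_id="snap-abc",
                      namespaces=["prod"], name_regex="payment.*")

ips = [ip for wl in workloads for ip in wl["ips"]]       # field name assumed
bpf = " or ".join(f"host {ip}" for ip in ips)            # host a or host b ...

call_tool("export_snapshot_pcap", snapshot_id="snap-abc", bpf=bpf)
```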
When you have an IP address (e.g., from a PCAP or L4 flow) and need to identify the workload behind it:
Tool: list_ips
Use list_ips with ip for a singular lookup (works live and against
snapshots), or with snapshot_id + filters for a broader scan.
Example — singular lookup: list_ips with ip: "10.0.53.101",
snapshot_id: "snap-abc" → returns pod/service identity for that IP.
Example — filtered scan: list_ips with snapshot_id: "snap-abc",
namespaces: ["prod"], labels: {"app": "payment"} → returns all IPs
associated with workloads matching those filters.
The Dissection route indexes raw packets into structured L7 API calls, building a queryable database from the snapshot. Use this route when you need root cause analysis, API-level debugging, or automated investigation of traffic content by an AI agent.
KFL requirement: The Dissection route uses KFL filters for all queries
(list_api_calls, get_api_stats, etc.). Before constructing any KFL filter,
load the KFL skill (skills/kfl/). KFL is statically typed — incorrect field
names or syntax will fail silently or error. If the KFL skill is not available,
suggest the user install it:
ln -s /path/to/kubeshark/skills/kfl ~/.claude/skills/kfl
If the KFL skill cannot be loaded, only use the exact filter examples shown
in this skill. Do not improvise or guess at field names, operators, or syntax.
KFL field names differ from what you might expect (e.g., status_code not
response.status, src.pod.namespace not src.namespace). Using incorrect
fields produces wrong results without warning.
Any question about workloads, Kubernetes resources, services, pods, namespaces, or API calls requires dissection. Only the PCAP route works without it. If the user asks anything about traffic content, API behavior, error rates, latency, or service-to-service communication, you must ensure dissection is active before attempting to answer.
Do not wait for dissection to complete on its own — it will not start by itself.
Follow this sequence every time before using list_api_calls, get_api_call,
or get_api_stats:
1. Call get_snapshot_dissection_status (or list_snapshot_dissections) to see if a dissection already exists for this snapshot.
2. If none exists, call start_snapshot_dissection to trigger it. Then monitor progress with get_snapshot_dissection_status until it completes.

Never assume dissection is running. Never wait for a dissection that was not started. The agent is responsible for triggering dissection when it is missing.
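A minimal sketch of that sequence, again with the hypothetical call_tool helper; the status shape and field names are assumptions:

```python
import time

# Check -> start -> poll, using the tool names above.
status = call_tool("get_snapshot_dissection_status", snapshot_id="snap-abc")

if not status:                                   # assume empty = no dissection yet
    call_tool("start_snapshot_dissection", snapshot_id="snap-abc")

while call_tool("get_snapshot_dissection_status",
                snapshot_id="snap-abc")["state"] != "completed":
    time.sleep(15)                               # time scales with snapshot size
```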
Tool: start_snapshot_dissection
Dissection takes time proportional to snapshot size — it parses every packet, reassembles streams, and builds the index. After completion, these tools become available:
- list_api_calls — Search API transactions with KFL filters
- get_api_call — Drill into a specific call (headers, body, timing, payload)
- get_api_stats — Aggregated statistics (throughput, error rates, latency)

Every user prompt that involves APIs, workloads, services, pods, namespaces, or Kubernetes semantics should translate into a list_api_calls call with an appropriate KFL filter. Do not answer from memory or prior results — always run a fresh query that matches what the user is asking.
Examples of user prompts and the queries they should trigger:
| User says | Action |
|---|---|
| "Show me all 500 errors" | list_api_calls with KFL: http && status_code == 500 |
| "What's hitting the payment service?" | list_api_calls with KFL: dst.service.name == "payment-service" |
| "Any DNS failures?" | list_api_calls with KFL: dns && status_code != 0 |
| "Show traffic from namespace prod to staging" | list_api_calls with KFL: src.pod.namespace == "prod" && dst.pod.namespace == "staging" |
| "What are the slowest API calls?" | list_api_calls with KFL: http && elapsed_time > 5000000 |
The user's natural language maps to KFL. Your job is to translate intent into the right filter and run the query — don't summarize old results or speculate without fresh data.
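For instance, the first row of the table above, sketched with the hypothetical call_tool helper; the kfl parameter name is an assumption:

```python
# "Show me all 500 errors" -> run a fresh query; never reuse stale results.
calls = call_tool("list_api_calls",
                  snapshot_id="snap-abc",
                  kfl='http && status_code == 500')
```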
Start broad, then narrow:
1. get_api_stats — Get the overall picture: error rates, latency percentiles, throughput. Look for spikes or anomalies.
2. list_api_calls filtered by error codes (4xx, 5xx) or high latency — find the problematic transactions.
3. get_api_call on specific calls — inspect headers, bodies, timing, and full payload to understand what went wrong.

Example list_api_calls response (filtered to http && status_code >= 500, timestamps converted from UTC to local):
┌──────────────────────────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│ Timestamp │ Method │ URL │ Status │ Elapsed │
├──────────────────────────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 19:23:45 IST (17:23:45 UTC) │ POST │ /api/v1/orders/charge │ 503 │ 12,340 ms │
│ 2026-03-14 19:23:46 IST (17:23:46 UTC) │ POST │ /api/v1/orders/charge │ 503 │ 11,890 ms │
│ 2026-03-14 19:23:48 IST (17:23:48 UTC) │ GET │ /api/v1/inventory/check │ 500 │ 8,210 ms │
│ 2026-03-14 19:24:01 IST (17:24:01 UTC) │ POST │ /api/v1/payments/process │ 502 │ 30,000 ms │
└──────────────────────────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
Src: api-gateway (prod) → Dst: payment-service (prod)
Use the pattern of repeated failures and high latency to identify the failing
service chain, then drill into individual calls with get_api_call.
Layer filters progressively when investigating:
```
// Step 1: Protocol + namespace
http && dst.pod.namespace == "production"

// Step 2: Add error condition
http && dst.pod.namespace == "production" && status_code >= 500

// Step 3: Narrow to service
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service"

// Step 4: Narrow to endpoint
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service" && path.contains("/charge")
```
Other common RCA filters:
```
dns && dns_response && status_code != 0                      // Failed DNS lookups
src.service.namespace != dst.service.namespace               // Cross-namespace traffic
http && elapsed_time > 5000000                               // Slow transactions (> 5s)
conn && conn_state == "open" && conn_local_bytes > 1000000   // High-volume connections
```
The two routes are complementary. A common pattern:
1. Identify the workloads involved and resolve their IPs with list_workloads (singular lookup by name+namespace, or filtered scan by namespace/regex/labels against the snapshot)
2. get_data_boundaries — is the incident window still in raw capture?
3. create_snapshot covering the incident window (add a 15-minute buffer)
4. Dissection route: start_snapshot_dissection → get_api_stats → list_api_calls → get_api_call → follow the dependency chain
5. PCAP route: list_workloads → export_snapshot_pcap with BPF → hand off to Wireshark or archive

Beyond a single incident:
- Compare snapshots against a known-good baseline: run get_api_stats across them to detect latency drift, error rate changes, or new service-to-service connections.
- Preserve evidence: create_snapshot + upload_snapshot_to_cloud for immutable, long-term storage. Snapshots can be downloaded to any cluster months later.

For CLI installation, MCP configuration, verification, and troubleshooting, see references/setup.md.