Search and analyze Splunk logs with natural language query building and SPL execution
npx claudepluginhub infiquetra/infiquetra-claude-plugins --plugin splunk

This skill uses the workspace's default tool permissions.
You are helping the user search and analyze Splunk logs through natural language interactions.
Verify environment variables are set:
echo $SPLUNK_TOKEN
echo $SPLUNK_HOST
If not set:
- SPLUNK_TOKEN: Get from Splunk Settings → Tokens
- SPLUNK_HOST: Splunk hostname (e.g., splunk.example.com)

Most common operation - submit search and wait for results:
python plugins/splunk/skills/splunk-search/scripts/splunk_client.py search execute \
--query 'index=prod service=wallet-service level=ERROR' \
--earliest-time '-1h' \
--timeout 60
Parameters:
- --query: SPL search query (will prepend 'search' if missing)
- --earliest-time: Start time (default: -1h) - supports relative times
- --latest-time: End time (default: now)
- --max-count: Max results (default: 100)
- --timeout: Max wait seconds (default: 60)

For long-running searches:
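The note that `search` is prepended when missing can be sketched as a small normalization step. This is a hypothetical helper mirroring the `--query` behavior described above, not the client's actual code:

```python
def normalize_spl(query: str) -> str:
    """Prepend the 'search' command unless the query already starts
    with it or begins with a leading pipe (generating commands such
    as '| tstats ...' must not be prefixed)."""
    q = query.strip()
    if q.startswith("search ") or q.startswith("|"):
        return q
    return f"search {q}"
```

For example, `normalize_spl('index=prod level=ERROR')` returns `'search index=prod level=ERROR'`, while piped queries pass through unchanged.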
1. Submit search job:
python splunk_client.py search submit \
--query 'index=prod service=wallet-service | stats count by status_code'
Returns: {"success": true, "data": {"sid": "1234567890.12345"}}
2. Poll job status:
python splunk_client.py search poll --job-id 1234567890.12345
Returns job progress and status
3. Get results:
python splunk_client.py search results --job-id 1234567890.12345
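The submit/poll/results flow above can be sketched as a polling loop. Here `fetch_status` is an assumed callable returning the job's dispatch state (e.g. a thin wrapper around `splunk_client.py search poll`); the loop itself is a sketch, not part of the documented CLI:

```python
import time

def poll_until_done(fetch_status, timeout=60, interval=2.0):
    """Poll a Splunk search job until it reports DONE or FAILED,
    or until `timeout` seconds elapse. On timeout, keep the SID and
    resume polling later rather than resubmitting the search."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_status()
        if state in ("DONE", "FAILED"):
            return state
        time.sleep(interval)
    raise TimeoutError("job still running; keep the SID and poll again later")
```

Once `poll_until_done` returns "DONE", fetch the output with `search results --job-id <sid>` as shown above.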
List apps:
python splunk_client.py apps list
List indexes:
python splunk_client.py indexes list
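Under the hood these listings map to Splunk's REST API (GET `/services/data/indexes`). A sketch of building that request from the same environment variables, assuming the standard management port 8089 and bearer-token auth; no request is actually sent here:

```python
import os

def indexes_request(host=None, token=None):
    """Build the URL and headers for listing indexes via the Splunk
    REST API. Falls back to SPLUNK_HOST/SPLUNK_TOKEN from the
    environment when arguments are not given."""
    host = host or os.environ["SPLUNK_HOST"]
    token = token or os.environ["SPLUNK_TOKEN"]
    url = f"https://{host}:8089/services/data/indexes?output_mode=json"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers
```

The returned pair can be passed to any HTTP client; `output_mode=json` asks Splunk for JSON instead of the default Atom XML.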
User: "Show me errors from wallet-service in the last hour"
Response:
index=prod service=wallet-service level=ERROR

Time range: -1h to now

User: "Find all 500 errors from the API in the last 24 hours"
Response:
python splunk_client.py search execute \
--query 'index=prod source="api*" status_code=500' \
--earliest-time '-24h'
User: "Check logs around the time of incident PXXXXX (14:30)"
Response:
python splunk_client.py search execute \
  --query 'index=prod service=wallet-service' \
  --earliest-time '2026-02-26T14:25:00' \
  --latest-time '2026-02-26T14:35:00'
User: "What errors are most common in the checkout service?"
Response:
python splunk_client.py search execute \
--query 'index=prod service=checkout-service level=ERROR | stats count by error_message | sort -count'
User: "Show API response times over 2 seconds in the last hour"
Response:
python splunk_client.py search execute \
--query 'index=prod source="api*" response_time>2000 | stats avg(response_time) by endpoint'
Convert natural language to SPL:
| Natural Language | SPL Query |
|---|---|
| "errors from wallet-service" | `index=prod service=wallet-service level=ERROR` |
| "500 errors from API" | `index=prod source="api*" status_code=500` |
| "database timeouts" | `index=prod error_type=timeout` |
| "high memory usage" | `index=metrics metric_name=memory_usage` |
| "requests per minute" | `index=prod \| timechart span=1m count` |
Basic search:
index=prod service=wallet-service level=ERROR
With time range:
index=prod earliest=-1h latest=now
Field extraction:
index=prod | rex field=message "error_code=(?<code>\d+)"
Aggregation:
index=prod | stats count by service, level
Filtering:
index=prod | where response_time > 1000
Sorting:
index=prod | stats count by error_message | sort -count | head 10
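The `rex` extraction above uses a named capture group; the same pattern works directly with Python's `re` module, with one syntax difference: SPL accepts `(?<code>...)` while Python requires `(?P<code>...)`:

```python
import re

# Same named-group pattern as the SPL rex example above.
ERROR_CODE = re.compile(r"error_code=(?P<code>\d+)")

def extract_error_code(message: str):
    """Return the extracted error code string, or None when absent."""
    m = ERROR_CODE.search(message)
    return m.group("code") if m else None
```

For example, `extract_error_code("request failed error_code=504")` returns `"504"`.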
After incident acknowledgment:
Incident PXXXXX acknowledged (wallet-service, 14:32)
Check Splunk logs?
1. Errors in 5-minute window (14:27-14:37)
2. Stack traces and exceptions
3. Request volume patterns
4. Related service errors
Suggested query:
index=prod service=wallet-service earliest="2026-02-26T14:27:00" latest="2026-02-26T14:37:00" level=ERROR
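The 5-minute window bracketing the acknowledgment time can be computed mechanically; a sketch (the function name is hypothetical, the timestamps match the example above):

```python
from datetime import datetime, timedelta

def incident_window(ack_time: str, minutes: int = 5):
    """Return (earliest, latest) strings bracketing an incident
    acknowledgment time by +/- `minutes`, suitable for the
    earliest=/latest= terms in the query above."""
    t = datetime.fromisoformat(ack_time)
    delta = timedelta(minutes=minutes)
    fmt = "%Y-%m-%dT%H:%M:%S"
    return (t - delta).strftime(fmt), (t + delta).strftime(fmt)
```

For the 14:32 acknowledgment, `incident_window("2026-02-26T14:32:00")` yields the 14:27-14:37 window used in the suggested query.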
When creating defects:
Rally defect DE12345: Database timeout in wallet-service
Splunk evidence:
- 143 timeout errors in 10-minute window
- Query: SELECT * FROM transactions WHERE...
- Stack trace: [link to Splunk search]
High CPU in wallet-service (CloudWatch shows 85%)
Splunk log analysis:
- 234 slow queries (>1s) in last hour
- Pattern: Large transaction batch processing
- Recommendation: Optimize query or add batching
When user describes an issue:
User: "The checkout is slow"
Suggested Splunk queries:
1. Response times: index=prod service=checkout | stats avg(response_time) by endpoint
2. Error rates: index=prod service=checkout level=ERROR | timechart count
3. External calls: index=prod service=checkout | search *timeout* OR *slow*
4. Database queries: index=prod service=checkout source=*db* | stats avg(duration_ms)
{
"pattern_detected": "Database Connection Pool Exhaustion",
"evidence": [
"142 'connection timeout' errors",
"Peak at 14:32 (matches incident trigger)",
"Error message: 'Unable to acquire connection from pool'"
],
"related_searches": [
"Check connection pool metrics",
"Review database CPU usage",
"Find concurrent request count"
]
}
User input: "Show me yesterday's errors sorted by frequency"
Translation:
index=prod earliest=-1d@d latest=@d level=ERROR
| stats count by error_message
| sort -count
Explanation:
- earliest=-1d@d: Start of yesterday
- latest=@d: Start of today (end of yesterday)
- stats count by error_message: Group and count by error
- sort -count: Sort descending by count

Splunk supports flexible time ranges:
| Expression | Meaning |
|---|---|
| `-1h` | Last hour |
| `-24h` | Last 24 hours |
| `-7d` | Last 7 days |
| `-1d@d` | Start of yesterday |
| `@d` | Start of today |
| `@w0` | Start of this week (Sunday) |
| `@mon` | Start of this month |
| `2026-02-26T14:30:00` | Absolute time (ISO 8601) |
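The `@` snap modifiers truncate to a boundary before applying offsets. The `-1d@d`/`@d` pair from the "yesterday's errors" translation can be reproduced in Python:

```python
from datetime import datetime, timedelta

def yesterday_window(now: datetime):
    """Return (earliest, latest) datetimes equivalent to SPL's
    earliest=-1d@d latest=@d: snap to midnight today, then step
    back one day for the window start."""
    today = now.replace(hour=0, minute=0, second=0, microsecond=0)  # @d
    return today - timedelta(days=1), today                         # -1d@d, @d
```

Whatever the current time of day, the window always spans exactly midnight-to-midnight of the previous calendar day.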
- Always specify an index: index=prod is faster than searching all indexes
- Limit results with head or --max-count to avoid large result sets

Good:
index=prod service=wallet-service earliest=-1h | stats count by level
Bad:
* | search service=wallet-service | stats count by level
1. Missing credentials:
{
"error": true,
"message": "SPLUNK_TOKEN environment variable not set"
}
Solution: Set SPLUNK_TOKEN
2. Search timeout:
{
"error": true,
"message": "Search timeout after 60 seconds. Job still running (SID: 1234567890.12345)",
"sid": "1234567890.12345"
}
Solution: Use longer timeout or check job manually with poll/results
3. Invalid SPL syntax:
{
"error": true,
"message": "Error in 'stats' command: The argument 'by' is invalid.",
"status_code": 400
}
Solution: Fix SPL syntax
All commands return JSON:
{
"success": true,
"data": [
{
"_time": "2026-02-26T14:32:15.000Z",
"service": "wallet-service",
"level": "ERROR",
"message": "Database connection timeout"
}
],
"count": 1,
"elapsed_seconds": 2.34
}
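Since every command returns this JSON envelope, a caller can branch on the success/error fields uniformly. A minimal sketch (the function name is hypothetical):

```python
import json

def unwrap(raw: str):
    """Parse a splunk_client.py JSON response: return the 'data'
    payload on success, raise RuntimeError with the reported
    message when the envelope carries an error."""
    resp = json.loads(raw)
    if resp.get("error"):
        raise RuntimeError(resp.get("message", "unknown Splunk client error"))
    return resp.get("data")
```

This maps the error cases shown earlier (missing credentials, timeouts, bad SPL) onto exceptions, while successful searches hand back the result list directly.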
See references/ directory for:
- splunk-api.md: Complete Splunk REST API reference
- spl-reference.md: SPL query language guide