Queries OpenSearch logs using PPL for severity filtering, trace correlation, error patterns, and volume analysis in OTEL indices.
This skill provides PPL (Piped Processing Language) query templates for searching and analyzing log data stored in OpenSearch. Logs are stored in the `logs-otel-v1-*` index pattern. All queries use the OpenSearch PPL API at `/_plugins/_ppl` with HTTPS and basic authentication.
Credentials are read from the `.env` file (default: `admin` / `My_password_123!@#`). All curl commands use `-k` to skip TLS certificate verification for local development.
All commands below use these variables. Set them in your environment or use the defaults:
| Variable | Default | Description |
|---|---|---|
| `OPENSEARCH_ENDPOINT` | `https://localhost:9200` | OpenSearch base URL |
| `OPENSEARCH_USER` | `admin` | OpenSearch username |
| `OPENSEARCH_PASSWORD` | `My_password_123!@#` | OpenSearch password |
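If you keep these values in the `.env` file mentioned above, they can be exported into the current shell before running any queries. A minimal sketch, assuming the file contains plain `KEY=value` lines with no quoting:

```shell
# Export every KEY=value pair from .env into the environment, if the file exists.
# `set -a` auto-exports all variables assigned afterwards; `set +a` turns it off.
if [ -f .env ]; then
  set -a
  . ./.env
  set +a
fi

echo "Using endpoint: ${OPENSEARCH_ENDPOINT:-https://localhost:9200}"
```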
All PPL queries in this skill use this curl pattern:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "<PPL_QUERY>"}'
```
The examples below show the full command for clarity, but only the PPL query varies.
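Since only the query varies, the boilerplate can also be wrapped in a small helper function. A sketch; the `ppl` and `ppl_payload` names are arbitrary, and the query must not itself contain unescaped double quotes or backslashes:

```shell
# Build the JSON request body for the PPL endpoint from a raw query string.
# The query is embedded verbatim, so it must not contain unescaped
# double quotes or backslashes.
ppl_payload() {
  printf '{"query": "%s"}' "$1"
}

# Run a PPL query against the configured endpoint
# (defaults match the variable table above).
ppl() {
  curl -sk -u "${OPENSEARCH_USER:-admin}:${OPENSEARCH_PASSWORD:-My_password_123!@#}" \
    -X POST "${OPENSEARCH_ENDPOINT:-https://localhost:9200}/_plugins/_ppl" \
    -H 'Content-Type: application/json' \
    -d "$(ppl_payload "$1")"
}

# Usage:
#   ppl "source=logs-otel-v1-* | where severityText = 'ERROR' | head 5"
```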
Key fields available in the `logs-otel-v1-*` index:
| Field | Type | Description |
|---|---|---|
| `severityText` | keyword | Log level string (ERROR, WARN, INFO, DEBUG) |
| `severityNumber` | integer | Numeric severity (1–24, higher = more severe; ERROR=17, WARN=13, INFO=9, DEBUG=5) |
| `traceId` | keyword | Correlated trace identifier (links a log to a distributed trace) |
| `spanId` | keyword | Correlated span identifier (links a log to a specific span within a trace) |
| `resource.attributes.service.name` | keyword | Service that produced the log entry (backtick-quote this field in PPL queries) |
| `body` | text | Log message body content |
| `@timestamp` | date | Log entry timestamp |
Note: unlike the trace span index (`otel-v1-apm-span-*`), which has a top-level `serviceName` field, the log index stores the service name at `resource.attributes.service.name`. Always use backtick quoting in PPL: `` `resource.attributes.service.name` ``.
Query all error-level logs:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''ERROR'\'' | fields traceId, spanId, `resource.attributes.service.name`, body, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Query all warning-level logs:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''WARN'\'' | fields traceId, spanId, `resource.attributes.service.name`, body, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Query all info-level logs:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''INFO'\'' | fields traceId, spanId, `resource.attributes.service.name`, body, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Use `severityNumber` for numeric comparisons. For example, find all logs at WARN level or above (`severityNumber >= 13`):
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityNumber >= 13 | fields severityText, severityNumber, `resource.attributes.service.name`, body, `@timestamp` | sort - `@timestamp` | head 20"}'
```
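The numeric scale also combines with `eval` to derive a coarse severity class. A sketch built as a shell variable so it drops into the curl pattern above; it assumes PPL's `if()` condition function is available in your OpenSearch version, and the class names are made up:

```shell
# Bucket logs into 'actionable' (WARN or worse, severityNumber >= 13)
# vs 'routine', then count each bucket. Assumes PPL's if() function.
query="source=logs-otel-v1-* \
| eval class = if(severityNumber >= 13, 'actionable', 'routine') \
| stats count() as n by class"

echo "$query"
```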
Find all logs associated with a specific trace:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where traceId = '\''<TRACE_ID>'\'' | fields traceId, spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort `@timestamp`"}'
```
Find error logs for a specific trace:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where traceId = '\''<TRACE_ID>'\'' AND severityText = '\''ERROR'\'' | fields spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort `@timestamp`"}'
```
Identify error patterns by aggregating log counts grouped by severity level and service name:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | stats count() by severityText, `resource.attributes.service.name`"}'
```
Count error logs per service, sorted by highest error count:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''ERROR'\'' | stats count() as error_count by `resource.attributes.service.name` | sort - error_count"}'
```
Analyze log volume over time using `stats count() by span(@timestamp, 1h)`:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | stats count() as log_count by span(`@timestamp`, 1h)"}'
```
Change the interval to suit your analysis. Common intervals: `5m`, `15m`, `1h`, `1d`.
15-minute buckets:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | stats count() as log_count by span(`@timestamp`, 15m)"}'
```
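The bucket size can also be parameterized in a small wrapper so one definition covers every interval. A sketch; the `volume_query` name is arbitrary, and the output string is meant to be passed to the curl pattern above:

```shell
# Build a volume-analysis PPL query for an arbitrary bucket size (default 1h).
volume_query() {
  interval="${1:-1h}"
  printf 'source=logs-otel-v1-* | stats count() as log_count by span(`@timestamp`, %s)' "$interval"
}

echo "$(volume_query 5m)"
```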
Track error log volume specifically:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''ERROR'\'' | stats count() as error_count by span(`@timestamp`, 1h), `resource.attributes.service.name`"}'
```
Search log body content for a specific string using `where` with `like`:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where body like '\''%timeout%'\'' | fields traceId, spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Use the `match` relevance function for full-text search on the body field:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where match(body, '\''connection refused'\'') | fields traceId, spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Use `match_phrase` for exact phrase matching in the body:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where match_phrase(body, '\''failed to connect'\'') | fields traceId, spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort - `@timestamp` | head 20"}'
```
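Beyond substring and relevance matching, structured values can be pulled out of the body with `parse` and aggregated. A sketch built as a shell variable for the curl pattern above; the `took <n> ms` pattern and the `ms`/`ms_num`/`avg_ms` names are illustrative, and it assumes PPL's `cast()` is available (parsed fields come back as strings, so the sketch casts before averaging):

```shell
# Pull a numeric duration out of the body with a named capture group,
# cast it to an integer (parse yields strings), and aggregate per service.
# The "took <n> ms" pattern is illustrative; adapt it to your log format.
query="source=logs-otel-v1-* \
| parse body 'took (?<ms>[0-9]+) ms' \
| eval ms_num = cast(ms as int) \
| stats avg(ms_num) as avg_ms by \`resource.attributes.service.name\`"

echo "$query"
```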
Find all logs associated with a specific span to understand what happened during that operation:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where spanId = '\''<SPAN_ID>'\'' | fields traceId, spanId, severityText, body, `resource.attributes.service.name`, `@timestamp` | sort `@timestamp`"}'
```
Find error logs and their associated trace spans. First, find error logs with traceId:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''ERROR'\'' AND traceId != '\'''\'' | fields traceId, spanId, body, `resource.attributes.service.name`, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Then query the trace index for the corresponding spans using the traceId from the error log:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=otel-v1-apm-span-* | where traceId = '\''<TRACE_ID>'\'' | fields traceId, spanId, serviceName, name, `status.code`, durationInNanos, startTime | sort startTime"}'
```
Correlate exception spans with their associated error logs using shared traceId and spanId:
```shell
curl -sk -u "$OPENSEARCH_USER:$OPENSEARCH_PASSWORD" \
  -X POST "$OPENSEARCH_ENDPOINT/_plugins/_ppl" \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where traceId = '\''<TRACE_ID>'\'' AND spanId = '\''<SPAN_ID>'\'' AND severityText = '\''ERROR'\'' | fields body, severityText, `@timestamp`"}'
```
The following PPL commands are particularly useful when analyzing log data:
| Command | Use Case |
|---|---|
| `stats` | Aggregate log counts by severity, service, or time bucket |
| `where` | Filter logs by severity level, traceId, spanId, service, or body content |
| `fields` | Select specific fields to return (body, severityText, traceId, etc.) |
| `sort` | Order results by timestamp or severity |
| `head` | Limit result count for quick exploration |
| `grok` | Extract structured fields from unstructured log body text using grok patterns |
| `parse` | Parse log body content using regex patterns to extract fields |
| `rex` | Extract fields from text using named capture groups |
| `patterns` | Discover common log message patterns automatically |
| `rare` | Find the least frequent log messages or error types |
| `top` | Find the most frequent log messages, services, or severity levels |
| `timechart` | Visualize log volume or error counts over time buckets |
| `eval` | Compute derived fields (e.g., classify severity ranges) |
| `dedup` | Remove duplicate log entries (e.g., deduplicate by body to find unique messages) |
| `fillnull` | Replace null field values with defaults for cleaner output |
| `regex` | Filter logs using regular expression patterns on field values |
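As a quick sketch of the exploration commands above, a few one-liners built as shell variables that drop straight into the curl pattern; verify command availability in your OpenSearch version against the PPL reference:

```shell
# Most frequent severity levels per service.
top_q='source=logs-otel-v1-* | top 5 severityText by `resource.attributes.service.name`'

# Least frequent severity values (often surfaces unusual levels).
rare_q='source=logs-otel-v1-* | rare severityText'

# Unique error message bodies, newest first.
dedup_q='source=logs-otel-v1-* | where severityText = '\''ERROR'\'' | dedup body | sort - `@timestamp` | head 20'

echo "$top_q"
echo "$rare_q"
echo "$dedup_q"
```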
To query logs on Amazon OpenSearch Service, replace the local endpoint and authentication with AWS SigV4:
```shell
curl -s --aws-sigv4 "aws:amz:REGION:es" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  -X POST https://DOMAIN-ID.REGION.es.amazonaws.com/_plugins/_ppl \
  -H 'Content-Type: application/json' \
  -d '{"query": "source=logs-otel-v1-* | where severityText = '\''ERROR'\'' | fields traceId, spanId, `resource.attributes.service.name`, body, `@timestamp` | sort - `@timestamp` | head 20"}'
```
Key differences from the local stack:

- Endpoint: `https://DOMAIN-ID.REGION.es.amazonaws.com`
- Authentication: `--aws-sigv4 "aws:amz:REGION:es"` with `--user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY"` replaces basic auth
- The PPL path (`/_plugins/_ppl`) and query syntax are identical to the local stack
- No `-k` flag needed; AWS managed endpoints use valid TLS certificates