esql
This skill should be used when the user asks to "write an ES|QL query", "query Elasticsearch with ES|QL", "convert SQL to ES|QL", "filter Elasticsearch data", "aggregate data in ES|QL", "use STATS in ES|QL", mentions ES|QL commands (FROM, WHERE, KEEP, STATS, EVAL, SORT, LIMIT), or needs help with Elasticsearch piped query language syntax.
npx claudepluginhub pixelsquared/claude-skills --plugin esql

This skill uses the workspace's default tool permissions.
ES|QL (Elasticsearch Query Language) is a piped query language for filtering, transforming, and analyzing data in Elasticsearch. It uses a pipe-based syntax where commands chain together to process data step-by-step.
Every ES|QL query follows this pattern:
source-command
| processing-command1
| processing-command2
| ...
The result is the table produced by the final processing command.
Every query must start with a source command:
| Command | Purpose | Syntax |
|---|---|---|
| FROM | Query indices/data streams | FROM index_pattern [METADATA fields] |
| ROW | Create literal rows | ROW col1 = val1, col2 = val2 |
| SHOW | Display system info | SHOW INFO |
FROM logs-*
FROM metrics-2024.01.*
FROM logs-* METADATA _index, _id
FROM cluster_one:logs-*, cluster_two:logs-*
Filter rows based on conditions:
FROM logs-*
| WHERE status_code >= 400
| WHERE host.name == "server1" AND @timestamp > NOW() - 1 day
Supported operators: ==, !=, <, >, <=, >=, AND, OR, NOT, IN, LIKE, RLIKE, IS NULL, IS NOT NULL
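A sketch combining several of these operators in one filter (field names such as status_code, message, and user.name are illustrative, not guaranteed to exist in your indices):

```esql
FROM logs-*
| WHERE status_code IN (500, 502, 503)
| WHERE message LIKE "*timeout*"
| WHERE user.name IS NOT NULL
```

LIKE uses wildcard patterns (* and ?), while RLIKE accepts regular expressions.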
Specify which columns to return and their order:
FROM logs-*
| KEEP @timestamp, host.name, message
| KEEP host.*, @timestamp
Remove unwanted columns:
FROM logs-*
| DROP agent.*, ecs.*
FROM logs-*
| RENAME old_name AS new_name
| RENAME source.ip AS src_ip, destination.ip AS dst_ip
FROM logs-*
| SORT @timestamp DESC
| SORT status_code ASC, response_time DESC NULLS LAST
FROM logs-*
| LIMIT 100
If no LIMIT is specified, ES|QL applies a default limit of 1,000 rows; a query can return at most 10,000 rows.
Group and aggregate data:
FROM logs-*
| STATS count = COUNT(*) BY host.name
| STATS avg_response = AVG(response_time), max_response = MAX(response_time) BY service.name
| STATS requests = COUNT(*) WHERE status_code < 400, errors = COUNT(*) WHERE status_code >= 400 BY host.name
| Function | Description |
|---|---|
| COUNT(*), COUNT(field) | Count rows/values |
| COUNT_DISTINCT(field) | Approximate distinct count |
| SUM(number) | Sum values |
| AVG(number) | Average value |
| MIN(field), MAX(field) | Minimum/maximum |
| MEDIAN(number) | 50th percentile |
| PERCENTILE(number, pct) | Value at percentile |
| STD_DEV(number) | Standard deviation |
| VALUES(field) | Collect all values |
| TOP(field, limit, order) | Top N values |
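A sketch using several of these functions together (field names like user.id, response_time, and service.name are illustrative):

```esql
FROM logs-*
| STATS
    unique_users = COUNT_DISTINCT(user.id),
    p95 = PERCENTILE(response_time, 95),
    slowest = TOP(response_time, 3, "desc")
  BY service.name
```

TOP and VALUES return multivalued columns, so one row per group can carry several values.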
Use BUCKET for time-series aggregations:
FROM logs-*
| STATS count = COUNT(*) BY bucket = BUCKET(@timestamp, 1 hour)
| STATS avg_cpu = AVG(cpu.usage) BY BUCKET(@timestamp, 5 minutes), host.name
Create new columns with expressions:
FROM logs-*
| EVAL response_sec = response_time / 1000
| EVAL status_category = CASE(
status_code < 300, "success",
status_code < 400, "redirect",
status_code < 500, "client_error",
"server_error"
)
| EVAL full_name = CONCAT(first_name, " ", last_name)
FROM logs-*
| DISSECT message "%{client_ip} - %{user} [%{timestamp}] \"%{method} %{path}\""
FROM logs-*
| GROK message "%{IP:client_ip} - %{USER:user} \\[%{HTTPDATE:timestamp}\\]"
Add data from lookup indices:
FROM logs-*
| ENRICH geo_lookup ON client.ip WITH country, city
| ENRICH user_info ON user.id WITH department, manager
Requires pre-configured enrich policies.
FROM events
| LOOKUP JOIN users ON user_id
FROM orders
| LOOKUP JOIN customers ON customer_id
LOOKUP JOIN behaves like a left join: unmatched rows are kept with null values for the joined columns. The joined index must be created with index.mode set to lookup, and the join field must exist in both sides.
FROM logs-*
| WHERE @timestamp > NOW() - 24 hours
| STATS count = COUNT(*) BY host.name
| SORT count DESC
| LIMIT 10
FROM logs-*
| WHERE @timestamp > NOW() - 1 hour
| STATS
total = COUNT(*),
errors = COUNT(*) WHERE status_code >= 500
BY service.name
| EVAL error_rate = errors / total * 100
| SORT error_rate DESC
FROM metrics-*
| WHERE @timestamp > NOW() - 6 hours
| STATS
p50 = PERCENTILE(response_time, 50),
p95 = PERCENTILE(response_time, 95),
p99 = PERCENTILE(response_time, 99)
BY bucket = BUCKET(@timestamp, 5 minutes)
| SORT bucket
Literals:
| Literal | Examples |
|---|---|
| String | "hello world" |
| Escape sequences | \n, \t, \" |
| Triple-quoted string | """contains "quotes" inside""" |
| Integer | 42, -17 |
| Float | 3.14, .5, 1.2e-3, 4E5 |
| Date string | "2024-01-15T10:30:00Z" |
| Date math | NOW() - 1 day, NOW() - 2 hours |
| Time span | 1 day, 2 hours, 30 minutes |

Comments:
// Single line comment
/* Multi-line
comment */
FROM logs-*
| WHERE MATCH(message, "error timeout")
| WHERE MATCH_PHRASE(message, "connection refused")
| WHERE KQL("message: error AND host.name: server*")
| WHERE QSTR("message:error")
Common conversion functions:
| EVAL str_val = TO_STRING(numeric_field)
| EVAL int_val = TO_INTEGER(string_field)
| EVAL ts = TO_DATETIME(date_string)
| EVAL ip_addr = TO_IP(ip_string)
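A sketch converting string fields before filtering on them (the field names status_code_str and client_ip_str are hypothetical keyword fields):

```esql
FROM logs-*
| EVAL status = TO_INTEGER(status_code_str), client = TO_IP(client_ip_str)
| WHERE status >= 400 AND CIDR_MATCH(client, "10.0.0.0/8")
| KEEP @timestamp, status, client
```

Conversions are needed because comparison and network functions such as CIDR_MATCH expect typed values, not raw strings.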
Execute ES|QL via the _query endpoint:
POST /_query
{
"query": "FROM logs-* | WHERE status >= 400 | LIMIT 10"
}
Response formats: json (default), csv, tsv, txt
POST /_query?format=csv
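A minimal Python sketch for calling the _query endpoint, assuming Elasticsearch is reachable at localhost:9200 with no authentication (adjust the URL and add auth headers for a real cluster); build_esql_request only constructs the URL and body, so it can be inspected without a live cluster:

```python
import json
from urllib import request

def build_esql_request(query: str, fmt: str = "json") -> tuple[str, bytes]:
    """Build the _query endpoint URL and JSON body for an ES|QL query."""
    url = f"http://localhost:9200/_query?format={fmt}"
    body = json.dumps({"query": query}).encode("utf-8")
    return url, body

def run_esql(query: str, fmt: str = "json") -> str:
    """POST the query to Elasticsearch and return the raw response text."""
    url, body = build_esql_request(query, fmt)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires a reachable cluster
        return resp.read().decode("utf-8")

# Build (but do not send) a CSV-format request:
url, body = build_esql_request("FROM logs-* | WHERE status >= 400 | LIMIT 10",
                               fmt="csv")
```

The format query parameter selects the response encoding (json, csv, tsv, or txt) without changing the request body.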
For comprehensive function references and advanced patterns:
references/functions.md - Complete function reference by category
references/advanced-patterns.md - Complex query patterns and optimization
examples/common-queries.esql - Ready-to-use query templates