From elastic-agent-skills
Create and manage Kibana dashboards and visualizations declaratively via Node.js scripts. Supports GitOps, version control, and automated deployment for Kibana 9.4+.
```shell
npx claudepluginhub elastic/agent-skills --plugin elastic-cloud
```

This skill uses the workspace's default tool permissions.
Skill contents:

- `assets/bar-chart-esql.json`
- `assets/dashboard-basic.json`
- `assets/dashboard-with-visualizations.json`
- `assets/datatable.json`
- `assets/demo-dashboard.json`
- `assets/ecommerce-analytics-dashboard.json`
- `assets/line-chart-timeseries.json`
- `assets/metric-esql.json`
- `package.json`
- `references/chart-types-reference.md`
- `references/dashboard-api-reference.md`
- `references/visualizations-api-reference.md`
- `scripts/kibana-dashboards.js`
The Kibana dashboards and visualizations APIs provide a declarative, Git-friendly format for defining dashboards and visualizations. Definitions are minimal, diffable, and suitable for version control and LLM-assisted generation.
Key Benefits:

- Declarative, Git-friendly definitions that are minimal and diffable
- GitOps workflows, version control, and automated deployment
- Suitable for LLM-assisted generation
Version Requirement: Kibana 9.4+ (SNAPSHOT)
ES|QL Visualizations: ES|QL-based visualizations cannot be created via `/api/visualizations`. They must be created as inline panels within dashboards using the Dashboard API.

Inline vs Saved Object References: When embedding visualization panels in dashboards, prefer inline definitions over `ref_id` references. Inline definitions are more reliable and self-contained.
Kibana connection is configured via environment variables. Run `node scripts/kibana-dashboards.js test` to verify the connection. If the test fails, suggest these setup options to the user, then stop. Do not explore further until the connection test succeeds.
Option 1 — Elastic Cloud (Cloud ID + API key):

```shell
export KIBANA_CLOUD_ID="deployment-name:base64encodedcloudid"
export KIBANA_API_KEY="base64encodedapikey"
```

Option 2 — Direct URL + API key:

```shell
export KIBANA_URL="https://your-kibana:5601"
export KIBANA_API_KEY="base64encodedapikey"
```

Option 3 — Direct URL + basic auth:

```shell
export KIBANA_URL="https://your-kibana:5601"
export KIBANA_USERNAME="elastic"
export KIBANA_PASSWORD="changeme"
```
Use start-local to spin up Elasticsearch/Kibana locally, then source the generated `.env`:

```shell
curl -fsSL https://elastic.co/start-local | sh
source elastic-start-local/.env
export KIBANA_URL="$KB_LOCAL_URL"
export KIBANA_USERNAME="elastic"
export KIBANA_PASSWORD="$ES_LOCAL_PASSWORD"
```
Then run `node scripts/kibana-dashboards.js test` to verify the connection.
To skip TLS certificate verification (e.g., for self-signed certificates):

```shell
export KIBANA_INSECURE="true"
```
```shell
# Test connection and API availability
node scripts/kibana-dashboards.js test

# Dashboard operations
node scripts/kibana-dashboards.js dashboard get <id>
echo '<json>' | node scripts/kibana-dashboards.js dashboard create -
echo '<json>' | node scripts/kibana-dashboards.js dashboard update <id> -
node scripts/kibana-dashboards.js dashboard delete <id>
echo '<json>' | node scripts/kibana-dashboards.js dashboard upsert <id> -

# Visualization operations (standalone saved objects)
node scripts/kibana-dashboards.js vis list
node scripts/kibana-dashboards.js vis get <id>
echo '<json>' | node scripts/kibana-dashboards.js vis create -
echo '<json>' | node scripts/kibana-dashboards.js vis update <id> -
node scripts/kibana-dashboards.js vis delete <id>
echo '<json>' | node scripts/kibana-dashboards.js vis upsert <id> -
```
The API expects a flat request body with `title` and `panels` at the root level. The response wraps these in a `data` envelope alongside `id`, `meta`, and `spaces`.
```json
{
  "title": "My Dashboard",
  "panels": [ ... ],
  "time_range": {
    "from": "now-24h",
    "to": "now"
  }
}
```
Note: Dashboard IDs are auto-generated by the API. The script also accepts the legacy wrapped format `{ id?, data: { title, panels }, spaces? }` and unwraps it automatically.
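The unwrapping can be sketched as follows. This is an illustrative helper, not the actual logic inside `scripts/kibana-dashboards.js`; the function name is hypothetical.

```javascript
// Illustrative sketch of normalizing a legacy wrapped payload into the flat
// body the API expects. Not the actual implementation in
// scripts/kibana-dashboards.js; the helper name is hypothetical.
function normalizeDashboardBody(input) {
  // Legacy wrapped format: { id?, data: { title, panels, ... }, spaces? }
  if (input && typeof input === 'object' && input.data && !input.title) {
    return { ...input.data };
  }
  // Flat format: { title, panels, time_range? } passes through unchanged
  return input;
}
```

Both `{ title, panels }` and `{ data: { title, panels } }` normalize to the same flat body.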
Use inline definitions (properties directly in `config`) for self-contained, portable dashboards:
```json
{
  "title": "My Dashboard",
  "panels": [
    {
      "type": "vis",
      "id": "metric-panel",
      "grid": { "x": 0, "y": 0, "w": 12, "h": 6 },
      "config": {
        "title": "",
        "type": "metric",
        "data_source": { "type": "esql", "query": "FROM logs | STATS total = COUNT(*)" },
        "metrics": [{ "type": "primary", "column": "total", "label": "Total Count" }]
      }
    },
    {
      "type": "vis",
      "id": "chart-panel",
      "grid": { "x": 12, "y": 0, "w": 36, "h": 8 },
      "config": {
        "title": "Events Over Time",
        "type": "xy",
        "axis": {
          "x": { "scale": "temporal", "domain": { "type": "fit", "rounding": false } }
        },
        "layers": [
          {
            "type": "area",
            "data_source": {
              "type": "esql",
              "query": "FROM logs | WHERE @timestamp <= ?_tend AND @timestamp > ?_tstart | STATS count = COUNT(*) BY BUCKET(@timestamp, 75, ?_tstart, ?_tend)"
            },
            "x": { "column": "BUCKET(@timestamp, 75, ?_tstart, ?_tend)", "label": "@timestamp" },
            "y": [{ "column": "count" }]
          }
        ]
      }
    }
  ],
  "time_range": { "from": "now-24h", "to": "now" }
}
```
Dashboards use a 48-column, infinite-row grid. On 16:9 screens, approximately 20-24 rows are visible without scrolling. Design for density—place primary KPIs and key trends above the fold.
| Width | Columns | Height | Rows | Use Case |
|---|---|---|---|---|
| Full | 48 | Large | 14-16 | Wide time series, tables |
| Half | 24 | Standard | 10-12 | Primary charts |
| Quarter | 12 | Compact | 5-6 | KPI metrics |
| Sixth | 8 | Minimal | 4-5 | Dense metric rows |
Target: 8-12 panels above the fold. Use descriptive panel titles on the charts themselves instead of adding markdown headers.
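The sizing table above can be turned into a small layout helper. A minimal sketch (not part of the skill's scripts; the function name and panel ids are illustrative) that places a row of equal-width metric panels on the 48-column grid:

```javascript
// Lay out a row of equal-width panels on the 48-column grid.
// Illustrative helper; not part of scripts/kibana-dashboards.js.
function metricRow(ids, { y = 0, h = 5 } = {}) {
  const w = 48 / ids.length; // 4 ids -> quarter width (12 columns)
  return ids.map((id, i) => ({
    type: 'vis',
    id,
    grid: { x: i * w, y, w, h },
    // config (data_source, metrics, ...) would be filled in per panel
    config: { title: '', type: 'metric' },
  }));
}

const kpis = metricRow(['total-requests', 'error-rate', 'p95-latency', 'unique-hosts']);
// Panels land at x = 0, 12, 24, 36, each 12 columns wide and 5 rows tall.
```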
Grid Packing Rules:

- Avoid vertical gaps: track the bottom edge (`y + h`) of every panel. When starting a new row or placing a panel below another, its `y` coordinate must exactly match the `y + h` of the panel immediately above it.
- When panels share a row (the same `y` coordinate), they should generally have the exact same height (`h`). If they do not, you must fill the resulting empty vertical space before placing the next full-width panel.

Panel structure:

```json
{
  "type": "vis",
  "id": "unique-panel-id",
  "grid": { "x": 0, "y": 0, "w": 24, "h": 15 },
  "config": { ... }
}
```
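The packing rules can be machine-checked. A minimal sketch, assuming panels shaped like the structure above (the helper is hypothetical, not part of the skill):

```javascript
// Detect vertical gaps: every panel below the top row must start exactly at
// the bottom edge (y + h) of a panel whose columns overlap it.
// Illustrative checker; not part of scripts/kibana-dashboards.js.
function findFloatingPanels(panels) {
  return panels
    .filter((p) => p.grid.y > 0)
    .filter(
      (p) =>
        !panels.some(
          (q) =>
            q !== p &&
            q.grid.y + q.grid.h === p.grid.y && // bottom edge meets top edge
            q.grid.x < p.grid.x + p.grid.w && // horizontal overlap
            p.grid.x < q.grid.x + q.grid.w
        )
    )
    .map((p) => p.id);
}
```

An empty result means the grid is packed; returned ids mark panels floating above a vertical gap.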
| Property | Type | Required | Description |
|---|---|---|---|
| `type` | string | Yes | Embeddable type (e.g., `vis`, `markdown`, `map`) |
| `id` | string | No | Unique panel ID (auto-generated if omitted) |
| `grid` | object | Yes | Position and size (`x`, `y`, `w`, `h`) |
| `config` | object | Yes | Panel-specific configuration |
| Type | Description | ES\|QL Support |
|---|---|---|
| `metric` | Single metric value display | Yes |
| `xy` | Line, area, bar charts | Yes |
| `gauge` | Gauge visualizations | Yes |
| `heatmap` | Heatmap charts | Yes |
| `tag_cloud` | Tag/word cloud | Yes |
| `data_table` | Data tables | Yes |
| `region_map` | Region/choropleth maps | Yes |
| `pie`, `treemap`, `mosaic`, `waffle` | Partition charts | Yes |
Note: To create donut charts, use `pie` with `donut_hole` set to `"s"`, `"m"`, or `"l"` (small, medium, large hole). Use `"none"` for a solid pie.
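For example, a donut chart config might look like the following partial sketch. Only `type`, `donut_hole`, and the `data_source` pattern come from this document; the title, query, and the placement of `donut_hole` at the top level of `config` are assumptions — see the Chart Types Reference for the full partition schema.

```json
{
  "title": "Requests by Host",
  "type": "pie",
  "donut_hole": "m",
  "data_source": { "type": "esql", "query": "FROM logs | STATS count = COUNT() BY host" }
}
```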
There are three dataset types supported in the Visualizations API. Each uses different patterns for specifying metrics and dimensions.
Use `data_view_reference` with aggregation operations. Kibana performs the aggregations automatically.
```json
{
  "data_source": {
    "type": "data_view_reference",
    "ref_id": "90943e30-9a47-11e8-b64d-95841ca0b247"
  }
}
```
Available operations: `count`, `average`, `sum`, `max`, `min`, `unique_count`, `median`, `standard_deviation`, `percentile`, `percentile_rank`, `last_value`, `date_histogram`, `terms`. See the Chart Types Reference for details.
Use `esql` with a query string. Reference the output columns using `{ column: 'column_name' }`.
```json
{
  "data_source": {
    "type": "esql",
    "query": "FROM logs | STATS count = COUNT(), avg_bytes = AVG(bytes) BY host"
  }
}
```
ES|QL Column Reference Pattern:

```json
{ "column": "count" }
```
Key Difference: With ES|QL, you write the aggregation in the query itself, then reference the resulting columns. With data view, you specify the aggregation operation and Kibana performs it.
Important: ES|QL visualizations cannot be created via `/api/visualizations`. They must be created as inline panels in dashboards via the Dashboard API.
Use `data_view_spec` with an `index_pattern` for ad-hoc index patterns without a saved data view:
```json
{
  "data_source": {
    "type": "data_view_spec",
    "index_pattern": "logs-*",
    "time_field": "@timestamp"
  }
}
```
For detailed schemas and all chart type options, see Chart Types Reference.
Metric (Data View):
```json
{
  "type": "metric",
  "data_source": { "type": "data_view_reference", "ref_id": "90943e30-9a47-11e8-b64d-95841ca0b247" },
  "metrics": [{ "type": "primary", "operation": "count", "label": "Total Requests" }]
}
```
Metric (ES|QL):
```json
{
  "type": "metric",
  "data_source": { "type": "esql", "query": "FROM logs | STATS count = COUNT()" },
  "metrics": [{ "type": "primary", "column": "count", "label": "Total Requests" }]
}
```
XY Bar Chart (Data View):
```json
{
  "title": "Top Hosts",
  "type": "xy",
  "axis": { "x": { "title": { "visible": false } }, "y": { "anchor": "start", "title": { "visible": false } } },
  "layers": [
    {
      "type": "bar_horizontal",
      "data_source": { "type": "data_view_reference", "ref_id": "90943e30-9a47-11e8-b64d-95841ca0b247" },
      "x": { "operation": "terms", "fields": ["host.keyword"], "limit": 10 },
      "y": [{ "operation": "count" }]
    }
  ]
}
```
XY Time Series (ES|QL):
```json
{
  "title": "Requests Over Time",
  "type": "xy",
  "axis": {
    "x": { "title": { "visible": false }, "scale": "temporal", "domain": { "type": "fit", "rounding": false } },
    "y": { "anchor": "start", "title": { "visible": false } }
  },
  "layers": [
    {
      "type": "line",
      "data_source": {
        "type": "esql",
        "query": "FROM logs | WHERE @timestamp <= ?_tend AND @timestamp > ?_tstart | STATS count = COUNT() BY BUCKET(@timestamp, 75, ?_tstart, ?_tend)"
      },
      "x": { "column": "BUCKET(@timestamp, 75, ?_tstart, ?_tend)", "label": "@timestamp" },
      "y": [{ "column": "count" }]
    }
  ]
}
```
Tip: Always hide axis titles when the panel title is descriptive. Use `bar_horizontal` for categorical data with long labels. Use `axis` for axis configuration.
See `assets/` for ready-to-use definitions: `demo-dashboard.json`, `dashboard-with-visualizations.json`, `metric-esql.json`, `bar-chart-esql.json`, `line-chart-timeseries.json`.
| Error | Solution |
|---|---|
| "401 Unauthorized" | Check `KIBANA_USERNAME`/`KIBANA_PASSWORD` or `KIBANA_API_KEY` |
| "404 Not Found" | Verify dashboard/visualization ID exists |
| "409 Conflict" | Dashboard/viz already exists; delete first or use update |
| Schema validation error | Ensure column names match query output; use `{ column: 'name' }` for ES\|QL |
| Metric chart structure | Requires `metrics` array: `[{ type: 'primary', ... }]` |
| XY chart fails | Put `data_source` inside each layer, use `axis` (singular) |
| `ref_id` panels missing | Prefer inline definitions (properties in `config`) over `ref_id` |
- Design for density — Operational dashboards must show 8-12 panels above the fold (within the first 24 rows). Use compact panel heights: metrics MUST be `h=4` to `h=6`, and charts MUST be `h=8` to `h=12`.
- Never use Markdown for titles/headers — Do NOT add markdown panels to act as dashboard titles or section dividers. This wastes critical vertical space. Use descriptive panel titles on the charts themselves.
- Prioritize above the fold — Primary KPIs and key trends must be placed at `y=0`. Deep-dives and data tables should be placed below the charts.
- Use descriptive chart titles, hide axis titles — Write titles that explain what the chart shows (e.g., "Requests by Response Code"). A good panel title makes axis titles redundant. Always set `axis.x.title.visible: false` and `axis.y.title.visible: false`.
- Choose the right dataset type — Use `data_view_reference` for simple aggregations and `esql` for complex queries.
- Inline definitions — Prefer inline properties in `config` over `config.ref_id` for portable dashboards.
- Test connection first — Run `node scripts/kibana-dashboards.js test` before creating resources.
- Get existing examples — Use `vis get <id>` to see the exact schema for different chart types (the CLI subcommand is `vis`).
- Avoid redundant metric labels — For ES|QL metrics, avoid using both a panel title and an inner metric label, as it wastes space. Set the panel title to `""` and configure the human-readable label by aliasing the ES|QL column name using backticks (e.g., ``STATS `Total Requests` = COUNT()`` and `"column": "Total Requests"`).
- Format numbers with units — Use the `format` property on metrics and y-axis columns to display proper units instead of raw numbers. Types: `bytes`, `bits`, `number`, `percent`, `duration`, `custom`. Example: `"format": { "type": "bytes", "decimals": 0 }`. See the Chart Types Reference for the full format table.
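Putting the last two practices together, a compact metric panel might look like this sketch. The panel id, grid position, and the `bytes` field are illustrative; the label comes from the aliased ES|QL column and the `format` renders it in byte units:

```json
{
  "type": "vis",
  "id": "total-bytes",
  "grid": { "x": 0, "y": 0, "w": 12, "h": 5 },
  "config": {
    "title": "",
    "type": "metric",
    "data_source": { "type": "esql", "query": "FROM logs | STATS `Total Bytes` = SUM(bytes)" },
    "metrics": [
      { "type": "primary", "column": "Total Bytes", "format": { "type": "bytes", "decimals": 0 } }
    ]
  }
}
```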
| Aspect | Data View | ES\|QL |
|---|---|---|
| Dataset | `{ type: 'data_view_reference', ref_id: '...' }` | `{ type: 'esql', query: '...' }` |
| Metric chart | `metrics: [{ type: 'primary', operation: 'count' }]` | `metrics: [{ type: 'primary', column: 'col' }]` |
| XY columns | `{ operation: 'terms', fields: ['host'], limit: 10 }` | `{ column: 'host' }` |
| Static values | `{ operation: 'static_value', value: 100 }` | Use `EVAL` in query (see below) |
| XY data_source | Inside each layer | Inside each layer |
| Tagcloud | `tag_by: { operation: 'terms', ... }` | `tag_by: { column: '...' }` |
| Datatable props | `metrics`, `rows` arrays | `metrics`, `rows` arrays with `{ column: '...' }` |
Key Pattern: ES|QL uses `{ column: 'column_name' }` to reference columns from the query result. The aggregation happens in the ES|QL query itself. Use `data_source` for all data source configuration.

Data source types: Use `data_view_reference` (with `ref_id`) for saved data views, `data_view_spec` (with `index_pattern`) for ad-hoc index patterns, and `esql` for ES|QL queries.
Use `BUCKET(@timestamp, n, ?_tstart, ?_tend)` for time series charts. The numeric argument is the target number of buckets. Kibana injects `?_tstart`/`?_tend` automatically. Do not reassign the result — use the full expression `BUCKET(@timestamp, 75, ?_tstart, ?_tend)` as both the `BY` clause and the column reference. Set `"label"` to provide a friendly display name:

```json
"x": { "column": "BUCKET(@timestamp, 75, ?_tstart, ?_tend)", "label": "@timestamp" }
```
Important: To get a proper multilevel time axis (e.g., "9th / April 2026 / 10th") instead of raw timestamp labels, you must set `"scale": "temporal"` on the x-axis:

```json
"axis": {
  "x": { "scale": "temporal", "domain": { "type": "fit", "rounding": false } }
}
```

Without `"scale": "temporal"`, Kibana treats the bucket column as categorical text and renders unsorted, verbose timestamp strings.
```esql
FROM logs | WHERE @timestamp <= ?_tend AND @timestamp > ?_tstart | STATS count = COUNT(*) BY BUCKET(@timestamp, 75, ?_tstart, ?_tend)
```
Note: `BUCKET(@timestamp, n, ?_tstart, ?_tend)` requires a `WHERE` clause with `?_tstart`/`?_tend` bounds (Kibana injects these). Alternatively, use `BUCKET(@timestamp, 1 hour)` with a fixed duration — this does not require parameters but won't auto-scale.
Use `DATE_EXTRACT(part, date)` with ES|QL part names (not SQL keywords). The part string must be double-quoted. Common parts: `"hour_of_day"`, `"day_of_week"`, `"day_of_month"`, `"month_of_year"`, `"year"`, `"day_of_year"`.

```esql
FROM logs | STATS count = COUNT() BY hour = DATE_EXTRACT("hour_of_day", @timestamp), day = DATE_EXTRACT("day_of_week", @timestamp)
```
ES|QL does not support `static_value` operations. Instead, create constant columns using `EVAL`:

```esql
FROM logs | STATS count = COUNT() | EVAL max_value = 20000, goal = 15000
```

Then reference with `{ "column": "max_value" }`. For dynamic reference values, use aggregation functions like `PERCENTILE()` or `MAX()` in the query.
The APIs follow these principles: