```shell
npx claudepluginhub microsoft/skills-for-fabric --plugin fabric-consumption
```

This skill uses the workspace's default tool permissions.
> **Update Check — ONCE PER SESSION (mandatory)**
The first time this skill is used in a session, run the check-updates skill before proceeding.

- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
**CRITICAL NOTES**
- To find workspace details (including its ID) from a workspace name: list all workspaces, then use JMESPath filtering.
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then use JMESPath filtering.
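The list-then-filter step can be done client-side with `jq` or server-side with `az rest`'s built-in JMESPath `--query` argument. A sketch against a canned response in the shape of `GET /v1/workspaces` (`MyWorkspace` and the IDs are placeholders):

```shell
# Canned response in the shape returned by GET https://api.fabric.microsoft.com/v1/workspaces
response='{"value":[{"id":"aaaa-1111","displayName":"MyWorkspace"},{"id":"bbbb-2222","displayName":"Other"}]}'

# Client-side: filter the listing down to one workspace ID with jq
ws_id=$(echo "$response" | jq -r '.value[] | select(.displayName=="MyWorkspace") | .id')
echo "$ws_id"   # → aaaa-1111

# Server-side equivalent, appended to the az rest call (JMESPath):
#   az rest ... --query "value[?displayName=='MyWorkspace'].id | [0]" --output tsv
```

The same pattern applies to item lookup: list the items of one type in the workspace, then `select()` on `.displayName`.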
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for finding workspace id by its name or item id by its name, item type, and workspace id] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI is per-item |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl not az rest (different token audience) |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Connection Fundamentals | EVENTHOUSE-CONSUMPTION-CORE.md § Connection Fundamentals | Cluster URI discovery, az rest, REST API |
| Schema Discovery and Security | EVENTHOUSE-CONSUMPTION-CORE.md § Schema Discovery and Security | Schema Discovery, Security — workspace roles + KQL DB roles |
| Monitoring and Diagnostics | EVENTHOUSE-CONSUMPTION-CORE.md § Monitoring and Diagnostics | |
| Performance Best Practices | EVENTHOUSE-CONSUMPTION-CORE.md § Performance Best Practices | Read before writing KQL — time filters, has vs contains |
| Common Consumption Patterns | EVENTHOUSE-CONSUMPTION-CORE.md § Common Consumption Patterns | Time-series, Top-N, percentile, dynamic fields |
| Gotchas, Troubleshooting, and Quick Reference | EVENTHOUSE-CONSUMPTION-CORE.md § Gotchas, Troubleshooting, and Quick Reference | Gotchas and Troubleshooting (12 issues), Quick Reference: Consumption Capabilities by Scenario |
| Table and Column Discovery | discovery-queries.md § Table and Column Discovery | Table Discovery, Column Statistics |
| Function and View Discovery | discovery-queries.md § Function and View Discovery | Function Discovery, Materialized View Discovery |
| Policy Discovery | discovery-queries.md § Policy Discovery | |
| External Tables and Ingestion Mappings | discovery-queries.md § External Tables and Ingestion Mappings | External Table Discovery, Ingestion Mapping Discovery |
| Security Discovery | discovery-queries.md § Security Discovery | |
| Database Overview Script | discovery-queries.md § Database Overview Script | |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | eventhouse-specific az rest connection steps |
| Agentic Exploration ("Chat With My Data") | SKILL.md § Agentic Exploration | Start here for data exploration |
| Running Queries | SKILL.md § Running Queries | az rest, output formatting, export |
| Monitoring | SKILL.md § Monitoring | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL queries and management commands via Kusto REST API; Fabric control-plane discovery | winget install Microsoft.AzureCLI |
| jq | JSON processing and output formatting | winget install jqlang.jq |
```shell
# Get workspace ID (if not known)
WS_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --resource "https://api.fabric.microsoft.com" \
  | jq -r '.value[] | select(.displayName=="MyWorkspace") | .id')

# List KQL Databases and get connection properties
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, id: .id, queryUri: .properties.queryServiceUri, dbName: .properties.databaseName}'
```
```shell
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyKqlDatabase"
```
**Important: body-file pattern.** KQL queries contain `|` (pipe) characters, which break shell escaping in both bash and PowerShell. Always write the JSON body to a temp file and reference it with `--body @<file>`. This is the recommended approach for all `az rest` KQL calls. On PowerShell, use `@{db="X";csl="..."} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM`, then `--body "@$env:TEMP\kql_body.json"`.
```shell
# Write body to temp file (avoids pipe escaping issues)
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyKqlDatabase","csl":"print Message = 'Connected successfully', Cluster = current_cluster_endpoint(), Timestamp = now()"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
When the user asks to explore or query an Eventhouse without specifying tables:
```text
Step 1 → .show tables                                                            // discover tables
Step 2 → .show table <TABLE> schema as json                                      // understand columns + types
Step 3 → <TABLE> | take 10                                                       // see sample data
Step 4 → <TABLE> | summarize count() by bin(Timestamp, 1h) | render timechart    // shape of data
Step 5 → Formulate targeted query based on user's question
```
After schema discovery, generate queries using actual column names and types:
```kusto
// Example: user asks "show me errors in the last hour"
// After discovering table "AppEvents" with columns: Timestamp, Level, Message, Source
AppEvents
| where Timestamp > ago(1h)
| where Level == "Error"
| summarize ErrorCount = count() by Source, bin(Timestamp, 5m)
| order by ErrorCount desc
```
Always use the temp-file pattern for `--body` with `az rest`: KQL pipes (`|`) break inline shell escaping.
```shell
# Run a KQL query
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | count"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
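The write-body-then-POST steps can be folded into one small helper. This is an illustrative sketch, not part of the skill: the function name `kql` is invented, it assumes `CLUSTER_URI` is set and `az login` has been run, and it builds the body with `jq -n --arg`, which JSON-escapes pipes and quotes so no manual escaping is needed:

```shell
# Illustrative helper: kql <db> <query>
kql() {
  local db="$1" query="$2"
  local body
  body=$(mktemp)
  # jq -n --arg JSON-escapes the query text, pipes and quotes included
  jq -n --arg db "$db" --arg csl "$query" '{db: $db, csl: $csl}' > "$body"
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/query" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @"$body" \
    | jq '.Tables[0].Rows'
  rm -f "$body"
}

# Usage:
#   kql MyDB 'Events | where Timestamp > ago(1h) | count'
```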
```shell
# Pretty-print results as a table with jq
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0] | [.Columns[].ColumnName] as $cols | .Rows[] | [$cols, .] | transpose | map({(.[0]): .[1]}) | add'
```
```shell
# Save results to file
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
```
```kusto
// Active queries
.show queries

// Recent commands (last hour)
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 80), Duration, State
| order by StartedOn desc

// Ingestion failures (for context when data seems stale)
.show ingestion failures
| where FailedOn > ago(24h)
| summarize count() by ErrorCode
| top 5 by count_
```
**MUST DO**

- `where Timestamp > ago(...)` must be present on time-series tables.
- Run `.show tables` and `.show table T schema as json` first.
- Use `has` for term search (indexed and fast); only fall back to `contains` for substring needs.
- Use `az rest` for CLI query sessions; the Fabric KQL MCP server for agent-integrated workflows.

**PREFER**

- `project` early to drop unneeded columns before aggregation.
- `materialize()` when a sub-expression is used multiple times.
- `take 100` for initial exploration; avoid full table scans.
- `render timechart` for time-series; `render piechart` for distribution.

**AVOID**

- `contains` on large tables (full scan, not indexed). Use `has` or `has_cs`.
- `join` without filtering both sides first (causes memory explosion).
- The `SELECT *` equivalent (projecting all columns) on wide tables.
- Omitting `bin()` in a time-series `summarize` (produces one row per unique timestamp).

| Symptom | Fix |
|---|---|
| az rest auth fails | Run az login first; ensure --resource "https://kusto.kusto.windows.net" is set |
| Empty results on valid table | Check database context; may need database("name").table |
| Query timeout | Add tighter time filter; check .show queries for competing queries |
| Forbidden (403) | Request viewer role on the KQL Database |
| Results truncated | Default limit is 500K rows; add set truncationmaxrecords = N; before query |
| KQL pipe (\|) breaks PowerShell or bash | Never inline KQL in --body. Write JSON to a temp file and use --body @file.json (see Running Queries) |
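For the truncation fix, the `set` statement is prepended to the query text inside the body file. A sketch, with `MyDB`, `BigTable`, and the 1000000 limit as placeholders:

```shell
# Raise the 500K-row truncation limit for a single query
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"set truncationmaxrecords = 1000000; BigTable | where Timestamp > ago(1d)"}
EOF

# POST it exactly like any other query:
#   az rest --method POST --url "${CLUSTER_URI}/v1/rest/query" \
#     --resource "https://kusto.kusto.windows.net" \
#     --headers "Content-Type=application/json" --body @/tmp/kql_body.json
```

The `set` option applies only to the request that carries it; subsequent queries fall back to the default limit.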
```shell
# 1. Set connection variables (after discovering URI via Step 1)
CLUSTER_URI="https://<your-cluster>.kusto.fabric.microsoft.com"
DB_NAME="SalesDB"

# 2. Discover tables
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

# 3. Explore schema
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show table Orders schema as json"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'

# 4. Sample data
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":"Orders | take 10"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
```kusto
// 5. Analytical query (via az rest --body @file)
Orders
| where OrderDate > ago(30d)
| summarize
    TotalOrders = count(),
    TotalRevenue = sum(Amount)
  by bin(OrderDate, 1d)
| render timechart
```
```kusto
// Query across KQL databases in the same Eventhouse
let orders = database("SalesDB").Orders | where OrderDate > ago(7d);
let products = database("CatalogDB").Products;
orders
| join kind=inner (products) on ProductId
| summarize Revenue = sum(Amount) by ProductName
| top 10 by Revenue desc
```
```shell
# Run query and save results to JSON
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1d) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json

# Convert to CSV with jq
cat results.json \
  | jq -r '.Tables[0] | (.Columns | map(.ColumnName)), (.Rows[]) | @csv' > results.csv
```
The Fabric KQL MCP server (`fabric-kql` in mcp-setup/mcp-config-template.json) can be used as an alternative to `az rest` for agent-integrated query execution.