From axiom-cli: explores Axiom datasets via the CLI, covering schema discovery, sample data, volume trends, field distributions, stats, and error patterns. Use it when discovering a new dataset or investigating its structure.
Systematically explore an Axiom dataset to understand its structure, content, and potential use cases.
When invoked with a dataset name (e.g., /explore-dataset logs), the name is available as $ARGUMENTS.
If no dataset specified, list what's available:
```bash
axiom dataset list -f json
```
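As a sketch, the dispatch between these two entry points might look like this; the `first_command` helper is hypothetical, not part of the axiom CLI:

```shell
# Hypothetical helper: choose the first command to run, given the
# slash-command argument ($1). Mirrors the logic described above.
first_command() {
  if [ -z "$1" ]; then
    # No dataset specified: list what's available
    echo "axiom dataset list -f json"
  else
    # Dataset given: start with schema discovery
    echo "axiom query \"['$1'] | getschema\" --start-time -1h"
  fi
}

first_command ""      # prints: axiom dataset list -f json
first_command "logs"
```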
Always start here. Discover actual field names and types:
```bash
axiom query "['<dataset>'] | getschema" --start-time -1h
```
Identify:

- OTel trace data: if the schema contains `trace_id`, `span_id`, and `attributes.*`, note that:
  - resource attributes are flattened: use `['service.name']`, not `['resource.service.name']`
  - custom attributes are accessed as `['attributes.custom']['field']`; wrap them in `tostring()` for aggregations
  - see the axiom-apl skill's OTel reference for field mappings

Examine actual values:
```bash
axiom query "['<dataset>'] | limit 10" --start-time -1h -f json
```
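If the schema flagged OTel trace data, a grouped aggregation over a flattened custom attribute could look like this; the dataset name `traces` and attribute `tenant` are placeholders, not taken from any real schema:

```shell
# Placeholder names: 'traces' (dataset) and 'tenant' (custom attribute).
# tostring() casts the attribute so it can serve as a group-by key.
QUERY="['traces'] | summarize count() by tostring(['attributes.custom']['tenant'])"
echo "axiom query \"$QUERY\" --start-time -1h"
```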
Look for:
Understand data volume patterns:
```bash
axiom query "['<dataset>'] | summarize count() by bin(_time, 1h) | sort by _time asc" --start-time -24h
```
Analyze:
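The summary template below also asks for data freshness; a minimal freshness check might look like this, where the dataset name `logs` is a placeholder:

```shell
# Placeholder dataset 'logs': when did the newest event arrive?
FRESHNESS_QUERY="['logs'] | summarize max(_time)"
echo "axiom query \"$FRESHNESS_QUERY\" --start-time -24h"
```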
For each key categorical field (status, level, service):
```bash
axiom query "['<dataset>'] | summarize count() by <field> | top 20 by count_" --start-time -1h
```
Identify:
For numeric fields (duration, bytes, count):
```bash
axiom query "['<dataset>'] | summarize count(), min(<field>), max(<field>), avg(<field>), percentiles(<field>, 50, 95, 99)" --start-time -1h
```
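To go one step further, the same statistics can be broken down by a categorical dimension; the field names `duration` and `service.name` here are assumptions for illustration, not confirmed by any schema:

```shell
# Assumed fields: 'duration' (numeric) and 'service.name' (categorical).
# Groups latency percentiles per service instead of one global figure.
LATENCY_QUERY="['logs'] | summarize percentiles(['duration'], 50, 95, 99) by ['service.name']"
echo "axiom query \"$LATENCY_QUERY\" --start-time -1h"
```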
Search for error indicators:
```bash
axiom query "search in (['<dataset>']) 'error' or 'fail' or 'exception' | limit 20" --start-time -1h
```
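Once `search` has revealed where errors live, the full-text scan can be replaced with a cheaper field filter; the `level` field and `error` value below are assumptions for illustration:

```shell
# Assumed: errors are tagged with level == 'error'. A field filter is far
# cheaper than a full-text search across the whole dataset.
ERROR_QUERY="['logs'] | where ['level'] == 'error' | summarize count() by bin(_time, 1h)"
echo "axiom query \"$ERROR_QUERY\" --start-time -24h"
```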
Provide a summary including:
```markdown
## Dataset Summary: <name>

### Purpose
<What system generated this data, what it represents>

### Key Fields
| Field | Type | Description |
|-------|------|-------------|
| ... | ... | ... |

### Volume
- Events per hour: ~X
- Data freshness: last event at X

### Key Dimensions
- `status`: 200, 400, 500, ...
- `service.name`: api, web, worker, ...

### Recommended Queries
<Common queries for this dataset>

### Monitoring Opportunities
<What could be alerted on>
```
Tips:

- Use `getschema` directly for single field lookups
- Use full-text search (`search`) sparingly; extract patterns, then optimize into field filters
- For query syntax, invoke the axiom-apl skill, which provides comprehensive documentation on operators, functions, and patterns.