airlayer — semantic SQL compiler: compile .view.yml schema definitions into dialect-specific SQL. A Unix-philosophy CLI designed as a tool-use interface for LLMs.
An in-process semantic engine that compiles .view.yml definitions into dialect-specific SQL — and optionally executes queries against real databases. Built in Rust as both a library and CLI tool.
```shell
bash <(curl -sSfL https://raw.githubusercontent.com/oxy-hq/airlayer/main/install_airlayer.sh)
```
Then initialize a project in an empty directory:
```shell
mkdir my-project && cd my-project
airlayer init
```
This connects to your database, discovers your schema, and generates config.yml, .view.yml files, and Claude Code sub-agents for querying and building your semantic layer.
Given an orders.view.yml:
```yaml
name: orders
table: public.orders
dialect: postgres
dimensions:
  - name: status
    type: string
    expr: status
measures:
  - name: total_revenue
    type: sum
    expr: amount
```
You can query it with the CLI as follows:
```shell
# add -x to execute against the database
airlayer query \
  --dimension orders.status \
  --measure orders.total_revenue \
  --filter orders.status:equals:active \
  --limit 10
```
This compiles to the following SQL, printed to stdout:
```sql
SELECT
  "orders".status AS "orders__status",
  SUM("orders".amount) AS "orders__total_revenue"
FROM public.orders AS "orders"
WHERE ("orders".status = 'active')
GROUP BY 1
LIMIT 10
```
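The shape of that compilation — `view__field` aliases, aggregate wrapping for measures, and the `dimension:operator:value` filter syntax — can be sketched independently of the real compiler. The following is an illustrative Python reconstruction, not airlayer's actual code; the function name and tuple layout are invented for the example:

```python
def compile_query(view, table, dimensions, measures, filters, limit):
    """Toy re-creation of the compilation shown above (illustrative only)."""
    cols = []
    for d in dimensions:
        # Dimensions pass through, aliased as view__field.
        cols.append(f'"{view}".{d} AS "{view}__{d}"')
    for name, agg, expr in measures:
        # Measures wrap their expression in the declared aggregate.
        cols.append(f'{agg.upper()}("{view}".{expr}) AS "{view}__{name}"')
    where = []
    for f in filters:
        field, op, value = f.split(":")  # e.g. "status:equals:active"
        if op == "equals":
            where.append(f'("{view}".{field} = \'{value}\')')
    sql = "SELECT\n  " + ",\n  ".join(cols)
    sql += f'\nFROM {table} AS "{view}"'
    if where:
        sql += "\nWHERE " + " AND ".join(where)
    sql += "\nGROUP BY 1"
    if limit:
        sql += f"\nLIMIT {limit}"
    return sql

print(compile_query("orders", "public.orders",
                    ["status"], [("total_revenue", "sum", "amount")],
                    ["status:equals:active"], 10))
```

The real compiler also handles quoting rules per dialect, joins, and richer filter operators; this sketch only covers the single-view case above.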
| Database | Dialect flag | Execution |
|---|---|---|
| PostgreSQL | postgres | Native (TCP) |
| MySQL | mysql | Native (TCP) |
| BigQuery | bigquery | REST API (OAuth2) |
| Snowflake | snowflake | REST API (session auth) |
| DuckDB | duckdb | In-process (libduckdb) |
| ClickHouse | clickhouse | HTTP API |
| Databricks | databricks | SQL Statement API |
| Redshift | redshift | Native (TCP, via Postgres wire protocol) |
| Presto / Trino | presto | REST API (polling) |
| SQLite | sqlite | In-process (libsqlite3) |
| MotherDuck | duckdb | In-process (md: protocol) |
| Domo | domo | REST API |
All dialects support SQL compilation. Execution requires the corresponding feature flag (e.g., exec-postgres). See docs/dialects.md for per-dialect SQL behavior.
airlayer can be used in two ways:
Project mode (CLI) — You have a directory with config.yml and .view.yml files (plus optional .motif.yml and .query.yml files). The config.yml file anchors the project: all CLI commands auto-detect the project root by walking up from the current directory until they find it. This means you can run commands from any subdirectory without specifying --config:
```shell
cd my-project/                                            # anywhere inside the project
airlayer query -x --measure orders.total_revenue          # just works
airlayer inspect --motifs                                 # just works
airlayer query queries/revenue_investigation.query.yml -x # just works
```
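The root-detection behaviour described above amounts to walking up the directory tree until a config.yml appears. A minimal sketch in Python, illustrative only and not airlayer's actual implementation:

```python
from pathlib import Path
from typing import Optional

def find_project_root(start: Path) -> Optional[Path]:
    """Walk upward from `start` until a directory containing config.yml is found."""
    for directory in (start, *start.parents):
        if (directory / "config.yml").is_file():
            return directory
    return None  # not inside a project
```

This only checks for the config.yml anchor the text describes; the real CLI may recognise additional project markers.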
Library mode (Python / JS / Rust) — You embed airlayer as a library and pass view definitions, motifs, and queries programmatically. No config.yml or filesystem structure is needed — everything is constructed in code. Available as a Python package and an npm package (WebAssembly).
```python
import airlayer

result = airlayer.compile(
    views_yaml=[open("orders.view.yml").read()],
    query_json='{"measures": ["orders.total_revenue"], "dimensions": ["orders.status"]}',
    dialect="postgres",
)
print(result["sql"])
```
```js
import init, { compile } from 'airlayer';

await init();
const result = compile(
  [ordersViewYaml],
  JSON.stringify({ measures: ['orders.total_revenue'], dimensions: ['orders.status'] }),
  'postgres'
);
console.log(result.sql);
```
This project uses just as a task runner. Install with cargo install just, then run just to see all available recipes.
```shell
just build                  # core only (no database drivers)
just build-all              # with all database drivers
just build-wasm             # WebAssembly package (output in pkg/)
just build-python           # Python package (dev install into current venv)
just build-python-release   # Python wheel (release)
just test                   # tier 1: unit tests + in-process integration (DuckDB, SQLite)
just test-docker            # tier 2: starts Docker DBs + runs tests
just test-cloud             # tier 3: Snowflake, BigQuery, Databricks, MotherDuck
just test-all               # all tiers
just lint                   # clippy lints
just fmt                    # format code
```
See docs/testing.md for the full three-tier testing strategy.