From omni-analytics
Convert an Omni Analytics topic into a Snowflake Semantic View YAML definition. Use this skill whenever someone wants to export Omni metrics to Snowflake, create a Semantic View from an Omni topic, harden BI metrics into the warehouse, or bridge Omni's semantic layer with Snowflake Cortex Analyst.
npx claudepluginhub exploreomni/omni-agent-skills --plugin omni-integrations

This skill uses the workspace's default tool permissions.
Converts an Omni topic into a Snowflake Semantic View YAML definition by first exploring the Omni model via API, then translating its definitions into the Snowflake Semantic View format.
# Verify the Omni CLI is installed — if not, ask the user to install it
# See: https://github.com/exploreomni/cli#readme
command -v omni >/dev/null || echo "ERROR: Omni CLI is not installed."
# Show available profiles and select the appropriate one
omni config show
# If multiple profiles exist, ask the user which to use, then switch:
omni config use <profile-name>
Tip: Use `-o json` to force structured output for programmatic parsing, or `-o human` for readable tables. The default is `auto` (human in a TTY, JSON when piped).
Before starting, determine which environment you are running in — this controls how SQL is executed against Snowflake in Step 7.
Check for the presence of the built-in Snowflake SQL execution capability:
If the agent can execute a SQL statement like `SELECT CURRENT_USER()` directly without any shell command → you are in Cortex Code.
Otherwise → you are in an external environment.
If unsure, ask the user: "Are you running this inside Snowflake (Cortex Code / Notebooks), or in an external tool like Claude Code or Cursor?"
For Claude Code, Cursor, or any terminal-based agent, choose one of the following. Check availability in order:
Snowflake CLI (`snow`) ✅ Recommended

# Check if available
command -v snow
# Configure a connection (first time)
snow connection add
# Execute SQL
snow sql -q "SELECT CURRENT_USER();" --connection <connection_name>
Set a default connection to avoid repeating --connection on every call:
snow connection set-default <connection_name>
SnowSQL

# Check if available
command -v snowsql
# Execute SQL
snowsql -a <account> -u <user> -q "SELECT CURRENT_USER();"
Python connector (`snowflake-connector-python`)

pip install snowflake-connector-python
import snowflake.connector
conn = snowflake.connector.connect(
account="<account>",
user="<user>",
password="<password>", # or use key-pair / SSO
warehouse="<warehouse>",
database="<database>",
schema="<schema>",
)
conn.cursor().execute("<SQL here>")
✋ STOP — Confirm the Snowflake connection method with the user before proceeding to Step 7. Record which method is being used so Step 7 generates the correct execution command.
Ask the user:
⚠️ STOP — Confirm all four answers before proceeding.
omni models list --modelkind SHARED
Identify the Shared Model and note its id. Always prefer the Shared Model over Schema or Workbook models.
omni models yaml-get <modelId> --filename <topic_name>.topic
omni models yaml-get <modelId> --filename relationships
For every view in base_view and joins, fetch its YAML:
omni models yaml-get <modelId> --filename <view_name>.view
If a view is prefixed with `omni_dbt_`, fetch the file that also starts with `omni_dbt_` (e.g. `omni_dbt_ecomm__order_items.view`).
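If you prefer to script the per-view fetches rather than run them one at a time, a minimal sketch that shells out to the same CLI command (assumes the Omni CLI is installed and the right profile is already selected):

```python
import subprocess

# Minimal sketch: fetch the YAML for every view referenced by the topic.
def fetch_view_yaml(model_id: str, view_name: str) -> str:
    result = subprocess.run(
        ["omni", "models", "yaml-get", model_id, "--filename", f"{view_name}.view"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# views collected from the topic's base_view and joins
for view in ["ecomm__order_items", "ecomm__users", "ecomm__inventory_items"]:
    yaml_text = fetch_view_yaml("<modelId>", view)
```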
The base_view in the topic file is the primary table. Convert view names to Snowflake table references:
| base_view / join value | Snowflake table |
|---|---|
| ecomm__order_items | ECOMM.ORDER_ITEMS |
| omni_dbt_ecomm__order_items | ECOMM.ORDER_ITEMS (strip omni_dbt_) |
The __ separator maps to schema (left) and table name (right). If the schema does not exist in Snowflake, skip that table entirely.
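A minimal sketch of that name mapping, assuming every physical view follows the `<schema>__<table>` convention described above:

```python
# Minimal sketch of the view-name -> Snowflake table mapping described above.
def omni_view_to_snowflake_table(view_name: str):
    # Strip the dbt prefix if present (omni_dbt_ecomm__order_items -> ecomm__order_items)
    if view_name.startswith("omni_dbt_"):
        view_name = view_name[len("omni_dbt_"):]
    # Split on the first double underscore: left = schema, right = table
    if "__" not in view_name:
        return None  # no schema separator, likely a derived view, skip it
    schema, table = view_name.split("__", 1)
    return f"{schema.upper()}.{table.upper()}"

print(omni_view_to_snowflake_table("omni_dbt_ecomm__order_items"))  # ECOMM.ORDER_ITEMS
```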
The joins parameter in the topic uses indentation to define the join chain — a table indented beneath another joins into its parent:
joins:
  user_order_facts: {}               # skip — this is a derived CTE, not a physical table
  ecomm__users: {}                   # joins to base_view (ORDER_ITEMS)
  ecomm__inventory_items:            # joins to base_view (ORDER_ITEMS)
    ecomm__products:                 # joins to INVENTORY_ITEMS
      demo__product_images: {}       # joins to PRODUCTS
      ecomm__distribution_centers: {} # joins to PRODUCTS
Skip any view that is a derived table (CTE defined in SQL in Omni). These have no physical Snowflake table to reference.
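If the topic YAML is parsed (for example with PyYAML) so that `joins` becomes a dict of dicts mirroring the indentation above, each join's parent can be derived mechanically. A minimal sketch; derived views such as `user_order_facts` still need to be filtered out separately:

```python
# Sketch: flatten the nested joins block into (child, parent) pairs.
def flatten_joins(joins: dict, parent: str) -> list:
    pairs = []
    for child, nested in (joins or {}).items():
        pairs.append((child, parent))                      # child joins into its parent
        pairs.extend(flatten_joins(nested or {}, child))   # recurse into deeper joins
    return pairs

joins = {
    "user_order_facts": {},
    "ecomm__users": {},
    "ecomm__inventory_items": {
        "ecomm__products": {
            "demo__product_images": {},
            "ecomm__distribution_centers": {},
        }
    },
}
for child, parent in flatten_joins(joins, "ecomm__order_items"):
    print(child, "->", parent)
```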
In each view file, find the dimension with primary_key: true — this becomes the unique: true dimension in the Semantic View.
✋ STOP — Confirm the table list with the user before continuing.
The topic's fields parameter controls which fields from the views are included in the Semantic View.
| Syntax | Meaning |
|---|---|
| (no fields parameter) | Include all fields from all views |
| all_views.* | Include all fields from all views |
| view.* | Include all fields in the named view |
| tag:<value> | Include all fields tagged with this value |
| view.field | Include this specific field |
| -view.field | Exclude this specific field |
Exclusions (prefixed with `-`) must be applied after all inclusions are resolved. The process is:

1. Go through `fields` in order and apply inclusions (`view.*`, `view.field`, `tag:x`) → add matching fields to the set
2. Apply exclusions (`-view.field`) → remove that field from the set, even if it was added by a wildcard

Example:
fields:
- ecomm__order_items.* # include all order_items fields
- ecomm__users.country # include this one users field
- -ecomm__order_items.cost # remove cost — excluded even though * was used above
- -ecomm__order_items.raw_json # remove raw_json — same reason
Result: all order_items fields except cost and raw_json, plus users.country.
⚠️ Critical: A `-` exclusion always wins. Never include a field that has been explicitly excluded, regardless of what wildcard included it.
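The resolution can be expressed as a small set operation. A minimal sketch, assuming a field catalog has already been built from the view YAML fetched in Step 2 (the `all_fields` and `field_tags` inputs below are hypothetical names):

```python
# Sketch of the include/exclude resolution. `all_fields` (view -> field names) and
# `field_tags` (tag -> ["view.field", ...]) are hypothetical inputs built from the view YAML.
def resolve_fields(field_specs, all_fields, field_tags):
    included, excluded = set(), set()
    if not field_specs:  # no fields parameter: everything is included
        return {f"{v}.{f}" for v, fields in all_fields.items() for f in fields}
    for spec in field_specs:
        if spec.startswith("-"):
            excluded.add(spec[1:])                     # exclusions collected, applied last
        elif spec == "all_views.*":
            included.update(f"{v}.{f}" for v, fs in all_fields.items() for f in fs)
        elif spec.startswith("tag:"):
            included.update(field_tags.get(spec[4:], []))
        elif spec.endswith(".*"):
            view = spec[:-2]
            included.update(f"{view}.{f}" for f in all_fields.get(view, []))
        else:
            included.add(spec)
    return included - excluded                         # an exclusion always wins
```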
Using the join hierarchy from Step 3 and the relationships.yaml fetched in Step 2c, map each join to a Snowflake Semantic View relationship.
Each entry in relationships.yaml looks like:
- join_from_view: ecomm__order_items
join_to_view: ecomm__inventory_items
join_type: always_left
on_sql: ${ecomm__order_items.inventory_item_id} = ${ecomm__inventory_items.id}
relationship_type: assumed_many_to_one
The on_sql field tells you the join columns. Extract the column names to populate relationship_columns in the output.
Available relationship parameters:
| Parameter | Description |
|---|---|
| join_from_view | Source view the join originates from |
| join_to_view | Target view being joined to |
| join_type | SQL join type (e.g. always_left) |
| on_sql | SQL condition — extract column names from this |
| relationship_type | Cardinality (e.g. assumed_many_to_one) |
| reversible | Whether the join is bi-directional |
| where_sql | Additional WHERE clause applied when the join is active |
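To populate `relationship_columns`, the `${view.field}` references in `on_sql` can be pulled out with a small helper. A minimal sketch that only handles single-condition equality joins like the example above:

```python
import re

# Sketch: pull the join columns out of an on_sql condition such as
# "${ecomm__order_items.inventory_item_id} = ${ecomm__inventory_items.id}".
FIELD_REF = re.compile(r"\$\{([a-zA-Z0-9_]+)\.([a-zA-Z0-9_]+)\}")

def relationship_columns(on_sql: str, join_from_view: str, join_to_view: str):
    refs = {view: column for view, column in FIELD_REF.findall(on_sql)}
    return {
        "left_column": refs[join_from_view].upper(),   # column on the joining table
        "right_column": refs[join_to_view].upper(),    # column on the joined table
    }

print(relationship_columns(
    "${ecomm__order_items.inventory_item_id} = ${ecomm__inventory_items.id}",
    "ecomm__order_items", "ecomm__inventory_items",
))  # {'left_column': 'INVENTORY_ITEM_ID', 'right_column': 'ID'}
```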
For each view in the resolved field list, translate its Omni field definitions into Semantic View entries.
⚠️ Only translate fields that survived the Step 4 inclusion/exclusion resolution. Do not add fields that were excluded.
`dimensions` or `time_dimensions`

The field name becomes the dimension name unless a `label` is defined. Carry `description` and `synonyms` over directly.
Standard dimension:
# Omni view YAML
city:
sql: '"CITY"'
label: City
description: Customer's city
# Semantic View output
- name: city
expr: CITY
description: Customer's city
data_type: TEXT
Date/timestamp dimension → use time_dimensions:
# Omni
created_at:
sql: '"CREATED_AT"'
type: time
label: Created At
# Semantic View output
time_dimensions:
- name: created_at
expr: CREATED_AT
data_type: TIMESTAMP
Group dimension → translate to a CASE WHEN expression:
# Omni
device_type_groups:
sql: ${device_type}
label: Device Type Groups
groups:
- filter:
is: [ mobile, tablet ]
name: Handheld
- filter:
is: desktop
name: Desktop
else: Other
-- expr value
CASE
WHEN "DEVICE_TYPE" IN ('mobile', 'tablet') THEN 'Handheld'
WHEN "DEVICE_TYPE" = 'desktop' THEN 'Desktop'
ELSE 'Other'
END
Bin dimension → translate to a CASE WHEN range expression:
# Omni
age_bin:
sql: ${age}
bin_boundaries: [ 18, 35, 50, 65 ]
-- expr value
CASE
WHEN "AGE" < 18 THEN 'below 18'
WHEN "AGE" >= 18 AND "AGE" < 35 THEN '>= 18 and < 35'
WHEN "AGE" >= 35 AND "AGE" < 50 THEN '>= 35 and < 50'
WHEN "AGE" >= 50 AND "AGE" < 65 THEN '>= 50 and < 65'
WHEN "AGE" >= 65 THEN '65 and above'
ELSE NULL
END
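The same rule generalizes to any `bin_boundaries` list. A minimal sketch of the translation, assuming the underlying column has already been resolved to its quoted Snowflake name:

```python
# Sketch: generate the CASE expression for a bin dimension from its boundaries.
def bin_case_expr(column: str, boundaries: list) -> str:
    lines = [f"WHEN {column} < {boundaries[0]} THEN 'below {boundaries[0]}'"]
    for lo, hi in zip(boundaries, boundaries[1:]):
        lines.append(f"WHEN {column} >= {lo} AND {column} < {hi} THEN '>= {lo} and < {hi}'")
    lines.append(f"WHEN {column} >= {boundaries[-1]} THEN '{boundaries[-1]} and above'")
    lines.append("ELSE NULL")
    return "CASE\n  " + "\n  ".join(lines) + "\nEND"

print(bin_case_expr('"AGE"', [18, 35, 50, 65]))  # reproduces the expression above
```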
Duration dimension → translate to a TIMESTAMPDIFF expression:
# Omni
fulfillment_days:
duration:
sql_start: ${created_at[date]}
sql_end: ${delivered_at[date]}
intervals: [ days ]
-- expr value (days)
CASE
WHEN TIMESTAMPADD(DAY, 1 * TIMESTAMPDIFF(DAY, DATE_TRUNC('DAY', "CREATED_AT"), DATE_TRUNC('DAY', "DELIVERED_AT")), DATE_TRUNC('DAY', "CREATED_AT"))
<= DATE_TRUNC('DAY', "DELIVERED_AT")
THEN TIMESTAMPDIFF(DAY, DATE_TRUNC('DAY', "CREATED_AT"), DATE_TRUNC('DAY', "DELIVERED_AT"))
ELSE TIMESTAMPDIFF(DAY, DATE_TRUNC('DAY', "CREATED_AT"), DATE_TRUNC('DAY', "DELIVERED_AT")) - 1
END
Boolean dimension → becomes a named filter (not a dimension):
# Omni
is_returned:
sql: '"IS_RETURNED"'
description: Whether the item was returned
completed_orders:
sql: ${status} = 'Complete'
label: Completed Orders
# Semantic View output — goes in filters, not dimensions
filters:
- name: is_returned
expr: IS_RETURNED
description: Whether the item was returned
- name: completed_orders
expr: STATUS = 'Complete'
description: Completed Orders
`metrics`

The `sql` field references the source column and `aggregate_type` defines the aggregation.
Standard measure:
# Omni
total_sale_price:
sql: ${sale_price}
aggregate_type: sum
label: Total Sale Price
description: Total revenue of orders
synonyms: [ Total Revenue, Total Receipts ]
# Semantic View output
metrics:
- name: total_sale_price
expr: COALESCE(SUM("SALE_PRICE"), 0)
description: Total revenue of orders
synonyms: [ Total Revenue, Total Receipts ]
Derived measure (no aggregate_type) — references other measures:
# Omni
gross_margin:
sql: ${total_sale_price} - ${total_cost}
label: Gross Margin
# Semantic View output — top-level derived metric
metrics:
- name: gross_margin
expr: total_sale_price - total_cost
Filtered measure → wraps in CASE WHEN:
# Omni
california_revenue:
sql: ${sale_price}
aggregate_type: sum
filters:
users.state:
is: California
-- expr value
COALESCE(SUM(CASE WHEN "users"."STATE" = 'California' THEN "SALE_PRICE" ELSE NULL END), 0)
Array and boolean filter variants:
# is: [New York, New Jersey] → "STATE" IN ('New York', 'New Jersey')
# is: true → field IS TRUE
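A minimal sketch of turning those `is:` filter variants into the predicate that goes inside the CASE WHEN wrapper (only the scalar, list, and boolean forms shown above are handled):

```python
# Sketch: build the SQL predicate for an Omni measure filter value.
def filter_predicate(column: str, value) -> str:
    if isinstance(value, bool):
        return f"{column} IS {'TRUE' if value else 'FALSE'}"
    if isinstance(value, list):
        quoted = ", ".join(f"'{v}'" for v in value)
        return f"{column} IN ({quoted})"
    return f"{column} = '{value}'"

print(filter_predicate('"users"."STATE"', "California"))
# "users"."STATE" = 'California'
print(filter_predicate('"STATE"', ["New York", "New Jersey"]))
# "STATE" IN ('New York', 'New Jersey')
```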
`module_custom_instructions`

If the topic has an `ai_context` parameter, include it as:
module_custom_instructions:
question_categorization: <ai_context value>
`verified_queries`

Convert each entry under `sample_queries` into a SQL statement for `verified_queries`. Use the `prompt` as the question and the `description` as context for writing the SQL.
✋ STOP — Review all dimensions, measures, and relationships with the user before generating the final output.
name: <name>
description: <string>
tables:
  - name: <name>
    description: <string>
    base_table:
      database: <database>
      schema: <schema>
      table: <table name>
    dimensions:
      - name: <name>
        synonyms: [ <string>, ... ]
        description: <string>
        expr: <SQL expression>
        data_type: <data type>
        unique: <boolean>
    time_dimensions:
      - name: <name>
        synonyms: [ <string>, ... ]
        description: <string>
        expr: <SQL expression>
        data_type: <data type>
    facts:
      - name: <name>
        synonyms: [ <string>, ... ]
        description: <string>
        expr: <SQL expression>
        data_type: <data type>
    metrics:
      - name: <name>
        synonyms: [ <string>, ... ]
        description: <string>
        expr: <SQL expression>
    filters:
      - name: <name>
        synonyms: [ <string>, ... ]
        description: <string>
        expr: <SQL expression>
relationships:
  - name: <string>
    left_table: <table>
    right_table: <table>
    relationship_columns:
      - left_column: <column>
        right_column: <column>
metrics:
  - name: <name>
    synonyms: [ <string>, ... ]
    description: <string>
    expr: <SQL expression>
verified_queries:
  - name: <string>
    question: <string>
    sql: <string>
    use_as_onboarding_question: <boolean>
Valid top-level keys only: `name`, `description`, `tables`, `relationships`, `metrics`, `verified_queries`, `module_custom_instructions`.
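Before executing, it is worth sanity-checking the generated document against that list. A minimal sketch using PyYAML (assumed to be installed):

```python
import yaml  # PyYAML, assumed available in the environment

# Sketch: flag any unexpected top-level keys before calling the CREATE procedure.
ALLOWED_TOP_LEVEL = {
    "name", "description", "tables", "relationships",
    "metrics", "verified_queries", "module_custom_instructions",
}

def check_top_level_keys(yaml_text: str) -> set:
    doc = yaml.safe_load(yaml_text)
    return set(doc) - ALLOWED_TOP_LEVEL   # any keys returned here are invalid

# Usage: unexpected = check_top_level_keys(open("semantic_view.yaml").read())
```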
The SQL is the same regardless of environment. The difference is how it is executed.
SQL to run:
CALL SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML('<database>.<schema>', $$
<yaml content here>
$$);
Execution by environment:
| Environment | Command |
|---|---|
| Cortex Code | Execute the SQL directly in the active Snowflake session |
| snow CLI | `snow sql -q "CALL SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML(...)"` or write the YAML to a temp file and pipe it |
| SnowSQL | `snowsql -a <account> -u <user> -f <sql_file.sql>` |
| Python | `conn.cursor().execute("<sql>")` |
Tip for CLI environments: If the YAML is long, write it to a temporary `.sql` file first and execute the file rather than passing it inline — this avoids shell escaping issues with the `$$` dollar-quoting.
# snow CLI example with a file
snow sql -f /tmp/create_semantic_view.sql
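If the YAML was assembled programmatically, the temp file can also be written from Python rather than the shell. A minimal sketch (the output path is just an example):

```python
from pathlib import Path

# Sketch: wrap the generated YAML in the dollar-quoted CALL and write it to a file
# so it can be executed with `snow sql -f` without shell-escaping issues.
def write_create_call(yaml_text: str, target: str,
                      path: str = "/tmp/create_semantic_view.sql") -> str:
    sql = f"CALL SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML('{target}', $$\n{yaml_text}\n$$);\n"
    Path(path).write_text(sql)
    return path

# Usage: write_create_call(generated_yaml, "<database>.<schema>")
```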
GRANT SELECT ON SEMANTIC VIEW <database>.<schema>.<name> TO ROLE <role>;
Execute this the same way as the CREATE call above, using whichever connection method was established during the Runtime Environment Detection step.