# wire
This skill enables Claude Code to create and modify LookML content (views, explores, models) directly in the local filesystem. Claude Code works with LookML projects stored in git repositories, typically in a `/looker` directory structure. The user provides specifications, schema information, and requirements; Claude Code generates properly formatted, validated LookML files.
```bash
npx claudepluginhub rittmananalytics/wire-plugin --plugin wire
```

This skill uses the workspace's default tool permissions.
Claude Code operates entirely on the local filesystem:
| Operation | Approach |
|---|---|
| Read existing LookML files | `cat`, `view` filesystem commands |
| Write new LookML files | `write`, `create` filesystem commands |
| Modify existing files | `edit`, `str_replace` commands |
| Understand schema | User-provided spec files, YAML configs, or documentation |
| Validate syntax | LookML linting rules applied during generation |
| Version control | Standard git operations |
Note: Claude Code does NOT have access to:
- A live Looker instance or the Looker API
- Direct database connections for schema introspection

All schema information must be provided by the user via spec files, documentation, or direct instruction.
LookML projects typically follow this structure within the repository:

```
/looker/
├── manifest.lkml                      # Project manifest
├── models/
│   ├── analytics.model.lkml           # Model definitions
│   └── marketing.model.lkml
├── views/
│   ├── core/                          # Core/shared views
│   │   ├── dim_customer.view.lkml
│   │   └── dim_product.view.lkml
│   ├── staging/                       # Staging layer views
│   │   └── stg_orders.view.lkml
│   └── marts/                         # Business logic views
│       └── fct_orders.view.lkml
├── explores/                          # Explore definitions (optional)
│   └── orders.explore.lkml
├── dashboards/                        # LookML dashboards
│   └── executive_summary.dashboard.lkml
└── docs/                              # Documentation and specs
    ├── schema.yml                     # Schema definitions
    └── field_specs.md                 # Field specifications
```
Activate this skill when the user requests creation or modification of LookML views, explores, models, or dashboards.
Every view must connect to a real table or derived query. Never use placeholder or mock data.

❌ FORBIDDEN PATTERN:

```lkml
view: employee_pto {
  derived_table: {
    sql:
      SELECT 'Alice' as name, 5 as days
      UNION ALL
      SELECT 'Bob' as name, 3 as days
    ;;
  }
}
```

✅ REQUIRED PATTERN:

```lkml
view: employee_pto {
  sql_table_name: `project_id.dataset.employee_pto` ;;
}
```
Column names in LookML must exactly match the source table columns (case-sensitive for most databases). Always verify column names from the provided schema.
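This check is easy to script. The sketch below is a hypothetical helper (the column list mirrors the `employee_pto` schema used throughout this document) that distinguishes a genuinely missing column from a case mismatch:

```python
# Hypothetical helper: verify a LookML column reference against the schema,
# case-sensitively, reporting case mismatches separately from missing columns.
SCHEMA_COLUMNS = ["First_name", "Last_name", "email",
                  "Start_date", "End_date", "Days", "Type"]

def check_column(ref: str) -> str:
    """Return 'ok', 'case-mismatch (...)', or 'missing' for a reference."""
    if ref in SCHEMA_COLUMNS:
        return "ok"
    by_lower = {c.lower(): c for c in SCHEMA_COLUMNS}
    if ref.lower() in by_lower:
        return f"case-mismatch (expected {by_lower[ref.lower()]})"
    return "missing"

print(check_column("First_name"))   # → ok
print(check_column("first_name"))   # → case-mismatch (expected First_name)
print(check_column("pto_type"))     # → missing
```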
Before creating new files, examine existing LookML in the project to understand established conventions, including naming prefixes (e.g., `dim_`, `fct_`).

Ensure all generated LookML:
- Has balanced braces `{}`
- Declares a `type:` for dimensions
- Terminates SQL blocks with `;;`

Add comments explaining non-obvious business logic.
Claude Code relies on user-provided information for schema details:
YAML or JSON files describing tables and columns:
```yaml
# schema.yml
tables:
  - name: employee_pto
    database: ra-development
    schema: analytics_seed
    columns:
      - name: First_name
        type: STRING
        description: Employee's first name
      - name: Last_name
        type: STRING
        description: Employee's last name
      - name: email
        type: STRING
        description: Employee email address
      - name: Start_date
        type: DATE
        description: PTO start date
      - name: End_date
        type: DATE
        description: PTO end date
      - name: Days
        type: FLOAT64
        description: Number of PTO days
      - name: Type
        type: STRING
        description: Type of PTO (vacation, sick, etc.)
```
Convert dbt schema.yml to LookML:
```yaml
# dbt schema.yml
models:
  - name: stg_orders
    description: Staged orders data
    columns:
      - name: order_id
        description: Primary key
        tests:
          - unique
          - not_null
      - name: customer_id
        description: Foreign key to customers
      - name: order_date
        description: Date order was placed
      - name: total_amount
        description: Order total in USD
```
User provides table details in natural language or structured format within the conversation.
Examine existing views to understand patterns and relationships.
For every LookML request, extract the source table, the fields required, and the target location within the project (e.g., under `/looker/`).

```bash
# List project structure
find /looker -name "*.lkml" | head -20

# Read the model file to understand existing setup
cat /looker/models/analytics.model.lkml

# Study existing view patterns
head -100 /looker/views/core/dim_customer.view.lkml
```
Key things to identify:
- The connection name
- Include patterns (e.g., `include: "/views/**/*.view"`)
- Naming and layout conventions

Extract from user-provided specs:
```python
# From schema.yml or user instructions, identify:
table_name = "employee_pto"
full_table_path = "`ra-development.analytics_seed.employee_pto`"
columns = [
    {"name": "First_name", "type": "STRING"},
    {"name": "Last_name", "type": "STRING"},
    {"name": "email", "type": "STRING"},
    {"name": "Start_date", "type": "DATE"},
    {"name": "End_date", "type": "DATE"},
    {"name": "Days", "type": "FLOAT64"},
    {"name": "Type", "type": "STRING"},
]
```
Use `sql_table_name` for direct table access:

```lkml
view: employee_pto {
  sql_table_name: `ra-development.analytics_seed.employee_pto` ;;
}
```

Use `derived_table` for transformations:

```lkml
view: employee_pto_summary {
  derived_table: {
    sql:
      SELECT
        email,
        SUM(Days) AS total_days
      FROM `ra-development.analytics_seed.employee_pto`
      GROUP BY 1
    ;;
  }
}
```
| Source Type | LookML Type | Notes |
|---|---|---|
| STRING, VARCHAR | `type: string` | |
| INT64, INTEGER | `type: number` | |
| FLOAT64, NUMERIC | `type: number` | Add `value_format` |
| DATE | `type: time` with `datatype: date` | Use `dimension_group` |
| TIMESTAMP, DATETIME | `type: time` with `datatype: timestamp` | Use `dimension_group` |
| BOOLEAN | `type: yesno` | |
| ARRAY | `type: string` | Use `ARRAY_TO_STRING()` |
| STRUCT | Access with dot notation | `${TABLE}.struct.field` |
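The mapping above can be encoded as a lookup table. This sketch mirrors it for BigQuery-style type names; the fallback-to-string behavior for unlisted types is an assumption:

```python
# Source-type → LookML dimension parameters, mirroring the mapping table above.
LOOKML_TYPE_MAP = {
    "STRING":    {"type": "string"},
    "VARCHAR":   {"type": "string"},
    "INT64":     {"type": "number"},
    "INTEGER":   {"type": "number"},
    "FLOAT64":   {"type": "number", "value_format_name": "decimal_2"},
    "NUMERIC":   {"type": "number", "value_format_name": "decimal_2"},
    "DATE":      {"type": "time", "datatype": "date"},       # use dimension_group
    "TIMESTAMP": {"type": "time", "datatype": "timestamp"},  # use dimension_group
    "DATETIME":  {"type": "time", "datatype": "timestamp"},
    "BOOLEAN":   {"type": "yesno"},
}

def lookml_params(source_type: str) -> dict:
    # Assumption: unrecognized types (ARRAY, STRUCT, GEOGRAPHY, ...) fall back
    # to string and are handled case by case.
    return LOOKML_TYPE_MAP.get(source_type.upper(), {"type": "string"})
```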
Write complete, properly formatted LookML:

```lkml
view: employee_pto {
  sql_table_name: `ra-development.analytics_seed.employee_pto` ;;

  # =============================================================================
  # PRIMARY KEY
  # =============================================================================

  dimension: pto_id {
    primary_key: yes
    type: string
    sql: CONCAT(${TABLE}.email, '-', CAST(${TABLE}.Start_date AS STRING)) ;;
    hidden: yes
    description: "Composite key: email + start date"
  }

  # =============================================================================
  # DIMENSIONS - STRING
  # =============================================================================

  dimension: first_name {
    type: string
    label: "First Name"
    sql: ${TABLE}.First_name ;;
    group_label: "Employee Details"
  }

  dimension: last_name {
    type: string
    label: "Last Name"
    sql: ${TABLE}.Last_name ;;
    group_label: "Employee Details"
  }

  dimension: employee_name {
    type: string
    label: "Employee Name"
    sql: CONCAT(${TABLE}.First_name, ' ', ${TABLE}.Last_name) ;;
    group_label: "Employee Details"
  }

  dimension: email {
    type: string
    sql: ${TABLE}.email ;;
    group_label: "Employee Details"
  }

  dimension: pto_type {
    type: string
    label: "PTO Type"
    sql: ${TABLE}.Type ;;
    description: "Category of time off: vacation, sick, personal, etc."
  }

  # =============================================================================
  # DIMENSIONS - DATE/TIME
  # =============================================================================

  dimension_group: pto_start {
    type: time
    label: "PTO Start"
    timeframes: [raw, date, week, month, quarter, year]
    convert_tz: no
    datatype: date
    sql: ${TABLE}.Start_date ;;
  }

  dimension_group: pto_end {
    type: time
    label: "PTO End"
    timeframes: [raw, date, week, month, quarter, year]
    convert_tz: no
    datatype: date
    sql: ${TABLE}.End_date ;;
  }

  # =============================================================================
  # DIMENSIONS - NUMERIC
  # =============================================================================

  dimension: pto_days {
    type: number
    label: "PTO Days"
    sql: ${TABLE}.Days ;;
    value_format_name: decimal_1
    description: "Number of days for this PTO request"
  }

  # =============================================================================
  # DIMENSIONS - DERIVED/CALCULATED
  # =============================================================================

  dimension: is_extended_leave {
    type: yesno
    label: "Extended Leave (5+ Days)"
    sql: ${pto_days} >= 5 ;;
    description: "Flag for PTO requests of 5 or more days"
  }

  dimension: pto_days_tier {
    type: tier
    label: "PTO Days Tier"
    tiers: [1, 3, 5, 10]
    style: integer
    sql: ${pto_days} ;;
  }

  # =============================================================================
  # MEASURES
  # =============================================================================

  measure: count {
    type: count
    label: "PTO Request Count"
    drill_fields: [detail*]
  }

  measure: total_pto_days {
    type: sum
    label: "Total PTO Days"
    sql: ${pto_days} ;;
    value_format_name: decimal_1
  }

  measure: average_pto_days {
    type: average
    label: "Average PTO Days"
    sql: ${pto_days} ;;
    value_format_name: decimal_2
  }

  measure: employee_count {
    type: count_distinct
    label: "Employee Count"
    sql: ${email} ;;
    description: "Distinct count of employees with PTO"
  }

  # =============================================================================
  # DRILL SETS
  # =============================================================================

  set: detail {
    fields: [
      employee_name,
      email,
      pto_start_date,
      pto_end_date,
      pto_days,
      pto_type
    ]
  }
}
```
Add the view to an explore in the model:

```lkml
# In models/analytics.model.lkml
connection: "ra_dw_prod"
include: "/views/**/*.view.lkml"

# Add new explore
explore: employee_pto {
  label: "Employee PTO"
  group_label: "HR Analytics"
  description: "Employee paid time off tracking and analysis"

  # Join to employee dimension if available
  join: employees_dim {
    type: left_outer
    relationship: many_to_one
    sql_on: ${employee_pto.email} = ${employees_dim.email} ;;
  }
}
```
Before finalizing any LookML file, verify:
- All braces `{}` are balanced
- Every SQL block ends with `;;`
- Every dimension has a `type:` specified
- `sql:` references use `${TABLE}.column` or `${view.field}` syntax

```lkml
# ❌ WRONG: Missing type
dimension: name {
  sql: ${TABLE}.name ;;
}

# ✅ CORRECT
dimension: name {
  type: string
  sql: ${TABLE}.name ;;
}

# ❌ WRONG: Missing semicolons after SQL
dimension: name {
  type: string
  sql: ${TABLE}.name
}

# ✅ CORRECT
dimension: name {
  type: string
  sql: ${TABLE}.name ;;
}

# ❌ WRONG: Unbalanced braces
view: test {
  dimension: id {
    type: number
    sql: ${TABLE}.id ;;
  }

# ✅ CORRECT
view: test {
  dimension: id {
    type: number
    sql: ${TABLE}.id ;;
  }
}
```
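These structural checks are easy to automate before hand-off. A minimal sketch — regex-based, so a heuristic rather than a full LookML parser:

```python
import re

def validate_lookml(text: str) -> list[str]:
    """Cheap structural checks: balanced braces and terminated SQL blocks."""
    errors = []
    # Note: the braces inside ${TABLE} pair up, so a raw count still balances.
    if text.count("{") != text.count("}"):
        errors.append("unbalanced braces")
    # Every sql-style parameter must terminate with ';;' before the block ends.
    for m in re.finditer(r"\bsql(?:_on|_table_name)?:\s*(.*?)(;;|\n\s*\})",
                         text, re.S):
        if m.group(2) != ";;":
            errors.append(f"missing ';;' near {m.group(1).strip()[:40]!r}")
    return errors

bad  = "dimension: name {\n  type: string\n  sql: ${TABLE}.name\n}"
good = "dimension: name {\n  type: string\n  sql: ${TABLE}.name ;;\n}"
print(validate_lookml(bad))   # one "missing ';;'" error
print(validate_lookml(good))  # []
```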
After creating LookML files, provide a summary:
## LookML Changes Summary
### Files Created/Modified
1. **Created**: `/looker/views/hr/employee_pto.view.lkml`
- Source table: `ra-development.analytics_seed.employee_pto`
- Dimensions: 8 (including composite primary key)
- Measures: 4
- Drill set defined for detail exploration
2. **Modified**: `/looker/models/analytics.model.lkml`
- Added `employee_pto` explore
- Configured join to `employees_dim` view
### Next Steps for User
1. **Review the generated LookML** for accuracy against your schema
2. **Commit changes to git**:
```bash
git add looker/
git commit -m "feat: Add employee PTO view and explore"
git push
```

## Common Patterns
### Pattern 1: Dimension Table (Slowly Changing)
```lkml
view: dim_customer {
  sql_table_name: `project.dataset.dim_customer` ;;

  dimension: customer_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.customer_id ;;
    hidden: yes
  }

  dimension: customer_name {
    type: string
    sql: ${TABLE}.customer_name ;;
  }

  dimension: email {
    type: string
    sql: ${TABLE}.email ;;
  }

  dimension: customer_segment {
    type: string
    sql: ${TABLE}.segment ;;
  }

  dimension: is_active {
    type: yesno
    sql: ${TABLE}.is_active ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, month, year]
    datatype: date
    sql: ${TABLE}.created_date ;;
  }

  measure: count {
    type: count
  }

  measure: active_customer_count {
    type: count
    filters: [is_active: "yes"]
  }
}
```

### Pattern 2: Fact Table

```lkml
view: fct_orders {
  sql_table_name: `project.dataset.fct_orders` ;;

  dimension: order_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.order_id ;;
  }

  dimension: customer_id {
    type: number
    sql: ${TABLE}.customer_id ;;
    hidden: yes
  }

  dimension: product_id {
    type: number
    sql: ${TABLE}.product_id ;;
    hidden: yes
  }

  dimension_group: order {
    type: time
    timeframes: [raw, time, date, week, month, quarter, year]
    datatype: timestamp
    sql: ${TABLE}.order_timestamp ;;
  }

  dimension: order_amount {
    type: number
    sql: ${TABLE}.order_amount ;;
    value_format_name: usd
    hidden: yes
  }

  dimension: quantity {
    type: number
    sql: ${TABLE}.quantity ;;
    hidden: yes
  }

  # Measures
  measure: count {
    type: count
    drill_fields: [order_id, order_date, order_amount]
  }

  measure: total_revenue {
    type: sum
    sql: ${order_amount} ;;
    value_format_name: usd
  }

  measure: average_order_value {
    type: average
    sql: ${order_amount} ;;
    value_format_name: usd
  }

  measure: total_quantity {
    type: sum
    sql: ${quantity} ;;
  }

  measure: order_count {
    type: count_distinct
    sql: ${order_id} ;;
  }
}
```

### Pattern 3: Aggregate Summary (PDT)

```lkml
view: customer_order_summary {
  derived_table: {
    sql:
      SELECT
        customer_id,
        COUNT(DISTINCT order_id) AS lifetime_orders,
        SUM(order_amount) AS lifetime_value,
        MIN(order_timestamp) AS first_order_date,
        MAX(order_timestamp) AS last_order_date,
        DATE_DIFF(CURRENT_DATE(), DATE(MAX(order_timestamp)), DAY) AS days_since_last_order
      FROM `project.dataset.fct_orders`
      GROUP BY 1
    ;;

    # PDT configuration
    datagroup_trigger: daily_refresh
    indexes: ["customer_id"]
  }

  dimension: customer_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.customer_id ;;
    hidden: yes
  }

  dimension: lifetime_orders {
    type: number
    sql: ${TABLE}.lifetime_orders ;;
  }

  dimension: lifetime_value {
    type: number
    sql: ${TABLE}.lifetime_value ;;
    value_format_name: usd
  }

  dimension: lifetime_value_tier {
    type: tier
    tiers: [0, 100, 500, 1000, 5000]
    style: integer
    sql: ${lifetime_value} ;;
  }

  dimension_group: first_order {
    type: time
    timeframes: [date, month, year]
    datatype: timestamp
    sql: ${TABLE}.first_order_date ;;
  }

  dimension_group: last_order {
    type: time
    timeframes: [date, month, year]
    datatype: timestamp
    sql: ${TABLE}.last_order_date ;;
  }

  dimension: days_since_last_order {
    type: number
    sql: ${TABLE}.days_since_last_order ;;
  }

  dimension: is_repeat_customer {
    type: yesno
    sql: ${lifetime_orders} > 1 ;;
  }

  measure: average_lifetime_value {
    type: average
    sql: ${lifetime_value} ;;
    value_format_name: usd
  }

  measure: average_lifetime_orders {
    type: average
    sql: ${lifetime_orders} ;;
    value_format_name: decimal_1
  }
}
```

### Pattern 4: Multi-Join Explore

```lkml
explore: orders {
  label: "Orders Analysis"
  description: "Analyze orders with customer, product, and geographic context"

  # Base view
  from: fct_orders

  # Customer dimension
  join: dim_customer {
    type: left_outer
    relationship: many_to_one
    sql_on: ${fct_orders.customer_id} = ${dim_customer.customer_id} ;;
  }

  # Product dimension
  join: dim_product {
    type: left_outer
    relationship: many_to_one
    sql_on: ${fct_orders.product_id} = ${dim_product.product_id} ;;
  }

  # Customer lifetime metrics
  join: customer_order_summary {
    type: left_outer
    relationship: one_to_one
    sql_on: ${fct_orders.customer_id} = ${customer_order_summary.customer_id} ;;
  }

  # Always filter to completed orders (optional)
  always_filter: {
    filters: [fct_orders.order_status: "completed"]
  }
}
```

### Pattern 5: Native Derived Table with Bound Filters

```lkml
view: dynamic_date_comparison {
  derived_table: {
    explore_source: orders {
      column: order_date { field: fct_orders.order_date }
      column: total_revenue { field: fct_orders.total_revenue }
      column: order_count { field: fct_orders.order_count }
      bind_filters: {
        from_field: dynamic_date_comparison.date_filter
        to_field: fct_orders.order_date
      }
    }
  }

  filter: date_filter {
    type: date
  }

  dimension: order_date {
    type: date
    sql: ${TABLE}.order_date ;;
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.total_revenue ;;
    value_format_name: usd
  }

  measure: order_count {
    type: sum
    sql: ${TABLE}.order_count ;;
  }
}
```

### Pattern 6: BigQuery Nested Fields (UNNEST)

```lkml
view: events {
  sql_table_name: `project.dataset.events` ;;

  # Unnest repeated field
  dimension: event_param_key {
    type: string
    sql: ep.key ;;
  }

  dimension: event_param_value {
    type: string
    sql: ep.value.string_value ;;
  }
}

# In the explore, use UNNEST
explore: events {
  join: event_params {
    type: left_outer
    relationship: one_to_many
    sql: LEFT JOIN UNNEST(${events.event_params}) AS ep ;;
  }
}
```

### Pattern 7: Partitioned Tables

```lkml
view: partitioned_events {
  sql_table_name: `project.dataset.events` ;;

  # Always include partition filter for performance
  dimension_group: event {
    type: time
    timeframes: [raw, date, week, month]
    datatype: timestamp
    sql: ${TABLE}._PARTITIONTIME ;;
  }
}

explore: partitioned_events {
  # Require partition filter
  always_filter: {
    filters: [partitioned_events.event_date: "last 30 days"]
  }
}
```

### Pattern 8: JSON Extraction

```lkml
dimension: metadata_source {
  type: string
  sql: JSON_EXTRACT_SCALAR(${TABLE}.metadata, '$.source') ;;
}

dimension: metadata_version {
  type: number
  sql: CAST(JSON_EXTRACT_SCALAR(${TABLE}.metadata, '$.version') AS INT64) ;;
}
```

### Pattern 9: LookML Dashboard

```yaml
- dashboard: executive_summary
  title: "Executive Summary"
  layout: newspaper
  preferred_viewer: dashboards-next
  description: "Key business metrics overview"

  filters:
  - name: date_range
    title: "Date Range"
    type: date_filter
    default_value: "last 30 days"
    allow_multiple_values: false

  elements:
  - title: "Total Revenue"
    name: total_revenue_tile
    model: analytics
    explore: orders
    type: single_value
    fields: [fct_orders.total_revenue]
    listen:
      date_range: fct_orders.order_date
    row: 0
    col: 0
    width: 6
    height: 4

  - title: "Revenue Over Time"
    name: revenue_trend
    model: analytics
    explore: orders
    type: looker_line
    fields: [fct_orders.order_date, fct_orders.total_revenue]
    sorts: [fct_orders.order_date]
    listen:
      date_range: fct_orders.order_date
    row: 4
    col: 0
    width: 12
    height: 8

  - title: "Revenue by Segment"
    name: revenue_by_segment
    model: analytics
    explore: orders
    type: looker_pie
    fields: [dim_customer.customer_segment, fct_orders.total_revenue]
    sorts: [fct_orders.total_revenue desc]
    listen:
      date_range: fct_orders.order_date
    row: 4
    col: 12
    width: 12
    height: 8
```
Before finalizing any LookML work, verify:
- All braces `{}` are balanced
- Every SQL block ends with `;;`
- Every dimension has a `type:` specified
- Each view has a `primary_key: yes` dimension (usually `hidden: yes`)
- Date fields use `dimension_group` with appropriate timeframes
- `datatype:` is set correctly (date vs timestamp)
- `convert_tz: no` is set for date-only fields
- Every join has a `relationship:` defined

This phase is REQUIRED before marking any LookML work as complete.
After generating LookML, you MUST cross-reference all SQL table and column names against the source schema file (e.g., `target_warehouse_ddl.sql`, `schema.yml`, or other provided schema documentation).

```bash
# 1. Extract all sql_table_name references from generated LookML
grep -h "sql_table_name:" /looker/views/**/*.view.lkml

# 2. Extract all ${TABLE}.column references
grep -oE '\$\{TABLE\}\.[a-zA-Z_][a-zA-Z0-9_]*' /looker/views/new_view.view.lkml | sort -u

# 3. Compare against the DDL or schema file
cat target_warehouse_ddl.sql
```
For every view created or modified, verify:
- The `sql_table_name` or `FROM` clause references a table that exists in the DDL/schema
- The `project.dataset.table` path matches the actual structure
- Every `${TABLE}.column_name` reference matches an actual column in the source table

Given this DDL:
```sql
-- target_warehouse_ddl.sql
CREATE TABLE `ra-development.analytics_seed.employee_pto` (
  First_name STRING,
  Last_name STRING,
  email STRING,
  Start_date DATE,
  End_date DATE,
  Days FLOAT64,
  Type STRING
);
```
Validate the LookML references:
| LookML Reference | DDL Column | Status |
|---|---|---|
| `${TABLE}.First_name` | `First_name STRING` | ✅ Match |
| `${TABLE}.Last_name` | `Last_name STRING` | ✅ Match |
| `${TABLE}.email` | `email STRING` | ✅ Match |
| `${TABLE}.Start_date` | `Start_date DATE` | ✅ Match |
| `${TABLE}.first_name` | — | ❌ Case mismatch! Should be `First_name` |
| `${TABLE}.pto_type` | — | ❌ Column doesn't exist! Should be `Type` |
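The same cross-reference can be scripted. A sketch under simplifying assumptions (a single-statement, BigQuery-style `CREATE TABLE` with no nested types):

```python
import re

def ddl_columns(ddl: str) -> set[str]:
    """Parse column names from a simple CREATE TABLE statement."""
    body = ddl[ddl.index("(") + 1 : ddl.rindex(")")]
    return {part.strip().split()[0] for part in body.split(",") if part.strip()}

def table_refs(lookml: str) -> set[str]:
    """Collect every ${TABLE}.column reference in a LookML file."""
    return set(re.findall(r"\$\{TABLE\}\.([A-Za-z_][A-Za-z0-9_]*)", lookml))

def mismatches(lookml: str, ddl: str) -> set[str]:
    return table_refs(lookml) - ddl_columns(ddl)

ddl = "CREATE TABLE `ra-development.analytics_seed.employee_pto` (First_name STRING, Type STRING)"
lkml = "sql: ${TABLE}.First_name ;;\nsql: ${TABLE}.pto_type ;;"
print(mismatches(lkml, ddl))  # {'pto_type'}
```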
If validation reveals mismatches, correct the LookML to match the schema exactly (never assume the schema is wrong), then re-run the checks.
If any view, explore, or dashboard uses preferred_slug, it MUST comply with these rules:
| Rule | Requirement |
|---|---|
| Maximum length | 255 characters |
| Allowed characters | Letters (A-Z, a-z), numbers (0-9), dashes (-), underscores (_) |
| NOT allowed | Spaces, special characters, unicode, dots, slashes |
```lkml
# ✅ VALID preferred_slug values
explore: orders {
  preferred_slug: "orders-analysis"
}

view: customer_metrics {
  preferred_slug: "customer_metrics_v2"
}

dashboard: executive_summary {
  preferred_slug: "exec-summary-2024"
}

explore: revenue {
  preferred_slug: "revenue_by_region_q4_2024"
}

# ❌ INVALID - contains spaces
preferred_slug: "orders analysis"

# ❌ INVALID - contains dots
preferred_slug: "orders.analysis"

# ❌ INVALID - contains special characters
preferred_slug: "orders@analysis!"

# ❌ INVALID - exceeds 255 characters
preferred_slug: "this_is_a_very_long_slug_that_goes_on_and_on_and_exceeds_the_maximum_allowed_length_of_two_hundred_and_fifty_five_characters_which_is_the_limit_set_by_looker_for_preferred_slug_parameters_so_this_will_fail_validation_when_you_try_to_deploy_it"
```
Use this regex to validate `preferred_slug` values: `^[A-Za-z0-9_-]{1,255}$`
```bash
# Extract all preferred_slug values and validate
grep -h "preferred_slug:" /looker/**/*.lkml | while read line; do
  slug=$(echo "$line" | sed 's/.*preferred_slug: *"\([^"]*\)".*/\1/')

  # Check length
  if [ ${#slug} -gt 255 ]; then
    echo "ERROR: preferred_slug exceeds 255 chars: $slug"
  fi

  # Check characters
  if ! echo "$slug" | grep -qE '^[A-Za-z0-9_-]+$'; then
    echo "ERROR: preferred_slug contains invalid characters: $slug"
  fi
done
```
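An equivalent check in Python, applying the same rule (1-255 characters drawn from letters, digits, dashes, and underscores):

```python
import re

# Same rule as the shell check above.
SLUG_RE = re.compile(r"^[A-Za-z0-9_-]{1,255}$")

def valid_slug(slug: str) -> bool:
    return SLUG_RE.fullmatch(slug) is not None

print(valid_slug("orders-analysis"))   # True
print(valid_slug("orders analysis"))   # False (space)
print(valid_slug("a" * 256))           # False (too long)
```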
Before completing any LookML task, provide a validation summary:
## Validation Summary
### Table/Column Reference Check
- **Schema source**: `target_warehouse_ddl.sql`
- **Tables validated**: 3
- **Columns validated**: 24
- **Status**: ✅ All references valid
| View | Table | Columns Checked | Status |
|------|-------|-----------------|--------|
| employee_pto | `ra-development.analytics_seed.employee_pto` | 7 | ✅ Valid |
| employee_summary | derived_table | 4 | ✅ Valid |
### preferred_slug Validation
- **Slugs found**: 2
- **Status**: ✅ All valid
| Location | Slug | Length | Characters | Status |
|----------|------|--------|------------|--------|
| explore: employee_pto | `employee-pto-analysis` | 21 | ✅ | ✅ Valid |
| dashboard: hr_overview | `hr-overview-2024` | 16 | ✅ | ✅ Valid |
### Issues Found
- None
### Recommendations
- All LookML is ready for commit and Looker validation
| Error Pattern | Likely Cause | Solution |
|---|---|---|
| "Unknown field" | Column name mismatch | Verify exact column name from schema |
| "Circular reference" | Field references itself | Check dimension SQL references |
| "Missing }" | Unbalanced braces | Count and match all `{` and `}` |
| "Invalid SQL" | Missing `;;` | Add `;;` after SQL blocks |
| "Duplicate field" | Same name in view | Rename or remove duplicate |
After creating files, the user should run the LookML Validator in the Looker IDE (and the Content Validator if existing content is affected).
This skill enables Claude Code to generate, modify, and validate LookML views, explores, models, and dashboards directly in the local filesystem.
NEVER mark LookML work as complete without:
- Validating all `sql_table_name` references against the DDL/schema
- Confirming every `${TABLE}.column` reference exists with correct casing
- Checking that `preferred_slug` values meet syntax rules (alphanumeric, dashes, underscores only; max 255 chars)

Always examine existing project files first, follow established conventions, validate against source schema, and provide comprehensive summaries of changes made.