Profile and explore datasets to understand their shape, quality, and patterns before analysis. Use when encountering a new dataset, assessing data quality, discovering column distributions, identifying nulls and outliers, or deciding which dimensions to analyze.
```
npx claudepluginhub tmorrowdev/data-plugin --plugin data
```

This skill uses the workspace's default tool permissions.
Systematic methodology for profiling datasets, assessing data quality, discovering patterns, and understanding schemas.
Before analyzing any data, understand its structure.

**Table-level questions**: What does one row represent (the grain)? What is the primary key? Roughly how many rows are there, and how often is the table updated?

**Column classification**: Categorize each column as one of: identifier, dimension (low-cardinality category), metric (numeric measure), date/timestamp, boolean flag, or free text.
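This classification can be approximated mechanically. A rough sketch, assuming the data is loaded into a pandas DataFrame (the thresholds and category labels here are illustrative, not part of this skill):

```python
import pandas as pd

def classify_column(s: pd.Series, id_threshold: float = 0.95) -> str:
    """Rough heuristic for column classification; thresholds are illustrative."""
    non_null = s.dropna()
    if non_null.empty:
        return "empty"
    if pd.api.types.is_bool_dtype(s):
        return "boolean"
    if pd.api.types.is_datetime64_any_dtype(s):
        return "date/timestamp"
    if pd.api.types.is_float_dtype(s):
        return "metric"
    distinct_ratio = non_null.nunique() / len(non_null)
    if pd.api.types.is_numeric_dtype(s):
        # Near-unique integers are more likely identifiers than metrics.
        return "identifier" if distinct_ratio > id_threshold else "metric"
    # Strings: near-unique suggests identifier or free text; low cardinality, a dimension.
    return "identifier/free text" if distinct_ratio > id_threshold else "dimension"

df = pd.DataFrame({
    "user_id": ["usr_a", "usr_b", "usr_c", "usr_d"],
    "event_type": ["click", "view", "click", "purchase"],
    "revenue": [29.99, None, 149.00, 9.99],
})
print({c: classify_column(df[c]) for c in df.columns})
```

Heuristics like this only seed the classification; confirm against documentation or the table owner before relying on it.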
For each column, compute:
**All columns:**
- null count and null rate
- distinct count (cardinality)
- most frequent values

**Numeric columns (metrics):**
- min, max, mean, median (p50)
- standard deviation
- percentiles: p1, p5, p25, p75, p95, p99
- zero count
- negative count (if unexpected)

**String columns (dimensions, text):**
- min length, max length, avg length
- empty string count
- pattern analysis (do values follow a format?)
- case consistency (all upper, all lower, mixed?)
- leading/trailing whitespace count

**Date/timestamp columns:**
- min date, max date
- null dates
- future dates (if unexpected)
- distribution by month/week
- gaps in time series

**Boolean columns:**
- true count, false count, null count
- true rate
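The per-type checklists above map directly onto pandas operations. A sketch (function names and return shapes are my own, not part of this skill):

```python
import pandas as pd

def profile_numeric(s: pd.Series) -> dict:
    # min/max/mean/median, spread, tail percentiles, zero and negative counts
    return {
        "min": s.min(), "max": s.max(), "mean": s.mean(), "p50": s.median(),
        "std": s.std(),
        **{f"p{p}": s.quantile(p / 100) for p in (1, 5, 25, 75, 95, 99)},
        "zero_count": int((s == 0).sum()),
        "negative_count": int((s < 0).sum()),
    }

def profile_string(s: pd.Series) -> dict:
    non_null = s.dropna()
    lengths = non_null.str.len()
    return {
        "min_len": int(lengths.min()), "max_len": int(lengths.max()),
        "avg_len": float(lengths.mean()),
        "empty_count": int((non_null == "").sum()),
        "whitespace_count": int((non_null != non_null.str.strip()).sum()),
    }

def profile_datetime(s: pd.Series) -> dict:
    return {
        "min": s.min(), "max": s.max(),
        "null_count": int(s.isna().sum()),
        "future_count": int((s > pd.Timestamp.now()).sum()),
        "by_month": s.dt.to_period("M").value_counts().sort_index().to_dict(),
    }

def profile_boolean(s: pd.Series) -> dict:
    # true_rate is computed over all rows, including nulls
    return {
        "true_count": int((s == True).sum()),
        "false_count": int((s == False).sum()),
        "null_count": int(s.isna().sum()),
        "true_rate": float((s == True).mean()),
    }
```

Dispatching on dtype (as in the classification step) lets one loop profile every column of a table.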
After profiling individual columns, assess quality as a whole:
- Rate each column's completeness, consistency, and reliability.
- Look for cross-column contradictions (e.g. revenue populated on a non-purchase event).
- Watch for red flags that suggest accuracy issues, such as impossible values, constant columns, or fully duplicated rows.
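Some of these checks can be automated. A sketch of a few commonly used red-flag checks, assuming pandas (the specific checks and thresholds are illustrative, not an exhaustive list):

```python
import pandas as pd

def red_flags(df: pd.DataFrame, max_null_rate: float = 0.5) -> list[str]:
    """Surface columns whose stats hint at accuracy problems.
    The checks and thresholds here are illustrative, not exhaustive."""
    flags = []
    for col in df.columns:
        s = df[col]
        null_rate = s.isna().mean()
        if null_rate > max_null_rate:
            flags.append(f"{col}: {null_rate:.0%} null")
        if s.nunique(dropna=True) <= 1:
            flags.append(f"{col}: constant or empty")
        if pd.api.types.is_numeric_dtype(s) and (s.dropna() < 0).any():
            flags.append(f"{col}: contains negative values")
    dup = int(df.duplicated().sum())
    if dup:
        flags.append(f"{dup} fully duplicated rows")
    return flags

df = pd.DataFrame({"amount": [10.0, -5.0, 10.0], "status": ["ok", "ok", "ok"]})
print(red_flags(df))
```

Whether a flag is a real problem is context-dependent (negative amounts are valid for refunds, for instance), so treat the output as a review queue, not a verdict.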
For numeric columns, characterize the distribution: is it roughly normal, skewed, multimodal, or dominated by outliers?
For time series data, look for trends, seasonality, and gaps.
Identify natural segments by grouping on low-cardinality dimensions and comparing metric distributions across groups.
Between numeric columns, check pairwise correlations to find related metrics.
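The segmentation and correlation steps can be sketched with pandas `groupby` and `corr` (column names and the 0.8 threshold below are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "plan": ["free", "free", "pro", "pro", "pro", "free"],
    "sessions": [2, 3, 10, 12, 11, 1],
    "revenue": [0.0, 0.0, 49.0, 59.0, 52.0, 0.0],
})

# Segment: compare a metric's distribution across a low-cardinality dimension.
print(df.groupby("plan")["revenue"].agg(["count", "mean", "median"]))

# Correlate: pairwise Pearson correlation between numeric columns.
corr = df.corr(numeric_only=True)
strong = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > 0.8
]
print(strong)
```

Strongly correlated metric pairs often mean one is derived from the other, which is worth confirming before treating them as independent signals.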
When documenting a dataset for team use, follow this template:
```markdown
## Table: [schema.table_name]

**Description**: [What this table represents]
**Grain**: [One row per...]
**Primary Key**: [column(s)]
**Row Count**: [approximate, with date]
**Update Frequency**: [real-time / hourly / daily / weekly]
**Owner**: [team or person responsible]

### Key Columns

| Column | Type | Description | Example Values | Notes |
|--------|------|-------------|----------------|-------|
| user_id | STRING | Unique user identifier | "usr_abc123" | FK to users.id |
| event_type | STRING | Type of event | "click", "view", "purchase" | 15 distinct values |
| revenue | DECIMAL | Transaction revenue in USD | 29.99, 149.00 | Null for non-purchase events |
| created_at | TIMESTAMP | When the event occurred | 2024-01-15 14:23:01 | Partitioned on this column |

### Relationships

- Joins to `users` on `user_id`
- Joins to `products` on `product_id`
- Parent of `event_details` (1:many on event_id)

### Known Issues

- [List any known data quality issues]
- [Note any gotchas for analysts]

### Common Query Patterns

- [Typical use cases for this table]
```
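The Key Columns table can be partially generated from a profiled DataFrame. A minimal sketch with pandas (the sample data is made up; descriptions and notes still need a human):

```python
import pandas as pd

def key_columns_table(df: pd.DataFrame, n_examples: int = 2) -> str:
    """Render a Key Columns markdown table skeleton from a DataFrame.
    Fills only the mechanical parts (type, examples, cardinality)."""
    lines = [
        "| Column | Type | Description | Example Values | Notes |",
        "|--------|------|-------------|----------------|-------|",
    ]
    for col in df.columns:
        examples = ", ".join(repr(v) for v in df[col].dropna().unique()[:n_examples])
        notes = f"{df[col].nunique()} distinct values"
        lines.append(f"| {col} | {df[col].dtype} | TODO | {examples} | {notes} |")
    return "\n".join(lines)

df = pd.DataFrame({
    "event_type": ["click", "view", "click"],
    "revenue": [29.99, None, 9.99],
})
print(key_columns_table(df))
```

Generating the skeleton this way keeps the doc in sync with the actual schema; the TODO descriptions are the part worth a reviewer's time.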
Use these Snowflake-specific patterns to discover schema:
```sql
-- List all databases
SHOW DATABASES;

-- List schemas in a database
SHOW SCHEMAS IN DATABASE my_database;

-- List all tables in a schema
SHOW TABLES IN SCHEMA my_database.my_schema;

-- Column details for a table
DESCRIBE TABLE my_database.my_schema.my_table;

-- Column metadata via INFORMATION_SCHEMA
SELECT column_name, data_type, is_nullable, column_default, comment
FROM my_database.INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = 'MY_SCHEMA'
  AND table_name = 'MY_TABLE'
ORDER BY ordinal_position;

-- Table sizes and row counts
SELECT table_schema, table_name, row_count, bytes,
       ROUND(bytes / 1024 / 1024, 2) AS size_mb
FROM my_database.INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'MY_SCHEMA'
ORDER BY bytes DESC;

-- Table freshness (check when data was last loaded)
SELECT MAX(updated_at) AS last_updated
FROM my_database.my_schema.my_table;

-- Clustering info (important for query performance)
SHOW TABLES LIKE 'MY_TABLE' IN SCHEMA my_database.my_schema;
-- Check the cluster_by column in the output
```
SageMaker profiling: For deeper exploration in SageMaker Studio, generate notebook cells using ydata-profiling (the successor to pandas-profiling):

```python
import pandas as pd
from ydata_profiling import ProfileReport

# Load a sample from Snowflake (`conn` is an existing connection,
# e.g. from snowflake-connector-python or an SQLAlchemy engine)
df = pd.read_sql("SELECT * FROM my_table SAMPLE (10000 ROWS)", conn)

# Generate the profile report inline in the notebook
profile = ProfileReport(df, title="Data Profile: my_table", minimal=True)
profile.to_notebook_iframe()
```
When exploring an unfamiliar data environment, work top-down: list the databases, schemas, and tables; use size, row count, and freshness to find the tables that matter; profile the columns of those tables; then record what you learn in the dataset documentation template so the next analyst starts ahead of where you did.