Generate or improve a company-specific data analysis skill by extracting tribal knowledge from analysts.

BOOTSTRAP MODE - Triggers: "Create a data context skill", "Set up data analysis for our warehouse", "Help me create a skill for our database", "Generate a data skill for [company]" → Discovers schemas, asks key questions, generates an initial skill with reference files.

ITERATION MODE - Triggers: "Add context about [domain]", "The skill needs more info about [topic]", "Update the data skill with [metrics/tables/terminology]", "Improve the [domain] reference" → Loads the existing skill, asks targeted questions, appends to or updates reference files.

Use when data analysts want Claude to understand their company's specific data warehouse, terminology, metric definitions, and common query patterns.
npx claudepluginhub tmorrowdev/tmorrow_ai --plugin data

This skill uses the workspace's default tool permissions.
A meta-skill that extracts company-specific data knowledge from analysts and generates tailored data analysis skills.
This skill has two modes:
Use when: User wants to create a new data context skill for their warehouse.
Step 1: Connect to Snowflake
The team uses Snowflake as the primary data warehouse. Use the Snowflake MCP tools (query and schema) to connect. If unclear, check available MCP tools in the current session.
Ask the user which Snowflake database, warehouse, and role to use if not obvious from context.
Step 2: Explore the schema
Use Snowflake schema tools to:
Snowflake exploration queries:
-- List all databases
SHOW DATABASES;
-- List schemas in a database
SHOW SCHEMAS IN DATABASE my_database;
-- List tables in a schema
SHOW TABLES IN SCHEMA my_database.my_schema;
-- Get column details
DESCRIBE TABLE my_database.my_schema.my_table;
-- Get row counts and freshness
SELECT COUNT(*) AS row_count,
       MAX(updated_at) AS last_updated
FROM my_database.my_schema.my_table;
-- Preview sample data
SELECT * FROM my_database.my_schema.my_table LIMIT 100;
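Once the freshness query has been run per table, the results can be rolled up into a summary for the generated skill's reference files. A minimal sketch, assuming rows have already been fetched (the `(table, row_count, last_updated)` tuple shape is an illustrative assumption, not a fixed MCP result format):

```python
from datetime import datetime

def freshness_summary(rows):
    """Render (table, row_count, last_updated) tuples as a markdown table.

    `rows` is assumed to come from running the freshness query above
    once per table of interest.
    """
    lines = ["| Table | Rows | Last updated |", "|---|---|---|"]
    for table, count, updated in rows:
        updated_str = updated.strftime("%Y-%m-%d") if updated else "unknown"
        lines.append(f"| {table} | {count:,} | {updated_str} |")
    return "\n".join(lines)

rows = [
    ("my_schema.orders", 1200345, datetime(2024, 5, 1)),
    ("my_schema.users", 88210, None),  # no updated_at column
]
print(freshness_summary(rows))
```

A table like this, pasted into the generated skill, lets analysts see at a glance which tables are stale before querying them.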
SageMaker notebook output: When generating skills, include SageMaker Studio-ready Python cells that analysts can paste directly into notebooks. Use snowflake-connector-python for Snowflake access from SageMaker:
import snowflake.connector
import pandas as pd

# Connect to Snowflake from SageMaker
conn = snowflake.connector.connect(
    account='<account>',
    user='<user>',  # required even with browser-based SSO
    authenticator='externalbrowser',  # or use AWS Secrets Manager
    warehouse='<warehouse>',
    database='<database>',
    schema='<schema>',
)

df = pd.read_sql("SELECT * FROM my_table LIMIT 1000", conn)
df.head()
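Where browser-based SSO is not available in SageMaker, credentials can instead be pulled from AWS Secrets Manager, as the comment above suggests. A sketch of the kwargs-building step only, assuming a JSON secret with `account`/`user`/`password` fields (the secret name and field names are assumed conventions, not a fixed format):

```python
import json

def connect_kwargs_from_secret(secret_string, warehouse, database, schema):
    """Build snowflake.connector.connect kwargs from a Secrets Manager
    secret payload. The JSON field names are an assumed convention."""
    secret = json.loads(secret_string)
    return {
        "account": secret["account"],
        "user": secret["user"],
        "password": secret["password"],
        "warehouse": warehouse,
        "database": database,
        "schema": schema,
    }

# In SageMaker the payload would come from boto3, e.g.:
#   boto3.client("secretsmanager").get_secret_value(SecretId=...)["SecretString"]
raw = '{"account": "abc-xy123", "user": "ANALYST", "password": "***"}'
kwargs = connect_kwargs_from_secret(raw, "ANALYTICS_WH", "PROD", "PUBLIC")
```

The resulting dict can be splatted into `snowflake.connector.connect(**kwargs)`, keeping secrets out of notebook cells.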
After schema discovery, ask these questions conversationally (not all at once):
Entity Disambiguation (Critical)
"When people here say 'user' or 'customer', what exactly do they mean? Are there different types?"
Listen for:
Primary Identifiers
"What's the main identifier for a [customer/user/account]? Are there multiple IDs for the same entity?"
Listen for:
Key Metrics
"What are the 2-3 metrics people ask about most? How is each one calculated?"
Listen for:
Data Hygiene
"What should ALWAYS be filtered out of queries? (test data, fraud, internal users, etc.)"
Listen for:
Common Gotchas
"What mistakes do new analysts typically make with this data?"
Listen for:
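Answers from these interview questions can be written straight into the entities reference. One possible way to serialize a single answer into a markdown section (the section layout is illustrative, not the required template):

```python
def render_entity(name, definition, identifiers, gotchas):
    """Render one entity interview into a markdown section for entities.md."""
    lines = [f"## {name}", "", definition, "", "**Identifiers:**"]
    lines += [f"- `{i}`" for i in identifiers]
    lines += ["", "**Gotchas:**"]
    lines += [f"- {g}" for g in gotchas]
    return "\n".join(lines)

# Hypothetical interview answers for illustration only
section = render_entity(
    "Customer",
    "A billing account, not an end user; one customer can have many users.",
    ["customer_id", "salesforce_account_id"],
    ["Test accounts share a 'QA-' prefix and must be filtered out."],
)
print(section)
```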
Create a skill with this structure:
[company]-data-analyst/
├── SKILL.md
└── references/
├── entities.md # Entity definitions and relationships
├── metrics.md # KPI calculations
├── tables/ # One file per domain
│ ├── [domain1].md
│ └── [domain2].md
└── dashboards.json # Optional: existing dashboards catalog
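The tree above can be scaffolded programmatically before the content is filled in. A minimal sketch (the stub text is placeholder; `dashboards.json` is omitted since it is optional):

```python
from pathlib import Path

def scaffold_skill(root, company, domains):
    """Create the [company]-data-analyst skeleton described above."""
    base = Path(root) / f"{company}-data-analyst"
    tables = base / "references" / "tables"
    tables.mkdir(parents=True, exist_ok=True)
    (base / "SKILL.md").write_text(f"# {company} data analyst skill\n")
    (base / "references" / "entities.md").write_text("# Entity definitions\n")
    (base / "references" / "metrics.md").write_text("# KPI calculations\n")
    for domain in domains:  # one file per domain
        (tables / f"{domain}.md").write_text(f"# {domain} tables\n")
    return base

import tempfile
base = scaffold_skill(tempfile.mkdtemp(), "acme", ["billing", "product"])
```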
SKILL.md Template: See references/skill-template.md
SQL Dialect Section: See references/sql-dialects.md and include the appropriate dialect notes.
Reference File Template: See references/domain-template.md
Use when: User has an existing skill but needs to add more context.
Ask the user to upload their existing skill (zip or folder), or locate it if it is already in the session.
Read the current SKILL.md and reference files to understand what's already documented.
Ask: "What domain or topic needs more context? What queries are failing or producing wrong results?"
Common gaps:
For the identified domain:
Explore relevant tables: Use Snowflake schema tools to find tables in that domain
Ask domain-specific questions:
Generate new reference file: Create references/[domain].md using the domain template
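In iteration mode, new context is appended to existing reference files rather than regenerated from scratch. A sketch of an idempotent append that avoids duplicating a section already present (the heading-based duplicate check is an assumed convention):

```python
from pathlib import Path

def append_section(ref_path, heading, body):
    """Append a markdown section to a reference file unless a section
    with this heading already exists; return True if the file changed."""
    path = Path(ref_path)
    existing = path.read_text() if path.exists() else ""
    if heading in existing:
        return False  # already documented; skip
    path.write_text(existing.rstrip() + f"\n\n## {heading}\n\n{body}\n")
    return True

import tempfile, os
ref = os.path.join(tempfile.mkdtemp(), "billing.md")
append_section(ref, "Refund logic", "Refunds appear as negative rows.")
append_section(ref, "Refund logic", "Refunds appear as negative rows.")  # no-op
```

Running the append twice leaves a single section, so repeated iteration sessions cannot bloat the reference file.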
Each reference file should include:
Before delivering a generated skill, verify:
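Part of that verification can be made mechanical with a small structural check before delivery. A sketch, assuming the directory layout described in bootstrap mode:

```python
from pathlib import Path

def verify_skill(skill_dir):
    """Return a list of structural problems with a generated skill."""
    base = Path(skill_dir)
    problems = []
    if not (base / "SKILL.md").is_file():
        problems.append("missing SKILL.md")
    refs = base / "references"
    if not refs.is_dir():
        problems.append("missing references/")
    elif not any(refs.rglob("*.md")):
        problems.append("references/ has no markdown files")
    return problems

import tempfile
print(verify_skill(tempfile.mkdtemp()))  # an empty dir fails both checks
```

Content checks (entity definitions present, metrics documented, hygiene filters listed) still need a human or model review; this only catches missing files.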