This skill should be used when the user asks to 'set up data analysis for our database', 'extract tribal knowledge about dataset', 'generate data skill', 'document this dataset', 'what does this column mean', 'create data dictionary', 'help me understand this data schema', 'capture domain knowledge about our data', or needs to create a reusable data context skill from dataset expertise.
Install: `npx claudepluginhub edwinhu/workflows --plugin workflows`. This skill uses the workspace's default tool permissions.
Extract tribal knowledge about a dataset or database and generate a reusable data context skill.
## The Iron Law of Data Context

YOU MUST interview the user before generating ANY skill content. This is not negotiable.
You MUST NOT generate skill content from schema inspection alone, infer metric definitions from column names, or skip the read-back verification. If you're about to write a skill based on your assumptions, STOP. Interview first.
**Workflow 1: Create from scratch.** Trigger: no existing data context skill for this project/dataset. Create a new data context skill from scratch by interviewing the user about their data.

**Workflow 2: Extend an existing skill.** Trigger: a data context skill already exists and the user wants to add a new domain or update it. Read the existing skill, identify gaps, interview for the new domain, and merge into the existing skill.
Before interviewing, check for existing data access skills that already encode tribal knowledge:
1. READ existing skills: `/wrds`, `/lseg-data`, or any project-local data skills
2. IDENTIFY what they already cover (table names, filters, field mappings, gotchas)
3. DO NOT re-document what existing skills handle
4. FOCUS the interview on the project-specific layer:
- Which specific tables/fields from WRDS or LSEG does THIS study use?
- How are identifiers linked across sources? (permno ↔ gvkey, RIC ↔ ISIN ↔ cusip; a sketch follows this list)
- What sample filters define the study universe? (date range, exchange, firm type)
- What derived variables or transformations are project-specific?
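The identifier links above are exactly the project-specific glue worth writing down. As a minimal sketch, a permno ↔ gvkey merge assuming the usual WRDS CCM table and column names (verify against the `/wrds` skill before relying on them):

```sql
-- Sketch: linking Compustat fundamentals to CRSP permnos via the CCM link
-- table. Table/column names follow common WRDS conventions; verify them.
SELECT f.gvkey, l.lpermno AS permno, f.datadate, f.at, f.lt
FROM comp.funda AS f
JOIN crsp.ccmxpf_linktable AS l
  ON  f.gvkey = l.gvkey
  AND l.linktype IN ('LU', 'LC')     -- research-quality links only
  AND l.linkprim IN ('P', 'C')       -- primary security per gvkey
  AND f.datadate >= l.linkdt
  AND (f.datadate <= l.linkenddt OR l.linkenddt IS NULL)
WHERE f.indfmt = 'INDL' AND f.datafmt = 'STD'
  AND f.popsrc = 'D' AND f.consol = 'C';   -- standard funda filters
```

The generated skill should record only that this project uses these links and filters; the filters themselves belong to `/wrds`.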
The generated data context skill should reference existing skills rather than duplicate them:
```markdown
## Data Sources

| Source | Skill | Tables/Fields Used |
|--------|-------|--------------------|
| WRDS | `/wrds` | comp.funda (at, lt, ceq), crsp.msf (ret, prc) |
| LSEG | `/lseg-data` | TR.F.TotRevenue, TR.GICSSector |
| Local | DuckDB | data/processed/merged_panel.parquet |

For connection details and critical filters, see the referenced skills.
```
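For the local layer, the access pattern is often a one-liner worth pinning down anyway. A DuckDB sketch against the example path above, with hypothetical column names:

```sql
-- Sketch: querying the project's processed panel directly from parquet.
SELECT permno, date, ret
FROM read_parquet('data/processed/merged_panel.parquet')
WHERE date BETWEEN DATE '2010-01-01' AND DATE '2023-12-31';
```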
1. DISCOVER data sources
→ What databases/files/APIs? Connection details?
→ For each: what dialect? (PostgreSQL, DuckDB, SQLite, Snowflake, etc.)
→ IMPORTANT: If user mentions WRDS or LSEG, read the corresponding skill first
and ask only about project-specific usage, not general access patterns
2. MAP entities
→ What are the core entities? (users, transactions, products, etc.)
→ How do they relate? (foreign keys, join paths)
→ CRITICAL: Disambiguate entity names
- "user" vs "account" vs "customer" — are these the same?
- "order" vs "transaction" vs "purchase" — clarify overlaps
- Document the canonical name and any aliases
3. DEFINE key metrics
→ What are the business-critical metrics?
→ For each metric:
- Exact definition (SQL or formula; a worked sketch follows step 6)
- Known edge cases
- Common misinterpretations
- Time grain (daily, monthly, etc.)
4. DOCUMENT data hygiene
→ Known data quality issues
→ Fields that lie (e.g., "created_at" that's actually "imported_at")
→ Nulls that mean something specific
→ Enums/codes that need translation
→ Date ranges with reliable data vs backfill periods
5. CAPTURE common gotchas
→ Joins that explode (many-to-many lurking as one-to-many)
→ Filters that are always needed (e.g., "WHERE is_deleted = false")
→ Time zones and their traps
→ Slowly changing dimensions
→ Tables that look useful but aren't (deprecated, partial, test data)
6. COLLECT common query patterns
→ Frequently needed aggregations
→ Standard date filters or cohort definitions
→ Boilerplate CTEs that everyone copies
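To make steps 3 through 6 concrete, here is the shape of answer the interview should produce, folded into one annotated query. Every name in it is hypothetical (the tables, the status codes, the backfill cutoff), and it is written in PostgreSQL/DuckDB flavor; the real definitions must come from the user:

```sql
-- Sketch of what steps 3-6 capture, as one annotated query.
WITH clean_orders AS (                        -- step 6: the boilerplate CTE everyone copies
  SELECT
    o.order_id,
    o.customer_id,
    o.placed_at::date AS order_date,          -- step 4: placed_at is the business event;
                                              -- created_at here is really imported_at
    CASE o.status                             -- step 4: enum codes need translation
      WHEN 1 THEN 'pending'
      WHEN 2 THEN 'fulfilled'
      WHEN 3 THEN 'refunded'
    END AS status
  FROM orders AS o
  WHERE o.is_deleted = FALSE                  -- step 5: the filter that is always needed
    AND o.placed_at >= DATE '2019-01-01'      -- step 4: data reliable only after backfill
)
SELECT
  date_trunc('month', c.order_date) AS month,          -- step 3: time grain is monthly
  SUM(i.quantity * i.unit_price)    AS gross_revenue   -- step 3: exact definition; NOT net of refunds
FROM clean_orders AS c
JOIN order_items AS i USING (order_id)                 -- step 5: 1:N join, aggregated at item grain
WHERE c.status <> 'refunded'
GROUP BY 1
ORDER BY 1;
```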
Ask questions in batches of 3-5. Don't overwhelm with everything at once.
Round 1: Data Sources
Round 2: Core Entities (after Round 1 answers)
Round 3: Metrics & Definitions (after Round 2 answers)
Round 4: Data Quality & Gotchas (after Round 3 answers)
Round 5: Common Patterns (after Round 4 answers)
After the interview, generate a skill with this structure:
```
project-name/
├── .claude/
│   └── skills/
│       └── data-context/
│           ├── SKILL.md              # Main skill file
│           └── references/
│               ├── entities.md       # Entity definitions and relationships
│               ├── metrics.md        # Metric definitions with SQL/formulas
│               └── gotchas.md        # Data quality issues and common pitfalls
```
```markdown
---
name: [project]-data-context
description: "Data context for [project]. Entity definitions, metric calculations, data quality notes, and common patterns for [data domain]."
---

# [Project] Data Context

## Data Sources

| Source | Skill/Dialect | Tables/Fields Used |
|--------|---------------|--------------------|
| [WRDS] | `/wrds` | [specific tables and fields for this project] |
| [LSEG] | `/lseg-data` | [specific fields for this project] |
| [Local] | [DuckDB/CSV/Parquet] | [file paths or database] |

For connection details and critical filters, see the referenced skills. This context covers only project-specific usage.

## Entity Map

[Entity relationship summary: which entities exist, how they connect]

See `references/entities.md` for full definitions.

## Key Metrics

[Top 3-5 metrics with brief definitions]

See `references/metrics.md` for exact calculations and edge cases.

## Critical Gotchas

[Top 3-5 gotchas that catch analysts]

See `references/gotchas.md` for full list.

## Common Patterns

[Frequently used query snippets or data access patterns]
```
For each reference file, include the full detail behind the SKILL.md summaries: complete entity definitions and relationships in `entities.md`, exact SQL or formulas with edge cases in `metrics.md`, and the full list of quality issues and pitfalls in `gotchas.md`.
Before writing skill files, execute this gate: confirm the interview actually happened, and read your understanding of entities, metrics, and gotchas back to the user for verification.
Skipping this gate produces a skill based on your assumptions, not the user's knowledge. That skill will mislead every future analysis.
Before finalizing the skill, read the drafted content back to the user and confirm it matches their mental model; a definition isn't done until they say so.
When adding to an existing data context skill, read what is already documented first, interview only for the new domain, and merge without duplicating existing coverage.
The generated skill lives at `.claude/skills/data-context/` in the project root; `ds-delegate` will have access to it automatically.

| Excuse | Reality | Do Instead |
|---|---|---|
| "I can infer the metrics from column names" | Column names are labels, not business logic. revenue could be gross, net, or recognized. | Ask the user for the exact definition and edge cases |
| "The schema tells me everything I need" | Schema captures structure, not semantics. It can't tell you which fields lie or which joins explode. | Interview the user — schema is the starting point, not the answer |
| "I'll fill in the gotchas section later" | Gotchas are the most valuable part. Later means never — you'll ship a skill that misleads every analysis. | Capture gotchas during the interview while context is fresh |
| "The entity relationships are obvious from foreign keys" | Foreign keys show connections, not business rules. A user_id FK doesn't tell you users can have multiple active accounts. | Verify every relationship with the user, especially cardinality |
| "I already know this domain well enough" | Your training data is not this user's data. Their dataset has specific quirks you cannot guess. | Interview anyway — it takes 10 minutes and prevents weeks of wrong analysis |
Skipping the domain expert interview is NOT HELPFUL — every downstream analysis inherits your wrong assumptions about the data. Pattern-matching from column names is not domain understanding.
| Shortcut | Consequence |
|---|---|
| Skipping the interview | You skip the interview to save time. The generated skill has wrong assumptions — every analysis using it produces wrong results. Your shortcut corrupted the entire pipeline. |
| Generating from schema alone | You inferred metric definitions from column names. Three teams used your skill. All three published wrong numbers. Your inference was misinformation. |
| Thought | Reality |
|---|---|
| "I can infer this metric from the schema" | Schema doesn't capture business logic. Ask the user. |
| "This entity relationship is obvious" | Obvious relationships hide gotchas. Verify with user. |
| "I'll fill in the gotchas later" | Gotchas are the most valuable part. Capture them now. |
| "The user said 'standard SQL'" | There's no such thing as standard SQL in practice. Get the dialect. |
| "I'll skip the read-back, it looks right" | Your version ≠ their mental model. Always read back. |
| "I should document how to connect to WRDS" | /wrds already covers that. Document project-specific tables and filters only. |
| "Let me add the LSEG field prefixes" | /lseg-data already covers that. Document which fields THIS project uses. |