`npx claudepluginhub big-time-data/homebrew-dts --plugin dts`

This skill uses the workspace's default tool permissions.
This skill provides a structured workflow for executing and interpreting database queries with full LLM observability. All reasoning, queries, and interpretations are tracked via the DTS observer system for later review and human annotation.
When exploring data or answering questions about dbt models:
1. Record your reasoning with `observer_context`
2. Execute queries with `db_exec`, including why you're running them
3. Record your analysis with `db_interpret` (REQUIRED)
4. Check for human feedback with `observer_get_annotations`

The `dts-prime` command (auto-runs at session start) establishes schema context from dbt configuration. This context is critical for writing correct queries.
After dts-prime runs, you have access to:
| Variable | Description | Example |
|---|---|---|
| `dev_schema` | Your development database.schema | `ANALYTICS_DEV.DEV_ADOVEN` |
| `prod_schemas` | Production database.schema(s) | `ANALYTICS.MARTS`, `ANALYTICS.FINANCE` |
**Dev Only** (testing your changes):

```sql
SELECT * FROM ANALYTICS_DEV.DEV_ADOVEN.orders LIMIT 100
```

**Prod Only** (production analysis):

```sql
SELECT * FROM ANALYTICS.MARTS.orders LIMIT 100
```
**Cross-Environment** (comparing dev to prod):

```sql
SELECT 'dev' as env, COUNT(*) as row_count
FROM ANALYTICS_DEV.DEV_ADOVEN.orders
UNION ALL
SELECT 'prod' as env, COUNT(*) as row_count
FROM ANALYTICS.MARTS.orders
```
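With the `dev_schema` and `prod_schemas` values established by `dts-prime`, comparison SQL like the above can be templated. A minimal Python sketch, assuming a single prod schema and a hypothetical helper name (not part of the DTS toolkit):

```python
def row_count_comparison_sql(table: str, dev_schema: str, prod_schema: str) -> str:
    """Build a dev-vs-prod row-count comparison for one table.

    Assumes dev_schema and prod_schema are fully qualified
    database.schema strings, as provided by dts-prime.
    """
    return (
        f"SELECT 'dev' as env, COUNT(*) as row_count FROM {dev_schema}.{table}\n"
        "UNION ALL\n"
        f"SELECT 'prod' as env, COUNT(*) as row_count FROM {prod_schema}.{table}"
    )

sql = row_count_comparison_sql("orders", "ANALYTICS_DEV.DEV_ADOVEN", "ANALYTICS.MARTS")
```

The generated string can then be passed to `db_exec` with an appropriate `intent`.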
**Do this:**

```sql
SELECT * FROM ANALYTICS.MARTS.orders
```

**Not this:**

```sql
SELECT * FROM orders -- Relies on session context, will fail
```
See references/ENVIRONMENTS.md for detailed patterns.
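The fully-qualified rule can also be sanity-checked before calling `db_exec`. A rough Python sketch using a heuristic regex (illustrative only, not the DTS server's validation; it ignores CTEs, quoting, and subqueries):

```python
import re

# Matches database.schema.table immediately after FROM/JOIN.
_QUALIFIED = re.compile(r"\b(?:FROM|JOIN)\s+(\w+\.\w+\.\w+)", re.IGNORECASE)
# Matches any FROM/JOIN target at all.
_ANY_TABLE = re.compile(r"\b(?:FROM|JOIN)\s+(\S+)", re.IGNORECASE)

def uses_fully_qualified_names(sql: str) -> bool:
    """True if every FROM/JOIN target looks like database.schema.table."""
    targets = _ANY_TABLE.findall(sql)
    qualified = _QUALIFIED.findall(sql)
    return bool(targets) and len(targets) == len(qualified)
```

A query like `SELECT * FROM orders` fails this check, while the fully-qualified form passes.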
When working on a tracked issue (GitHub, Linear, Jira, etc.), include the issue reference for traceability.
At workflow start, check for issue references in the conversation:
- Patterns like `data-tools-xxx`, `PROJ-123`, `#456`, or explicit issue mentions
- If no issue is referenced, omit `issue_ref` - it's optional

When an issue is active, include `issue_ref` in observer calls:
```
db_exec(
  sql: "SELECT ...",
  intent: "...",
  issue_ref: "data-tools-yf9"  # Link query to issue
)

observer_context(
  context_type: "reasoning",
  content: "...",
  issue_ref: "data-tools-yf9"  # Link reasoning to issue
)
```
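The reference patterns above can be picked out of conversation text with a simple scan. A Python sketch with illustrative patterns (adjust them to match your trackers):

```python
import re

# Heuristic patterns for the trackers mentioned above:
# Linear-style (data-tools-yf9), Jira-style (PROJ-123), GitHub-style (#456).
ISSUE_PATTERNS = [
    re.compile(r"\b[a-z][a-z0-9]*(?:-[a-z0-9]+)*-[a-z0-9]{2,}\b"),  # data-tools-yf9
    re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b"),                          # PROJ-123
    re.compile(r"(?<!\w)#\d+\b"),                                   # #456
]

def find_issue_refs(text: str) -> list[str]:
    """Return issue references found in conversation text (may be empty)."""
    refs: list[str] = []
    for pattern in ISSUE_PATTERNS:
        refs.extend(pattern.findall(text))
    return refs

refs = find_issue_refs("Working on data-tools-yf9 and PROJ-123, see #456")
```

If the list is empty, simply omit `issue_ref` from the observer calls.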
When returning results, include the issue reference:
```markdown
## Query Workflow Summary

**Issue**: `data-tools-yf9` - Update SKILL.md with env, issue, activation triggers
**Session ID**: abc-123
...
```
This workflow should be used proactively when data exploration is needed. Don't wait for explicit `/query` commands.
When you recognize these patterns, proactively engage:
"I'll investigate this using the query workflow. Let me first understand the data model..."
Then proceed with the phases: Context Loading → Reasoning → Query → Interpret → Iterate → Return.
Don't wait for the user to run the `/query` command. If the user's question involves data, start the workflow.
The DTS MCP server provides these tools for the workflow:
| Tool | Purpose |
|---|---|
| (auto-init) | Database connection auto-initializes on first query |
| `dbt_list` | List dbt nodes with selection criteria |
| `dbt_lineage` | Get upstream/downstream lineage for a model |
| `db_get_schemata` | Get schema, table, and column information |
| Tool | Purpose |
|---|---|
| `db_exec` | Execute SQL with intent tracking |
| `db_interpret` | Record interpretation of query results |
| `observer_context` | Record reasoning, plans, or observations |
| `observer_get_annotations` | Retrieve human feedback |
Before querying, gather context about the data environment:
```
# Database connection auto-initializes on first query - no explicit load needed!

# Get lineage for relevant models
dbt_lineage(node: "model_name", direction: "both")

# Get schema details
db_get_schemata(level: "columns", schema: "schema_name", table: "table_name")

# Check for pending annotations from previous sessions
observer_get_annotations()
```
Before generating SQL, document your reasoning:
```
observer_context(
  context_type: "reasoning",
  content: "To understand why orders decreased, I need to:
    1. Compare this week to last week
    2. Check by product category
    3. Look for any data quality issues",
  tags: ["domain:orders", "analysis:trend"]
)
```
Context Types:
- `reasoning` - Your thought process and decision logic
- `plan` - Intended sequence of actions
- `observation` - Insights or findings without immediate action

Execute queries with full context tracking:
```
db_exec(
  sql: "SELECT date, COUNT(*) as order_count FROM ANALYTICS.MARTS.orders WHERE date >= '2024-01-01' GROUP BY date",
  intent: "Get daily order counts to identify when the decrease started",
  tags: ["domain:orders", "analysis:trend"]
)
```
Returns:
```json
{
  "query_id": "abc-123-def",
  "columns": ["date", "order_count"],
  "rows": [...],
  "status": "success",
  "_hint": "Record your analysis with db_interpret(query_id: \"abc-123-def\")"
}
```
The query_id links all subsequent operations (interpretation, annotations) to this query.
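Conceptually, the observer behaves like a log keyed by `query_id`. A toy in-memory Python sketch of that idea (hypothetical structures, not the MCP server's implementation):

```python
import uuid

class ObserverLog:
    """Toy model of how query_id ties a query to its later interpretation."""

    def __init__(self) -> None:
        self.queries: dict[str, dict] = {}

    def record_query(self, sql: str, intent: str) -> str:
        # Each query gets a unique id that later operations reference.
        query_id = str(uuid.uuid4())
        self.queries[query_id] = {"sql": sql, "intent": intent, "interpretation": None}
        return query_id

    def record_interpretation(self, query_id: str, interpretation: str) -> None:
        if query_id not in self.queries:
            raise KeyError(f"unknown query_id: {query_id}")
        self.queries[query_id]["interpretation"] = interpretation

log = ObserverLog()
qid = log.record_query("SELECT 1", intent="smoke test")
log.record_interpretation(qid, "Returns a single row; connection works.")
```

The real server persists this linkage so humans can review and annotate each query alongside its interpretation.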
**IMPORTANT**: After analyzing query results, you MUST call `db_interpret` to record your interpretation. This is required for observability - skipping this step means your analysis cannot be reviewed or annotated by humans.
After receiving results, record your interpretation:
```
db_interpret(
  query_id: "abc-123-def",
  interpretation: "Orders dropped 23% starting January 15th, coinciding with the website redesign launch. The decrease is concentrated in mobile orders.",
  visualization_hints: [
    {
      "chart_type": "line",
      "x_column": "date",
      "y_column": "order_count",
      "title": "Daily Order Trend"
    }
  ],
  tags: ["orders:trend", "viz:line"],
  issue_ref: "data-tools-yf9"  # Include if working on a tracked issue (GitHub, Linear, etc.)
)
```
Visualization Hint Types:
- `line` - Time series, trends
- `bar` - Comparisons, categories
- `pie` - Proportions, distributions
- `table` - Detailed breakdowns

**Tag Guidelines:**
Use namespaced tags to keep interpretations organized and searchable. Limit to 2-3 tags per interpretation.
| Prefix | Purpose | Examples |
|---|---|---|
| `domain:` | Data domain being analyzed | `domain:orders`, `domain:users`, `domain:billing` |
| `analysis:` | Type of analysis | `analysis:trend`, `analysis:comparison`, `analysis:anomaly` |
| `viz:` | Visualization type | `viz:line`, `viz:bar`, `viz:table` |
| `quality:` | Data quality focus | `quality:nulls`, `quality:duplicates` |
| `period:` | Time period focus | `period:weekly`, `period:nov-2024` |
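These prefix rules can be checked mechanically before recording an interpretation. A small Python sketch (the helper and its error strings are illustrative, not part of the DTS tools):

```python
# Mirrors the prefix table above.
ALLOWED_PREFIXES = {"domain", "analysis", "viz", "quality", "period"}

def validate_tags(tags: list[str]) -> list[str]:
    """Return a list of problems with a tag set; an empty list means valid."""
    problems: list[str] = []
    if len(tags) > 3:
        problems.append(f"too many tags ({len(tags)}); limit to 2-3")
    for tag in tags:
        prefix, _, rest = tag.partition(":")
        if not rest or prefix not in ALLOWED_PREFIXES:
            problems.append(f"tag '{tag}' is not namespaced with a known prefix")
    return problems
```

For example, `validate_tags(["domain:orders", "analysis:trend"])` returns no problems, while an un-namespaced tag like `november-analysis` is flagged.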
**Good tags:**
- `domain:teamwork`, `analysis:trend`, `period:nov-2024`
- `domain:orders`, `analysis:comparison`, `viz:bar`

**Avoid:**
- `november-analysis`, `feature-breakdown`, `engagement-analysis`
- `dashboard-requirements`, `weekly-engagement`

Decide whether to continue investigating or return results:
Continue when:
Stop when:
Document your decision:
```
observer_context(
  context_type: "observation",
  content: "Initial analysis shows mobile orders down 23%. Need to investigate:
    1. Mobile vs desktop breakdown
    2. Whether specific pages are affected
    Proceeding with follow-up queries.",
  tags: ["domain:orders", "analysis:comparison"]
)
```
When returning results, provide a structured summary:
```markdown
## Query Workflow Summary

### Session Context
- Session ID: [from observer]
- Queries Executed: 3
- Models Analyzed: orders, customers, products

### Key Findings
- Orders dropped 23% starting January 15th
- Mobile orders most affected (-31%)
- Desktop orders relatively stable (-8%)

### Data Quality Notes
- 47 orders missing customer_id (0.3% of total)

### Suggested Follow-ups
- Investigate mobile checkout conversion rates
- Check for correlating support tickets

### Pending Actions
- [ ] Review findings with product team
- [ ] Check mobile analytics data
```
Human annotations (comments) can arrive at any time. Check for them:
```
observer_get_annotations(pending_only: true)
```
Read the comment content to determine the appropriate response:
Example handling:
```
# Check for annotations
annotations = observer_get_annotations(pending_only: true)

for annotation in annotations:
    observer_context(
        context_type: "reasoning",
        content: f"Received feedback: {annotation.content}. Evaluating response.",
        tags: ["feedback-received"]
    )
    # Determine response based on comment content
```

Record the resulting decision with an `observer_context` of type `"observation"`.

For detailed guidance on specific topics: