Generates and executes LLM-written SQL or code to query structured data (databases, CSVs) for complex cross-tabulation, filtering, and slicing, then runs parallel research on results.
```
npx claudepluginhub mlld-lang/mlld --plugin mlld
```

This skill uses the workspace's default tool permissions.
The question can't be answered by a single lookup. It requires cross-tabulation — slicing and filtering across multiple criteria, potentially spanning structured data (databases, CSVs) and unstructured data (files, articles, logs).
Examples: questions like "users with high activity but declining engagement scores" (the question used in the first template below), where the answer comes from joining, grouping, and filtering rather than from a single lookup.
No pre-built tool handles these; an LLM can express them as SQL in ~200 tokens.
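To make that concrete, here is a hand-written sketch of the kind of query the LLM generates: a join, a group-by, and a couple of filters, answering a simplified variant of the question above (high activity, low recent engagement). The `users`/`events` tables, column names, and thresholds are hypothetical and used only for illustration; in the skill itself the LLM derives the query from the real schema.

```mlld
>> Illustrative sketch only. Assumes hypothetical tables users(id, name) and
>> events(user_id, engagement_score, created_at); in the skill, the LLM writes this from .schema.
var @sql = "SELECT u.id, u.name, COUNT(*) AS events_90d, AVG(e.engagement_score) AS avg_score FROM users u JOIN events e ON e.user_id = u.id WHERE e.created_at >= date('now', '-90 days') GROUP BY u.id, u.name HAVING COUNT(*) > 100 AND AVG(e.engagement_score) < 0.4 ORDER BY events_90d DESC LIMIT 25"
>> sh { } rather than cmd { } because the SQL contains > comparisons (see the notes at the end).
var @hits = sh { sqlite3 -json app.db "@sql" } | @parse
```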
For simple parallel processing (same operation on many items), see /mlld:fanout.
Schema → LLM writes query → code executes → parallel deep research → synthesis
Each step has a different cost profile:
| Step | Who | Token cost |
|---|---|---|
| Read schema | code | 0 |
| Write query | LLM (cheap model) | ~500 |
| Execute query | code | 0 |
| Research each result | LLM (parallel) | N × ~800 |
| Synthesize | LLM | ~1500 |
The database does the heavy filtering. The LLM only sees the matches.
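With the LIMIT 25 used in the templates below, for example, a full run costs roughly 500 + 25 × 800 + 1,500 ≈ 22,000 LLM tokens, regardless of how many rows the underlying tables hold.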
LLM writes SQL against a database, code executes, parallel research on results.

```mlld
var @schema = cmd { sqlite3 app.db ".schema" }
exe @claude(prompt) = cmd { claude -p "@prompt" }
exe @haiku(prompt) = cmd { claude -p "@prompt" --model haiku }
>> LLM writes the query — cheap model, ~200 output tokens
var @q = @haiku(`You have a SQLite database.
<schema>
@schema
</schema>
Write a query for: users with high activity but declining engagement scores
Requirements:
- Return id, name, and any columns useful for further analysis
- LIMIT 25
- Explain your reasoning
Return JSON: { "sql": "SELECT ...", "rationale": "..." }`) | @parse.llm
log `Query: @q.sql`
log `Rationale: @q.rationale`
>> Execute — zero LLM tokens
var @hits = cmd { sqlite3 -json app.db "@q.sql" } | @parse
log `Found @hits.length results`
>> Parallel deep research on each result
var @results = for parallel(4) @row in @hits [
=> @claude(`Analyze this user's behavior and recommend interventions:
<user>@row</user>
JSON: { user_id, risk_factors, recommendation }`) | @parse.llm
]
>> Synthesize
var @report = @claude(`Synthesize these analyses into a prioritized report:
@results
JSON: { summary, priority_actions[], patterns_found[] }`) | @parse.llm
show @report
```
Combine DB results with file searches, news, logs, or other unstructured data for richer research.

```mlld
var @schema = cmd { sqlite3 app.db ".schema" }
exe @claude(prompt) = cmd { claude -p "@prompt" }
exe @haiku(prompt) = cmd { claude -p "@prompt" --model haiku }
var @q = @haiku(`<schema>@schema</schema>
Query for: @question
JSON: { sql, rationale }`) | @parse.llm
var @hits = cmd { sqlite3 -json app.db "@q.sql" } | @parse
>> Enrich each result with unstructured context
var @results = for parallel(4) @row in @hits [
let @logs = cmd { grep -l "@row.name" logs/*.log | head -3 }
let @logContent = when @logs [
"" => "No logs found."
* => <@logs>
]
let @docs = cmd { grep -rl "@row.name" docs/ | head -3 }
let @docContent = when @docs [
"" => "No docs found."
* => <@docs>
]
=> @claude(`Analyze based on structured data and context:
<data>@row</data>
<logs>@logContent</logs>
<docs>@docContent</docs>
JSON: { id, assessment, evidence[], recommendation }`) | @parse.llm
]
show @results
```
When the first query might miss, add a refinement step.

```mlld
var @schema = cmd { sqlite3 app.db ".schema" }
exe @claude(prompt) = cmd { claude -p "@prompt" }
exe @haiku(prompt) = cmd { claude -p "@prompt" --model haiku }
>> First attempt
var @q1 = @haiku(`<schema>@schema</schema>
Query: @question
JSON: { sql, rationale }`) | @parse.llm
var @hits1 = cmd { sqlite3 -json app.db "@q1.sql" } | @parse
log `First query: @hits1.length results`
>> LLM reviews results and decides if refinement is needed
var @review = @haiku(`You queried: @q1.sql
Got @hits1.length results. Sample: @hits1.slice(0, 3)
Original question: @question
Are these results good, or should the query be refined?
JSON: { "good": true/false, "refined_sql": "..." if not good, "reason": "..." }`) | @parse.llm
var @hits = when @review.good [
true => @hits1
* => cmd { sqlite3 -json app.db "@review.refined_sql" } | @parse
]
log `Final result set: @hits.length rows`
>> Research phase on final results
var @results = for parallel(4) @row in @hits [
=> @claude(`Analyze: @row
JSON: { id, assessment, recommendation }`) | @parse.llm
]
show @results
```
A few practical notes:

- Run queries read-only: `PRAGMA query_only = ON` or a read-only connection.
- Ask for a `LIMIT` in the prompt instructions, and enforce a cap in code if the LLM omits it.
- Give the LLM the schema (`.schema`), not a data dump. Column names and types are enough to write queries.
- `log` the generated SQL before executing; the user should see what's running.
- Reject write statements (`INSERT`, `UPDATE`, `DELETE`, `DROP`) before executing (a combined guard is sketched after this list).
- `cmd { sqlite3 -json ... }` returns JSON text; pipe it through `| @parse` to get objects.
- Use `@parse.llm` for LLM responses (it handles markdown fences) and `@parse` for clean JSON.
- Use `sh { }` instead of `cmd { }` if your SQL contains characters that `cmd` rejects (`>`, `&&`, `;`).
- Keep the double quotes around `"@q.sql"` in `cmd` blocks. If the SQL itself contains double quotes, write it to a temp file and use `sqlite3 app.db < tmp/query.sql`.
- Simple access like `@row.user_id` works, and `@row.some.nested.field` chains field access; use `js { }` for complex field access.
- Use `@haiku` for query generation (cheap, fast, good enough for SQL) and `@claude` (sonnet) for research and synthesis.
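Several of these combine naturally into a small pre-execution guard. The sketch below is one way to write it, under a few assumptions: `@q.sql` comes from the query step in the templates above, the generated SQL contains no embedded double quotes (otherwise use the temp-file approach), and the keyword list and `LIMIT 25` fallback are illustrative choices rather than part of the skill.

```mlld
>> Hedged sketch: refuse writes, cap rows, and run read-only before executing @q.sql.
>> Assumes no double quotes inside the generated SQL; keyword list and LIMIT 25 are illustrative.
var @hits = sh {
  sql="@q.sql"
  if printf '%s' "$sql" | grep -qwiE 'insert|update|delete|drop|alter'; then
    # conservative: rejects anything that even mentions a write keyword
    printf '[]'
  else
    printf '%s' "$sql" | grep -qwi 'limit' || sql="$sql LIMIT 25"
    out=$(sqlite3 -json app.db "PRAGMA query_only = ON; $sql")
    printf '%s' "${out:-[]}"   # empty result sets become [] so @parse still gets valid JSON
  fi
} | @parse
log `Guarded query returned @hits.length rows`
```

Swapping this in for the plain `var @hits = cmd { sqlite3 -json app.db "@q.sql" } | @parse` line leaves the rest of each template unchanged.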