Before writing any data analysis code, evaluate:

- Will this data be queried more than once?
- Would a new question mean re-processing the raw data?
- Do multiple files or sources need to be correlated?
- Is custom parsing code about to be written?

If the answer to any question above is YES, use SQLite. Do not write custom parsing code.
```bash
# Custom code for every query:
cat data.json | jq '.[] | select(.status=="failed")' | jq -r '.error_type' | sort | uniq -c

# SQL does the work:
sqlite3 data.db "SELECT error_type, COUNT(*) FROM errors WHERE status='failed' GROUP BY error_type"
```
SQLite is just a file -- no server, no setup, zero dependencies. Apply it when custom parsing code would otherwise be written or data would be re-processed for each query.
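A quick illustration of the zero-setup claim (the file and table names here are illustrative):

```bash
# No server to start: creating the file IS the setup
sqlite3 /tmp/demo.db "CREATE TABLE notes (body TEXT); INSERT INTO notes VALUES ('hello');"
sqlite3 /tmp/demo.db "SELECT body FROM notes"   # -> hello
ls -l /tmp/demo.db                              # just an ordinary file
```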
Apply when ANY of these conditions hold:

- The data will be queried more than once
- There are hundreds of records or more to filter and aggregate
- Multiple files or sources need to be correlated
- Processing state must persist across sessions
Skip SQLite when ALL of these are true:

- The dataset is tiny (a few dozen records)
- It will be read once and never queried again
- No correlation across files is needed

For tiny datasets with simple access, JSON/grep is fine.
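For contrast, here is the acceptable one-off case -- one small file, one question, asked once (file name illustrative):

```bash
# A one-time look at a handful of records -- jq is enough here
jq -r '.[].status' results.json | sort | uniq -c
```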
STOP and use SQLite when about to:

- Write custom parsing code to answer a one-off question
- Re-process the same data for each new question
- Correlate multiple files by hand with grep
- Track processing state in an ad-hoc JSON file

All of these mean: load into SQLite once, query with SQL (sketched below).
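A minimal sketch of that load-once pattern, assuming two JSON log files with ISO-8601 `ts` and `msg` fields (all names here are illustrative):

```bash
# Load both files once...
sqlite3 logs.db <<'EOF'
CREATE TABLE app (ts TEXT, msg TEXT);
CREATE TABLE api (ts TEXT, msg TEXT);
INSERT INTO app SELECT json_extract(value, '$.ts'), json_extract(value, '$.msg')
FROM json_each(readfile('app.json'));
INSERT INTO api SELECT json_extract(value, '$.ts'), json_extract(value, '$.msg')
FROM json_each(readfile('api.json'));
EOF

# ...then correlating by time is a JOIN, not a manual grep session
sqlite3 logs.db "SELECT app.ts, app.msg, api.msg
                 FROM app JOIN api
                 ON datetime(api.ts) BETWEEN datetime(app.ts)
                                         AND datetime(app.ts, '+5 seconds')"
```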
| Scenario | Tool | Why |
|---|---|---|
| 50 test results, one-time summary | Python/jq | Fast, appropriate |
| 200+ test results, find flaky tests | SQLite | GROUP BY simpler than code |
| 3 log files, correlate by time | SQLite | JOIN simpler than manual grep |
| Track 1000+ file processing state | SQLite | Queries beat JSON parsing |
Rule of thumb: If parsing code is being written or data is being re-processed, use SQLite instead.
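As a sketch of the flaky-test row above -- assuming a `results` table with one row per run and `name`/`status` columns, like the one built in the workflow below:

```bash
# Tests that both passed and failed across runs are flaky candidates
sqlite3 results.db \
  "SELECT name, COUNT(*) AS runs, SUM(status = 'failed') AS failures
   FROM results
   GROUP BY name
   HAVING COUNT(DISTINCT status) > 1
   ORDER BY failures DESC"
```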
Connecting is a single command -- the file is created on first use:

```bash
sqlite3 ~/.claude-logs/project.db
```

Check whether sqlite-utils is available before using it:

```bash
command -v sqlite-utils >/dev/null 2>&1 && echo "available" || echo "not installed"
```
If sqlite-utils is available:

```bash
sqlite-utils insert data.db table_name data.json --pk=id
sqlite-utils query data.db "SELECT * FROM table_name"
```
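After an import it is worth confirming the inferred schema. These subcommands exist in current sqlite-utils releases, but verify against the installed version:

```bash
sqlite-utils tables data.db --counts   # table names with row counts
sqlite-utils schema data.db            # the CREATE TABLE statements it inferred
```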
If sqlite-utils is NOT available, fall back to sqlite3 with manual import. Note that the CLI's `.import` only handles delimited formats like CSV, so JSON goes through `json_each` instead:

```bash
sqlite3 data.db <<'EOF'
CREATE TABLE IF NOT EXISTS results (status TEXT, error_message TEXT);
INSERT INTO results (status, error_message)
SELECT json_extract(value, '$.status'),
       json_extract(value, '$.error_message')
FROM json_each(readfile('data.json'));
EOF
```
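Either import path can be sanity-checked with a row count (table name as created above):

```bash
sqlite3 data.db "SELECT COUNT(*) FROM results"
```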
Alternatively, install sqlite-utils: `uv tool install sqlite-utils`
```
~/.claude-logs/<project-name>.db   # persists across sessions
```
```bash
# 1. Connect
mkdir -p ~/.claude-logs   # ensure the directory exists on first use
PROJECT=$(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
sqlite3 ~/.claude-logs/"$PROJECT".db
```
# 2. Create table (first time)
CREATE TABLE results (
id INTEGER PRIMARY KEY,
name TEXT,
status TEXT,
duration_ms INTEGER
);
# 3. Load data
INSERT INTO results (name, status, duration_ms)
SELECT json_extract(value, '$.name'),
json_extract(value, '$.status'),
json_extract(value, '$.duration_ms')
FROM json_each(readfile('data.json'));
# 4. Query (SQL does the work)
SELECT status, COUNT(*), AVG(duration_ms)
FROM results
GROUP BY status;
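Each follow-up question is then one more query against the same database, not another parsing script -- for example, the slowest tests (columns as created in step 2):

```bash
sqlite3 ~/.claude-logs/"$PROJECT".db \
  "SELECT name, duration_ms FROM results ORDER BY duration_ms DESC LIMIT 10"
```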
From: "Process data once and done" To: "Make data queryable"
Benefits:
For more, see `./references/patterns.md` -- Python vs SQL side-by-side comparisons and real-world examples (test analysis, error correlation, file processing state).