Use when working with SQLite databases in DataPeeker analysis sessions - querying data, importing CSVs, exploring schemas, formatting output, or optimizing performance. Provides task-oriented guidance for effective SQLite CLI usage in data analysis workflows.
/plugin marketplace add tilmon-engineering/claude-skills
/plugin install datapeeker@tilmon-eng-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled guidance files: exploring-schema.md, formatting-output.md, importing-data.md, invoking-cli.md, optimizing-performance.md, writing-queries.md

SQLite is the primary database for DataPeeker analysis sessions. This skill provides task-oriented guidance for common SQLite operations during data analysis.
Core principle: Explore schema first, format output for readability, write efficient queries, verify all operations.
Use this skill when you need to:
- Explore database schemas (tables, columns, indexes)
- Write analytical queries against SQLite
- Import CSV files and verify the results
- Format query output for readability or export
- Diagnose and fix slow queries
- Choose how to invoke the sqlite3 CLI
When NOT to use:
- Database-agnostic SQL patterns (use the understanding-data skill for patterns that work across all SQL databases)
- Data cleaning workflows (use the cleaning-data skill + sub-agents)

Database Path: data/analytics.db (relative from project root)
Table Naming: raw_* for imported data, clean_* for cleaned data
Single Database: All tables in one file per analysis session
Example workflow:
CSV file → raw_sales → clean_sales → Analysis queries
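A minimal sketch of that flow, assuming an illustrative sales.csv with customer, amount, and sale_date columns:

```sql
-- Inside: sqlite3 data/analytics.db
.mode csv
.import sales.csv raw_sales          -- new table: the first CSV row becomes the column names

-- Build the cleaned table (hypothetical cleanup: trim text, cast amounts to REAL)
CREATE TABLE clean_sales AS
SELECT TRIM(customer)        AS customer,
       CAST(amount AS REAL)  AS amount,
       sale_date
FROM raw_sales
WHERE amount <> '';                  -- CSV imports store empty fields as '' rather than NULL

-- Analysis query against the cleaned table
SELECT sale_date, SUM(amount) AS total
FROM clean_sales
GROUP BY sale_date
ORDER BY sale_date;
```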
| Task | Guidance File |
|---|---|
| Understand what tables/columns exist | @./exploring-schema.md |
| Make query results readable | @./formatting-output.md |
| Write analytical queries | @./writing-queries.md |
| Load CSV files | @./importing-data.md |
| Fix slow queries | @./optimizing-performance.md |
| Choose CLI invocation method | @./invoking-cli.md |
Before writing queries, understand the database structure.
See @./exploring-schema.md for details.
When: Starting analysis, unfamiliar database, before writing joins
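For example, a quick first pass over an unfamiliar database might look like this (raw_sales is an illustrative table name):

```sql
.tables                           -- list all tables
.schema raw_sales                 -- show the CREATE TABLE statement
PRAGMA table_info(raw_sales);     -- column names, declared types, NOT NULL flags
SELECT COUNT(*) FROM raw_sales;   -- rough size before writing joins
```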
Make query results readable for analysis.
See @./formatting-output.md for details.
When: Query results hard to read, need specific format for export, preparing reports
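A typical readability setup, sketched with illustrative table names:

```sql
.headers on                       -- show column names
.mode column                      -- align values in columns
SELECT * FROM raw_sales LIMIT 5;

-- For export rather than reading, switch to CSV output
.mode csv
.output results.csv
SELECT * FROM clean_sales;
.output stdout                    -- return output to the terminal
```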
SQLite-specific query patterns and conventions.
See @./writing-queries.md for details.
See also: writing-queries and understanding-data skills for database-agnostic SQL patterns and data profiling approaches. This guidance focuses on SQLite-specific syntax, CLI usage, and optimizations.
When: Need SQLite-specific syntax, DataPeeker query conventions, date formatting with STRFTIME
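As one example of the STRFTIME convention, monthly totals over an assumed clean_sales table with ISO-8601 text dates:

```sql
SELECT STRFTIME('%Y-%m', sale_date) AS month,
       SUM(amount)                  AS total
FROM clean_sales
GROUP BY month
ORDER BY month;
```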
Load CSV files and verify import success.
See @./importing-data.md for details.
When: Loading new data, Phase 4 of importing-data skill, verifying data loaded correctly
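A hedged import sketch (orders.csv and its columns are assumptions):

```sql
-- Pre-creating the table preserves real column types; importing into a brand-new table makes every column TEXT
CREATE TABLE raw_orders (order_id INTEGER, customer TEXT, amount REAL, order_date TEXT);

.mode csv
.import orders.csv raw_orders
-- Caveat: when the table already exists, .import also inserts the header row;
-- newer sqlite3 builds can skip it with: .import --csv --skip 1 orders.csv raw_orders

-- Verify immediately (see the checklist below)
SELECT COUNT(*) FROM raw_orders;
SELECT * FROM raw_orders LIMIT 3;
```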
Diagnose and fix slow queries.
See @./optimizing-performance.md for details.
When: Query takes >1 second, loading large datasets, repeated similar queries
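A sketch of the diagnose-then-index loop (table, column, and index names are illustrative):

```sql
.timer on
EXPLAIN QUERY PLAN
SELECT customer, SUM(amount)
FROM clean_sales
WHERE sale_date >= '2024-01-01'
GROUP BY customer;
-- A "SCAN clean_sales" line indicates a full-table scan

CREATE INDEX idx_clean_sales_date ON clean_sales(sale_date);
-- Re-run the EXPLAIN QUERY PLAN: "SEARCH ... USING INDEX idx_clean_sales_date" confirms the index is used
```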
Choose the right method to run sqlite3 commands.
See @./invoking-cli.md for details.
When: Starting any SQLite operation, unsure which invocation to use
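The two main patterns, sketched against the data/analytics.db convention (query contents are illustrative):

```bash
# Interactive, for exploration
sqlite3 data/analytics.db

# One-off argument, for quick checks
sqlite3 data/analytics.db "SELECT COUNT(*) FROM raw_sales;"

# Heredoc, for repeatable scripts
sqlite3 data/analytics.db <<'SQL'
.headers on
.mode column
SELECT * FROM clean_sales LIMIT 5;
SQL
```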
| Mistake | Fix |
|---|---|
| Writing queries without exploring schema | Always run .tables and .schema first |
| Poor output formatting (hard to read results) | Use .mode column and .headers on for readability |
| Ignoring NULL values in calculations | Use COUNT(*) - COUNT(col) for NULL counting |
| Integer division losing decimals | Use 100.0 (not 100) for percentage calculations |
| Slow queries without diagnosis | Run EXPLAIN QUERY PLAN before optimizing |
| Assuming import succeeded | Always verify with SELECT COUNT(*) after import |
| Using wrong CLI invocation pattern | Interactive for exploration, heredoc for scripts |
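Two of the fixes above in context (raw_customers and email are illustrative names):

```sql
-- NULL counting: COUNT(col) skips NULLs, so the difference is the NULL count
SELECT COUNT(*) - COUNT(email) AS null_emails FROM raw_customers;

-- Percentages: 100.0 forces floating-point division; 100 would truncate to an integer
SELECT ROUND(100.0 * COUNT(email) / COUNT(*), 1) AS pct_with_email FROM raw_customers;
```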
After any import operation:
-- 1. Verify row count matches expectation
SELECT COUNT(*) FROM raw_table;
-- 2. Check sample data looks correct
SELECT * FROM raw_table LIMIT 5;
-- 3. Verify no unexpected NULLs
SELECT COUNT(*) - COUNT(critical_column) FROM raw_table;
After writing a complex query:
-- 1. Check query plan
EXPLAIN QUERY PLAN SELECT ...;
-- 2. Time the query
.timer on
SELECT ...;
A systematic approach prevents the common mistakes above: writing queries blind, unreadable output, unverified imports, and undiagnosed slow queries.
Following this skill keeps DataPeeker SQLite work consistent: explore the schema first, format output for readability, write queries efficiently, and verify every operation.