# Claude-Data-Wrangler
Load a flat dataset (CSV / Parquet / JSON / Excel) into a SQL database. Either uses an existing configured database connection or walks the user through configuring a new one (PostgreSQL, MySQL, SQLite, MSSQL, DuckDB). Creates the table if absent, validates schema, handles primary keys and indexes, and loads with chunked inserts for large files.
```shell
npx claudepluginhub danielrosehill/claude-code-plugins --plugin Claude-Data-Wrangler
```

This skill uses the workspace's default tool permissions.
Load a flat-file dataset into a SQL database.
Per-backend drivers:

- PostgreSQL: psycopg2 or psycopg[binary].
- MySQL: PyMySQL or mysql-connector-python.
- MSSQL: pyodbc.

Check $CLAUDE_USER_DATA/Claude-Data-Wrangler/config.json for saved database profiles; list them and let the user pick. Resolve secrets from an environment variable, op-vault, or a prompt at connection time. Never hard-code passwords in files. Save the non-secret parts of the profile to $CLAUDE_USER_DATA/Claude-Data-Wrangler/config.json if the user wants to reuse it.

Write modes: fail (default), append, replace, upsert (requires a primary key).

Type mapping (pandas dtype to SQL type):

- int64 → BIGINT / INTEGER
- float64 → DOUBLE PRECISION / REAL
- object (string) → TEXT / VARCHAR(n), sized from the maximum observed length, padded
- datetime64[ns] → TIMESTAMP
- bool → BOOLEAN / BIT
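The dtype-to-SQL mapping above can be sketched as a small helper. The function name, the 20% padding factor for VARCHAR sizing, and the Postgres-flavored type names are illustrative assumptions, not the skill's actual implementation:

```python
# Sketch of the pandas-dtype -> SQL-type mapping; the 20% VARCHAR padding
# factor is an assumed interpretation of "padded".
import pandas as pd

def infer_sql_type(series: pd.Series, dialect: str = "postgresql") -> str:
    """Map a pandas dtype to a SQL column type for the given dialect."""
    dtype = str(series.dtype)
    if dtype == "int64":
        return "BIGINT"
    if dtype == "float64":
        return "DOUBLE PRECISION" if dialect == "postgresql" else "REAL"
    if dtype.startswith("datetime64"):
        return "TIMESTAMP"
    if dtype == "bool":
        return "BOOLEAN"
    # object/string: size from max observed length, padded
    max_len = int(series.astype(str).str.len().max() or 1)
    return f"VARCHAR({int(max_len * 1.2) + 1})"

df = pd.DataFrame({"id": [1, 2], "name": ["alice", "bob"], "score": [0.5, 0.7]})
cols = {c: infer_sql_type(df[c]) for c in df.columns}
```

Columns where this inference is uncertain (mixed-type object columns, all-null columns) are exactly the ones the skill asks the user to confirm.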
- Ask the user to confirm columns where type inference is uncertain.
- Create the table with CREATE TABLE IF NOT EXISTS ... (or CREATE OR REPLACE if requested). Respect primary-key, not-null, and index requests.
- Load via to_sql(..., method='multi') or backend-native bulk loaders (COPY for Postgres, LOAD DATA INFILE for MySQL) for large files.
- Verify: confirm SELECT COUNT(*) matches the source row count; sample a few rows and compare.

Config file format, saved at $CLAUDE_USER_DATA/Claude-Data-Wrangler/config.json:
```json
{
  "sql_profiles": {
    "local-postgres": {
      "backend": "postgresql",
      "host": "localhost",
      "port": 5432,
      "database": "analytics",
      "user": "daniel",
      "password_ref": {"type": "env", "name": "PGPASSWORD"}
    },
    "local-sqlite": {
      "backend": "sqlite",
      "path": "~/Documents/data/warehouse.db"
    }
  }
}
```
password_ref options:

- `{"type": "env", "name": "PGPASSWORD"}`: read from an environment variable at connect time.
- `{"type": "op", "reference": "op://Private/postgres/password"}`: fetch via the 1Password CLI.
- `{"type": "prompt"}`: prompt the user each run.

Never write plaintext passwords into this file.
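Resolving a `password_ref` at connect time could look like the following sketch. `resolve_password` is a hypothetical helper, and the `op` branch assumes the 1Password CLI (`op read`) is installed and signed in:

```python
# Hedged sketch of resolving a password_ref at connect time; the helper name
# is an assumption, not the skill's actual API.
import os
import subprocess
from getpass import getpass

def resolve_password(ref: dict) -> str:
    if ref["type"] == "env":
        return os.environ[ref["name"]]
    if ref["type"] == "op":
        # `op read op://vault/item/field` prints the secret to stdout
        return subprocess.run(
            ["op", "read", ref["reference"]],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    if ref["type"] == "prompt":
        return getpass("Database password: ")
    raise ValueError(f"unknown password_ref type: {ref['type']}")
```

Keeping resolution in one place makes it easy to guarantee the secret never lands in the config file.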
```shell
pip install pandas sqlalchemy

# per backend
pip install "psycopg[binary]"  # postgres
pip install pymysql            # mysql
pip install pyodbc             # mssql
pip install duckdb             # duckdb
```
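The chunked `to_sql` load plus the count check can be sketched end to end. This uses an in-memory SQLite database so it runs without a server; the table name and chunk size are illustrative:

```python
# Minimal chunked-load sketch; `chunksize` bounds memory on large files and
# if_exists maps to the fail/append/replace write modes described above.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")
df = pd.DataFrame({"id": range(10), "value": [i * 0.5 for i in range(10)]})

df.to_sql("measurements", engine, if_exists="fail", index=False,
          chunksize=4, method="multi")

# Verify: row count in the table matches the source frame
with engine.connect() as conn:
    n = conn.execute(text("SELECT COUNT(*) FROM measurements")).scalar()
assert n == len(df)
```

For the upsert mode, `to_sql` is not enough on its own; that path needs a primary key and a backend-specific ON CONFLICT / ON DUPLICATE KEY statement.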
For large files, prefer backend-native bulk loaders (COPY FROM, LOAD DATA) rather than row-by-row INSERT, and stream from Parquet directly where possible. If a column looks like PII (pii-flag), warn before loading into a shared database.
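The PII warning could rest on a simple column-name heuristic like the sketch below; the regex patterns and the `pii_columns` helper are assumptions, not the skill's actual detector:

```python
# Illustrative column-name heuristic for the PII warning; the pattern list
# is an assumption and is not exhaustive.
import re

PII_PATTERNS = re.compile(r"(email|phone|ssn|passport|dob|address)", re.I)

def pii_columns(columns: list[str]) -> list[str]:
    """Return column names that look like PII and should trigger a warning."""
    return [c for c in columns if PII_PATTERNS.search(c)]

flagged = pii_columns(["id", "Email_Address", "score", "home_phone"])
```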