From glincker-claude-code-marketplace
Generates optimized SQL/NoSQL queries from natural language for PostgreSQL, MySQL, MongoDB, SQLite, Redis. Includes schema discovery, query explanations, index suggestions, and visual results.
npx claudepluginhub joshuarweaver/cascade-code-general-misc-4 --plugin glincker-claude-code-marketplace
**⚡ UNIQUE FEATURE**: Query any database using natural language - automatically generates optimized SQL/NoSQL queries, explains query plans, suggests indexes, and visualizes results. Supports PostgreSQL, MySQL, MongoDB, SQLite, and more.
Transform natural language into optimized database queries:
SELECT * FROM users WHERE created_at >= NOW() - INTERVAL '1 month';
Identify Database:
Ask user:
- Database type (PostgreSQL, MySQL, MongoDB, SQLite, etc.)
- Connection method (local, remote, Docker, MCP server)
- Connection string or credentials
Test Connection:
# PostgreSQL
psql -h localhost -U user -d database -c "SELECT version();"
# MySQL
mysql -h localhost -u user -p database -e "SELECT VERSION();"
# MongoDB
mongosh "mongodb://localhost:27017/database" --eval "db.version()"
# SQLite
sqlite3 database.db "SELECT sqlite_version();"
Discover Schema:
# PostgreSQL: Get all tables and columns
psql -d database -c "\dt"
psql -d database -c "\d+ table_name"
# MySQL: Show database structure
mysql database -e "SHOW TABLES;"
mysql database -e "DESCRIBE table_name;"
# MongoDB: List collections and sample documents
mongosh database --eval "db.getCollectionNames()"
mongosh database --eval "db.collection.findOne()"
Build Schema Cache:
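One way to sketch the cache step, using SQLite's catalog as a stand-in (the `build_schema_cache` helper and table names are illustrative, not the skill's actual code):

```python
import sqlite3

def build_schema_cache(conn):
    """Cache table -> column/type mapping so query generation
    does not have to re-query the catalog on every request."""
    cache = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # Table names come from the catalog itself, so interpolation is safe here.
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        cache[table] = {name: ctype for _, name, ctype, *_ in cols}
    return cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, status TEXT)")
print(build_schema_cache(conn))
# {'users': {'id': 'INTEGER', 'email': 'TEXT', 'status': 'TEXT'}}
```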
When user makes a request:
Parse Intent:
Analyze the request:
- Action: SELECT, INSERT, UPDATE, DELETE, aggregation
- Entities: Which tables/collections
- Conditions: WHERE clauses
- Aggregations: COUNT, SUM, AVG, GROUP BY
- Sorting: ORDER BY
- Limits: TOP N, pagination
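As a toy illustration of the parsing step above, a keyword heuristic (a real implementation would rely on the model itself, not string matching; `parse_intent` is hypothetical):

```python
def parse_intent(request: str) -> dict:
    """Classify a natural-language request into action/aggregation/limit."""
    text = request.lower()
    intent = {"action": "SELECT", "aggregation": None, "limit": None}
    if any(w in text for w in ("delete", "remove")):
        intent["action"] = "DELETE"
    elif any(w in text for w in ("update", "change", "set")):
        intent["action"] = "UPDATE"
    if "count" in text or "how many" in text:
        intent["aggregation"] = "COUNT"
    if "top" in text:
        # "top 10" -> LIMIT 10
        for token in text.split():
            if token.isdigit():
                intent["limit"] = int(token)
                break
    return intent

print(parse_intent("Find top 10 customers by revenue"))
# {'action': 'SELECT', 'aggregation': None, 'limit': 10}
```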
Generate Query:
Example 1: "Show me all active users"
-- PostgreSQL/MySQL
SELECT * FROM users WHERE status = 'active';
Example 2: "Count orders by status for last 7 days"
SELECT status, COUNT(*) as count
FROM orders
WHERE created_at >= NOW() - INTERVAL '7 days'
GROUP BY status
ORDER BY count DESC;
Example 3: "Find top 10 customers by revenue"
SELECT
  c.name,
  c.email,
  SUM(o.total) AS revenue
FROM customers c
JOIN orders o ON c.id = o.customer_id
GROUP BY c.id, c.name, c.email
ORDER BY revenue DESC
LIMIT 10;
Example 4: MongoDB aggregation
db.orders.aggregate([
  { $match: { status: "completed" } },
  { $group: {
      _id: "$customer_id",
      total: { $sum: "$amount" }
  }},
  { $sort: { total: -1 } },
  { $limit: 10 }
])
Validate Query:
Before execution:
Analyze Query Plan:
-- PostgreSQL
EXPLAIN ANALYZE
SELECT * FROM users WHERE email LIKE '%@example.com';
Suggest Optimizations:
If sequential scan detected:
- "This query is scanning all rows. Consider adding an index:"
- CREATE INDEX idx_users_email ON users(email);
If N+1 query pattern:
- "Use JOIN instead of multiple queries"
- Show optimized version
If missing WHERE clause:
- "This will return all rows. Add filters or LIMIT?"
Rewrite for Performance:
-- Before (slow - the LOWER() call prevents a plain index on email from being used)
SELECT * FROM users WHERE LOWER(email) = 'user@example.com';
-- After (fast - uses the index, assuming emails are stored lowercase)
SELECT * FROM users WHERE email = 'user@example.com';
-- Or keep the case-insensitive match and index the expression instead:
-- CREATE INDEX idx_users_email_lower ON users (LOWER(email));
Determine Query Type:
Confirmation for Writes:
⚠️ This query will modify data:
UPDATE users SET status = 'inactive'
WHERE last_login < '2024-01-01'
Estimated affected rows: 1,247
Proceed? [yes/no]
Transaction Support:
BEGIN;
-- Execute query
-- Show results
-- Ask: COMMIT or ROLLBACK?
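The confirm-then-commit flow can be sketched like this (SQLite; `run_write_with_confirmation` and the `confirm` callback are illustrative stand-ins for the actual prompt):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO users (status) VALUES (?)", [("active",)] * 3)
conn.commit()

def run_write_with_confirmation(conn, sql, params, confirm):
    """Execute a write inside a transaction; COMMIT only if confirmed."""
    cur = conn.execute(sql, params)
    print(f"Estimated affected rows: {cur.rowcount}")
    if confirm(cur.rowcount):
        conn.commit()
    else:
        conn.rollback()

# Simulate the user answering "no": the update is rolled back.
run_write_with_confirmation(
    conn,
    "UPDATE users SET status = 'inactive' WHERE status = 'active'",
    (), confirm=lambda n: False,
)
print(conn.execute("SELECT COUNT(*) FROM users WHERE status = 'active'").fetchone()[0])
# 3 -> the rollback preserved the original rows
```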
Table Format (default):
┌────┬────────────┬──────────────────┬────────┐
│ id │ name       │ email            │ status │
├────┼────────────┼──────────────────┼────────┤
│ 1  │ John Doe   │ john@example.com │ active │
│ 2  │ Jane Smith │ jane@example.com │ active │
└────┴────────────┴──────────────────┴────────┘
2 rows returned in 0.023s
Chart Format (for aggregations):
Orders by Status:
pending   ████████████░░░░░░░░  62
completed ████████████████████ 128
cancelled ████░░░░░░░░░░░░░░░░  15
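A bar chart like the one above can be rendered with a small helper (a sketch, not the skill's actual code; `bar_chart` is hypothetical):

```python
def bar_chart(rows, width=20):
    """Render (label, count) pairs as fixed-width ASCII bars."""
    max_count = max(count for _, count in rows)
    label_w = max(len(label) for label, _ in rows)
    lines = []
    for label, count in rows:
        filled = round(width * count / max_count)
        bar = "█" * filled + "░" * (width - filled)
        lines.append(f"{label:<{label_w}} {bar} {count}")
    return "\n".join(lines)

print(bar_chart([("pending", 62), ("completed", 128), ("cancelled", 15)]))
```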
JSON Format (for APIs):
{
  "query": "SELECT * FROM users LIMIT 2",
  "execution_time": "0.023s",
  "row_count": 2,
  "results": [
    {"id": 1, "name": "John Doe", ...},
    {"id": 2, "name": "Jane Smith", ...}
  ]
}
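One way to build that envelope (a sketch against SQLite; `run_as_json` is illustrative, and the field names follow the example above):

```python
import json
import sqlite3
import time

def run_as_json(conn, query):
    """Execute a read query and wrap the results in a JSON envelope."""
    start = time.perf_counter()
    cur = conn.execute(query)
    cols = [d[0] for d in cur.description]
    rows = [dict(zip(cols, row)) for row in cur.fetchall()]
    return json.dumps({
        "query": query,
        "execution_time": f"{time.perf_counter() - start:.3f}s",
        "row_count": len(rows),
        "results": rows,
    }, indent=2)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("John Doe",), ("Jane Smith",)])
print(run_as_json(conn, "SELECT * FROM users LIMIT 2"))
```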
Export Options:
User: "Show me recent users"
Skill:
SELECT * FROM users
WHERE created_at >= NOW() - INTERVAL '7 days'
ORDER BY created_at DESC;
User: "Which products had the most revenue last quarter?"
Skill:
SELECT
  p.id,
  p.name,
  SUM(oi.quantity * oi.price) AS revenue,
  COUNT(DISTINCT o.id) AS order_count
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at >= DATE_TRUNC('quarter', NOW() - INTERVAL '3 months')
  AND o.created_at < DATE_TRUNC('quarter', NOW())
  AND o.status = 'completed'
GROUP BY p.id, p.name
ORDER BY revenue DESC
LIMIT 10;
User: "Why is this query slow?"
SELECT * FROM orders WHERE customer_name LIKE 'John%';
Skill:
⚠️ Performance Issue Detected:
Problem: Full table scan (10,485,234 rows)
Solution: Add an index on customer_name
CREATE INDEX idx_orders_customer_name ON orders(customer_name);
Expected improvement: 10,485,234 rows → ~42 rows
Estimated speed-up: 10,000x faster
Would you like me to create this index?
Create .database-query-config.yml:
databases:
  - name: production
    type: postgresql
    host: localhost
    port: 5432
    database: myapp
    user: readonly_user
    ssl: true
    read_only: true
  - name: analytics
    type: mongodb
    uri: mongodb://localhost:27017/analytics
  - name: cache
    type: redis
    host: localhost
    port: 6379

defaults:
  max_rows: 1000
  timeout: 30s
  explain_threshold: 1s  # Auto-explain queries slower than 1s
  auto_optimize: true

safety:
  require_confirmation_for_writes: true
  prevent_drop_table: true
  max_affected_rows: 10000
Connect to MCP database servers:
# Using PostgreSQL MCP server
mcp_servers:
  - name: postgres
    command: postgres-mcp
    args:
      - --connection-string
      - postgresql://user:pass@localhost/db
# Save favorite queries
claude db save "monthly_revenue" "SELECT..."
# Run saved query
claude db run monthly_revenue
-- Template: user_search
SELECT * FROM users
WHERE {{field}} = {{value}}
AND status = 'active';
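Note that substituting {{value}} directly into the SQL string invites injection. When executing a template, the value slot should become a bound parameter, while the field name is validated against a whitelist (identifiers cannot be bound). A sketch; `run_user_search` and `ALLOWED_FIELDS` are illustrative:

```python
import sqlite3

ALLOWED_FIELDS = {"email", "name"}  # e.g. taken from the schema cache

def run_user_search(conn, field, value):
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"unknown field: {field}")
    # field is whitelist-validated; value is bound, never interpolated
    sql = f"SELECT * FROM users WHERE {field} = ? AND status = 'active'"
    return conn.execute(sql, (value,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT, status TEXT)"
)
conn.execute("INSERT INTO users (email, name, status) VALUES ('a@b.com', 'Ann', 'active')")
print(run_user_search(conn, "email", "a@b.com"))
# [(1, 'a@b.com', 'Ann', 'active')]
```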
# Generate migration between databases
claude db migrate --from postgres://... --to mysql://...
# Compare two databases
claude db diff production staging
Help expand database support; contributions are welcome.
Apache License 2.0 - See LICENSE
GLINCKER Team
🌟 The most advanced natural language database query skill available!