Download historical stock market data (minute and daily OHLCV) from Massive.com (Polygon.io) flat files for any US stock ticker and date range. Use when the user asks for stock price data, historical candles, or market data for any date that is NOT today.
Install:

```
npx claudepluginhub davdunc/davdunc-plugins --plugin market-data
```

Arguments: `<TICKER> <START_DATE> [END_DATE]`
Download and store minute-level and daily OHLCV data from Massive.com (Polygon.io) S3 flat files for any US-listed stock.
Requires the AWS CLI (`aws`) installed and available on PATH.

Parse the arguments from `$ARGUMENTS`.
Expected formats:
- `TICKER START_DATE` — single date (e.g., `GSAT 2026-04-01`)
- `TICKER START_DATE END_DATE` — date range (e.g., `GSAT 2026-04-01 2026-04-05`)

Dates must be in YYYY-MM-DD format. If the user provides a relative date like "last Thursday", calculate the absolute date from today's date.
IMPORTANT: This skill is for historical data only. Do NOT use for current-day data — use the Massive MCP server's call_api tool or the REST API for real-time/today's data instead.
The skill needs two S3 credentials from Massive.com. Resolve them in this order of priority:
```bash
MASSIVE_AK="${MASSIVE_S3_ACCESS_KEY:-${MASSIVE_ACCESS_KEY:-}}"
MASSIVE_SK="${MASSIVE_S3_SECRET_KEY:-${MASSIVE_SECRET_ACCESS_KEY:-}}"
```
This supports both naming conventions:
- `MASSIVE_S3_ACCESS_KEY` / `MASSIVE_S3_SECRET_KEY` (used by the equities-watchlist MCP server)
- `MASSIVE_ACCESS_KEY` / `MASSIVE_SECRET_ACCESS_KEY` (standalone convention)

If the env vars are not set, check for a `.env` file. Support common formats:
```bash
if [[ -z "$MASSIVE_AK" ]]; then
  for ENV_FILE in "${DOTENV_PATH:-}" "$PWD/.env" "$HOME/.env"; do
    [[ -n "$ENV_FILE" && -f "$ENV_FILE" ]] || continue
    MASSIVE_AK=$(grep -E '^\s*(export\s+)?MASSIVE_S3_ACCESS_KEY\s*=' "$ENV_FILE" \
      | head -1 | sed 's/^[^=]*=\s*//' | sed 's/^["'\'']//' | sed 's/["'\'']\s*$//')
    [[ -z "$MASSIVE_AK" ]] && MASSIVE_AK=$(grep -E '^\s*(export\s+)?MASSIVE_ACCESS_KEY\s*=' "$ENV_FILE" \
      | head -1 | sed 's/^[^=]*=\s*//' | sed 's/^["'\'']//' | sed 's/["'\'']\s*$//')
    MASSIVE_SK=$(grep -E '^\s*(export\s+)?MASSIVE_S3_SECRET_KEY\s*=' "$ENV_FILE" \
      | head -1 | sed 's/^[^=]*=\s*//' | sed 's/^["'\'']//' | sed 's/["'\'']\s*$//')
    [[ -z "$MASSIVE_SK" ]] && MASSIVE_SK=$(grep -E '^\s*(export\s+)?MASSIVE_SECRET_ACCESS_KEY\s*=' "$ENV_FILE" \
      | head -1 | sed 's/^[^=]*=\s*//' | sed 's/^["'\'']//' | sed 's/["'\'']\s*$//')
    [[ -n "$MASSIVE_AK" && -n "$MASSIVE_SK" ]] && break
  done
fi
```
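For reference, these are the dotenv forms the regexes above accept (the key values are placeholders):

```bash
export MASSIVE_S3_ACCESS_KEY="your-access-key-id"   # exported, double-quoted
MASSIVE_S3_SECRET_KEY='your-secret-key'             # plain, single-quoted
MASSIVE_ACCESS_KEY=your-access-key-id               # unquoted, alternate name
```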
If neither method yields credentials, stop and tell the user:
```
Massive.com S3 credentials not found. Set them via one of these methods:

Option A — Environment variables:

  export MASSIVE_S3_ACCESS_KEY="your-access-key-id"
  export MASSIVE_S3_SECRET_KEY="your-secret-key"

Option B — Dotenv file (./.env, ~/.env, or set DOTENV_PATH):

  MASSIVE_S3_ACCESS_KEY=your-access-key-id
  MASSIVE_S3_SECRET_KEY=your-secret-key

Get your S3 credentials from: https://massive.com/dashboard
```
Resolve endpoint and bucket from environment variables, with defaults:
```bash
S3_ENDPOINT="${MASSIVE_S3_ENDPOINT:-https://files.massive.com}"
S3_BUCKET="${MASSIVE_S3_BUCKET:-flatfiles}"
```
Also check the dotenv file for these if not set in the environment (same resolution logic as credentials above).
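A minimal sketch of that fallback, assuming the same `ENV_FILE` found by the credentials loop above (the exact factoring is an assumption):

```bash
# Sketch: dotenv fallback for the endpoint; the same idea applies to the bucket.
if [[ -z "${MASSIVE_S3_ENDPOINT:-}" && -n "${ENV_FILE:-}" && -f "$ENV_FILE" ]]; then
  VAL=$(grep -E '^\s*(export\s+)?MASSIVE_S3_ENDPOINT\s*=' "$ENV_FILE" \
    | head -1 | sed 's/^[^=]*=\s*//' | sed 's/^["'\'']//' | sed 's/["'\'']\s*$//')
  [[ -n "$VAL" ]] && S3_ENDPOINT="$VAL"
fi
```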
| Data Type | Path Pattern |
|---|---|
| Minute aggregates | us_stocks_sip/minute_aggs_v1/{YYYY}/{MM}/{YYYY-MM-DD}.csv.gz |
| Daily aggregates | us_stocks_sip/day_aggs_v1/{YYYY}/{MM}/{YYYY-MM-DD}.csv.gz |
| Trades | us_stocks_sip/trades_v1/{YYYY}/{MM}/{YYYY-MM-DD}.csv.gz |
| Quotes | us_stocks_sip/quotes_v1/{YYYY}/{MM}/{YYYY-MM-DD}.csv.gz |
This skill downloads minute aggregates and daily aggregates by default. Trades and quotes are available but significantly larger — only download those if the user specifically requests tick-level data.
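For example, deriving the minute-aggregates key for one date, using the defaults above:

```bash
# Build the minute-aggregates S3 URI from a YYYY-MM-DD date.
DATE="2026-04-01"
YYYY="${DATE:0:4}"; MM="${DATE:5:2}"
echo "s3://$S3_BUCKET/us_stocks_sip/minute_aggs_v1/${YYYY}/${MM}/${DATE}.csv.gz"
# -> s3://flatfiles/us_stocks_sip/minute_aggs_v1/2026/04/2026-04-01.csv.gz
```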
The files are CSVs with this header:

```
ticker,volume,open,close,high,low,window_start,transactions
```

`window_start` is Unix epoch in nanoseconds (divide by 1e9 for seconds).

Data is stored under a configurable base directory:
```bash
DATA_DIR="${MARKET_DATA_DIR:-$HOME/market_data}"
```
Users can override by setting the MARKET_DATA_DIR environment variable.
Two files are written per ticker and date:

- `{DATA_DIR}/{TICKER}/{TICKER}_{YYYY-MM-DD}_minute.csv` — per-minute OHLCV, all sessions
- `{DATA_DIR}/{TICKER}/{TICKER}_{YYYY-MM-DD}_daily.csv` — daily OHLCV (single row, no header)
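For example, after fetching two days of GSAT with the default base directory, the layout would look like this (illustrative paths):

```
~/market_data/GSAT/GSAT_2026-04-01_minute.csv
~/market_data/GSAT/GSAT_2026-04-01_daily.csv
~/market_data/GSAT/GSAT_2026-04-02_minute.csv
~/market_data/GSAT/GSAT_2026-04-02_daily.csv
```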
Extract TICKER (uppercase), START_DATE, and optional END_DATE from $ARGUMENTS. If only one date is given, set END_DATE = START_DATE.
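A minimal parsing sketch, assuming `$ARGUMENTS` is a single whitespace-separated string:

```bash
# Split $ARGUMENTS into its parts; END_DATE falls back to START_DATE.
read -r TICKER START_DATE END_DATE <<< "$ARGUMENTS"
TICKER="${TICKER^^}"                  # normalize ticker to uppercase
END_DATE="${END_DATE:-$START_DATE}"   # single date => one-day range
```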
Generate the list of weekday dates to download. Skip weekends (Saturday/Sunday). Use:
```bash
python3 -c "
from datetime import date, timedelta
start = date.fromisoformat('$START_DATE')
end = date.fromisoformat('$END_DATE')
d = start
while d <= end:
    if d.weekday() < 5:
        print(d.isoformat())
    d += timedelta(days=1)
"
```
```bash
mkdir -p "$DATA_DIR/$TICKER"
```
For each date in the range:
Check if data already exists locally. If $DATA_DIR/$TICKER/${TICKER}_${DATE}_minute.csv already exists and has more than 1 line (header + data), skip that date and report it was cached.
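A sketch of that check, assuming the loop variable is `DATE`:

```bash
# Skip dates whose minute file already exists with header + data rows.
OUT="$DATA_DIR/$TICKER/${TICKER}_${DATE}_minute.csv"
if [[ -f "$OUT" ]] && (( $(wc -l < "$OUT") > 1 )); then
  echo "Cached: $DATE"
  continue
fi
```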
Download the full-market minute file to a temp location:
```bash
YYYY="${DATE:0:4}"; MM="${DATE:5:2}"   # derive path components from the date
AWS_ACCESS_KEY_ID="$MASSIVE_AK" AWS_SECRET_ACCESS_KEY="$MASSIVE_SK" \
  aws s3 cp "s3://$S3_BUCKET/us_stocks_sip/minute_aggs_v1/${YYYY}/${MM}/${DATE}.csv.gz" \
  "/tmp/${DATE}_minute_all.csv.gz" --endpoint-url "$S3_ENDPOINT"
```
Extract ticker-specific rows (include header):
```bash
zcat "/tmp/${DATE}_minute_all.csv.gz" | head -1 > "$DATA_DIR/$TICKER/${TICKER}_${DATE}_minute.csv"
zcat "/tmp/${DATE}_minute_all.csv.gz" | grep "^${TICKER}," >> "$DATA_DIR/$TICKER/${TICKER}_${DATE}_minute.csv"
```
Download and extract daily aggregate:
```bash
AWS_ACCESS_KEY_ID="$MASSIVE_AK" AWS_SECRET_ACCESS_KEY="$MASSIVE_SK" \
  aws s3 cp "s3://$S3_BUCKET/us_stocks_sip/day_aggs_v1/${YYYY}/${MM}/${DATE}.csv.gz" \
  "/tmp/${DATE}_day_all.csv.gz" --endpoint-url "$S3_ENDPOINT"
zcat "/tmp/${DATE}_day_all.csv.gz" | grep "^${TICKER}," > "$DATA_DIR/$TICKER/${TICKER}_${DATE}_daily.csv"
```
Clean up temp files:
```bash
rm -f "/tmp/${DATE}_minute_all.csv.gz" "/tmp/${DATE}_day_all.csv.gz"
```
After all dates are processed, report a summary: which dates were downloaded, which were served from cache, and where the files were written.
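An illustrative report (the exact wording is up to you):

```
GSAT 2026-04-01..2026-04-05: 3 trading days processed
  downloaded: 2026-04-02, 2026-04-03
  cached:     2026-04-01
  files under ~/market_data/GSAT/
```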
Use this Python snippet to convert timestamps to Eastern Time (auto-detects EST/EDT):
```python
from datetime import datetime, timezone, timedelta

try:
    from zoneinfo import ZoneInfo
    ET = ZoneInfo("America/New_York")
except ImportError:
    ET = timezone(timedelta(hours=-4))  # fallback, assumes EDT

dt = datetime.fromtimestamp(int(window_start) / 1e9, tz=timezone.utc).astimezone(ET)
```
Handle failure modes gracefully:

- If the S3 download fails for a date (no such object), report "No data available for {DATE} (non-trading day or holiday)" and continue.
- If the grep extraction finds no rows, report "No bars found for {TICKER} on {DATE}. Ticker may not have traded or may be invalid."
- If the `aws` CLI is missing, tell the user to install it with `pip install awscli` or via their package manager.
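A small sketch of the CLI check, placed as an up-front guard (the placement is an assumption):

```bash
# Fail fast if the aws CLI is not on PATH.
if ! command -v aws >/dev/null 2>&1; then
  echo "aws CLI not found. Install with 'pip install awscli' or via your package manager." >&2
  exit 1
fi
```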