# /benchmark - Comprehensive Chunking Benchmarking

Runs chunking benchmarks on Zarr datasets from local/S3/GCS paths or generates synthetic data, tests configurations across access patterns, and produces a performance report with recommendations.

Install via:

```
npx claudepluginhub uw-ssec/rse-plugins --plugin zarr-chunk-optimization
```

Run the complete benchmarking workflow: sample or generate data, test candidate chunk configurations, measure performance across access patterns, and generate a report with recommendations based on the Nguyen et al. (2023) methodology.

## What This Command Does

1. Reads dataset metadata or generates synthetic data
2. Asks about access patterns if not specified (use the **access-pattern-profiler** skill if the user is unsure)
3. Generates candidate chunk configurations, varying one dimension at a time
4. Samples data to a manageable size for benchmark...
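Step 3 above (varying one dimension at a time) can be sketched in plain Python. The helper below is a hypothetical illustration, not part of the plugin; it scales each axis of a base chunk shape independently:

```python
def candidate_configs(base, multipliers=(0.5, 1, 2)):
    """Vary one dimension of a base chunk shape at a time.

    base: tuple of ints, e.g. (50, 512, 512)
    Returns a sorted list of unique candidate chunk shapes.
    """
    candidates = {tuple(base)}
    for axis, size in enumerate(base):
        for m in multipliers:
            shape = list(base)
            shape[axis] = max(1, int(size * m))  # never shrink below 1 element
            candidates.add(tuple(shape))
    return sorted(candidates)

print(candidate_configs((50, 512, 512)))  # 7 unique candidates
```

Because only one axis changes per candidate, the benchmark can attribute a performance difference to that axis rather than to an interaction of several changes.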
## Usage

```
# Benchmark existing dataset
/benchmark s3://bucket/data.zarr

# Benchmark with specific access patterns
/benchmark /data/local.zarr --patterns spatial,time-series

# Benchmark with candidate configurations
/benchmark s3://bucket/data.zarr --configs "10,256,256" "50,512,512" "100,1024,1024"

# Benchmark with memory budget constraint
/benchmark /data/climate.zarr --memory-budget 8GB

# Generate synthetic data and benchmark
/benchmark --synthetic --shape 1000,2048,2048 --dims time,lat,lon
```
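As a rough illustration of why chunk shape matters across access patterns, the sketch below (a hypothetical helper, not plugin code) counts how many chunks a contiguous, chunk-aligned read intersects. Fewer chunks touched generally means fewer object-store requests:

```python
import math

def chunks_touched(read_shape, chunk_shape):
    """Number of chunks a contiguous read intersects,
    assuming the read starts on a chunk boundary."""
    return math.prod(math.ceil(r / c) for r, c in zip(read_shape, chunk_shape))

# Time-series read: full time axis at a single spatial point
print(chunks_touched((1000, 1, 1), (10, 256, 256)))     # 100 chunks
# Spatial read: one time step, full 2048x2048 lat/lon plane
print(chunks_touched((1, 2048, 2048), (10, 256, 256)))  # 64 chunks
```

The same chunk shape can be cheap for one pattern and expensive for another, which is why the benchmark measures each candidate under every requested pattern.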
## Arguments

Output report: `.agents/benchmark-report-[name]-[timestamp].md`

Required: either a dataset path OR the `--synthetic` flag.

Dataset path formats:

- `/data/mydata.zarr` (local)
- `s3://bucket/path/to/data.zarr` (S3)
- `gs://bucket/path/to/data.zarr` (GCS)

Access patterns: the user describes their workflow; the agent translates it to spatial/time-series/spectral patterns.
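A minimal sketch of how a tool might classify these three path formats by URL scheme (the helper name is illustrative, not the plugin's API):

```python
def storage_backend(path):
    """Classify a dataset path as local, S3, or GCS by its scheme."""
    if path.startswith("s3://"):
        return "s3"
    if path.startswith("gs://"):
        return "gcs"
    return "local"

print(storage_backend("s3://bucket/path/to/data.zarr"))  # s3
print(storage_backend("/data/mydata.zarr"))              # local
```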
Options:

- `--patterns PATTERNS` — Comma-separated list (spatial, time-series, spectral, all). Default: all
- `--configs CONFIGS` — Space-separated chunk shapes in the format "t,f,b". Auto-generated if not provided.
- `--memory-budget SIZE` — Maximum acceptable peak memory (e.g., "8GB"). Flags configs exceeding the limit.
- `--runs N` — Runs per configuration per pattern. Minimum 5, default 10.
- `--synthetic` — Generate synthetic test data (requires `--shape` and `--dims`)
- `--shape SHAPE` — Shape for synthetic data (comma-separated integers)
- `--dims DIMS` — Dimension names for synthetic data (comma-separated strings)
- `--sample-size SIZE` — Elements to sample from the first dimension for very large datasets
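To honor a `--memory-budget`, a size string like "8GB" has to be parsed and compared against each candidate's working set. The sketch below uses hypothetical helper names and assumes binary (1024-based) units and a decompressed single-chunk working set; the plugin's actual accounting may differ:

```python
import re

_UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_budget(text):
    """Parse a size string like '8GB' or '512 MB' into bytes."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMGT]?B)", text.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unparseable size: {text!r}")
    return int(float(m.group(1)) * _UNITS[m.group(2).upper()])

def exceeds_budget(chunk_shape, itemsize, budget_bytes, concurrency=1):
    """Flag a config whose decompressed chunk working set exceeds the budget."""
    chunk_bytes = itemsize
    for dim in chunk_shape:
        chunk_bytes *= dim
    return chunk_bytes * concurrency > budget_bytes

print(parse_budget("8GB"))  # 8589934592
# A (100, 1024, 1024) float32 chunk is ~400 MiB, well under 8 GiB:
print(exceeds_budget((100, 1024, 1024), 4, parse_budget("8GB")))  # False
```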
Notes:

- The OS file cache is cleared between runs (macOS: `purge`, Linux: `drop_caches`).
- Reports are saved to the `.agents/` directory with a timestamp.
- If the report identifies tradeoffs to explore, use `/rechunk` to apply a chosen configuration.