From vastai-pack
Optimizes Vast.ai GPU rental costs using cost-per-TFLOP selection, spot instance analysis, Python auto-destroy timers, and Bash idle detection.
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin vastai-pack
Minimize Vast.ai GPU cloud costs by choosing the right GPU for your workload, leveraging interruptible (spot) instances, eliminating idle compute, and implementing auto-destroy safeguards. Vast.ai pricing is dynamic and varies significantly: RTX 4090 ($0.15-0.30/hr), A100 80GB ($1.00-2.00/hr), H100 SXM ($2.50-4.00/hr).
Prerequisites: vastai CLI installed.

# Compare cost-per-TFLOP across GPU types
GPU_SPECS = {
    "RTX_4090": {"fp16_tflops": 82.6, "vram": 24},
    "A100": {"fp16_tflops": 77.97, "vram": 80},
    "H100_SXM": {"fp16_tflops": 267, "vram": 80},
    "RTX_3090": {"fp16_tflops": 35.6, "vram": 24},
    "A6000": {"fp16_tflops": 38.7, "vram": 48},
}

def cost_per_tflop(gpu_name, dph):
    """Dollars per hour per FP16 TFLOP; lower is better value."""
    specs = GPU_SPECS.get(gpu_name, {"fp16_tflops": 1})
    return dph / specs["fp16_tflops"]
# Often RTX 4090 is the best value for inference
# A100 is best for training large models needing >24GB VRAM
# H100 is best only when wall-clock time justifies 10x price premium
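Ranking a few GPUs by dollars per TFLOP at representative mid-range prices (illustrative snapshots, not live quotes — real Vast.ai prices are dynamic) shows the pattern behind these comments:

```python
# Illustrative snapshot prices; specs match the GPU_SPECS table above
sample = {
    "RTX_4090": (0.22, 82.6),   # ($/hr, FP16 TFLOPS)
    "A100":     (1.50, 77.97),
    "H100_SXM": (3.25, 267.0),
}
ranked = sorted(sample, key=lambda g: sample[g][0] / sample[g][1])
for gpu in ranked:
    dph, tflops = sample[gpu]
    print(f"{gpu}: ${dph / tflops * 100:.3f} per 100 TFLOP-hours")
```

At these prices the RTX 4090 delivers roughly 7× more compute per dollar than the A100 — which is exactly why it wins for inference workloads that fit in 24GB.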
# Interruptible (spot) instances are 30-60% cheaper
vastai search offers 'num_gpus=1 gpu_name=RTX_4090 rentable=true' \
--order dph_total --limit 5
# Compare interruptible vs on-demand pricing
# Use interruptible for: batch inference, checkpointed training
# Use on-demand for: final training epochs, production inference
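The interruptible-vs-on-demand trade-off can be sketched with a simple expected-cost model (the rates, the constant-preemption assumption, and the rework cost per preemption are all illustrative, not Vast.ai statistics):

```python
def expected_job_cost(base_hours, dph, preempt_rate_per_hr=0.0,
                      rework_hr_per_preemption=0.1):
    """Expected job cost when each preemption redoes work since the last checkpoint."""
    expected_preemptions = base_hours * preempt_rate_per_hr
    total_hours = base_hours + expected_preemptions * rework_hr_per_preemption
    return total_hours * dph

# 10-hour job: on-demand at $0.30/hr vs spot at $0.15/hr, ~1 preemption per 5h
on_demand = expected_job_cost(10, 0.30)
spot = expected_job_cost(10, 0.15, preempt_rate_per_hr=0.2)
print(f"on-demand ${on_demand:.2f} vs spot ${spot:.2f}")  # → on-demand $3.00 vs spot $1.53
```

With frequent checkpointing the rework penalty stays small, so the 50% spot discount dominates — which is why checkpointed training belongs on interruptible instances.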
import time, subprocess, json

def auto_destroy_after(instance_id, max_hours=4):
    """Destroy instance after max_hours to prevent cost overruns."""
    max_seconds = max_hours * 3600
    time.sleep(max_seconds)
    subprocess.run(["vastai", "destroy", "instance", str(instance_id)], check=True)
    print(f"Instance {instance_id} auto-destroyed after {max_hours}h")
# Run in background thread when provisioning
import threading
watchdog = threading.Thread(target=auto_destroy_after, args=(inst_id, 4), daemon=True)
watchdog.start()
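One limitation of the sleep-based watchdog above is that it cannot be cancelled once started. A `threading.Timer` variant (a sketch, assuming the same `vastai destroy instance` CLI call) makes the deadline revocable, so a job that finishes early doesn't leave a timer pending:

```python
import subprocess
import threading

def start_watchdog(instance_id, max_hours=4.0):
    """Return a cancellable timer that destroys the instance at the deadline."""
    def destroy():
        subprocess.run(["vastai", "destroy", "instance", str(instance_id)],
                       check=True)
    timer = threading.Timer(max_hours * 3600, destroy)
    timer.daemon = True
    timer.start()
    return timer

# Usage: cancel the watchdog if the job completes before the deadline
# timer = start_watchdog(inst_id, max_hours=4)
# ... run job ...
# timer.cancel()
```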
#!/bin/bash
# Find and destroy idle instances (GPU util < 10% for >10 min)
vastai show instances --raw | python3 -c "
import sys, json
for inst in json.load(sys.stdin):
    if inst.get('actual_status') == 'running':
        gpu_util = inst.get('gpu_util') or 0  # field may be missing or null
        if gpu_util < 10:
            print(f'IDLE: Instance {inst[\"id\"]} GPU util={gpu_util}% '
                  f'(\${inst.get(\"dph_total\", 0):.3f}/hr)')
"
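The one-shot check above reads instantaneous utilization, so a momentary dip between batches could look idle. A small tracker (a hypothetical helper, with an injectable clock for testing) enforces the ">10 min sustained" rule before anything gets destroyed:

```python
import time

class IdleTracker:
    """Flag an instance as idle only after sustained low GPU utilization."""

    def __init__(self, util_threshold=10, threshold_s=600, clock=time.monotonic):
        self.util_threshold = util_threshold
        self.threshold_s = threshold_s
        self.clock = clock
        self.idle_since = {}  # instance_id -> timestamp of first low reading

    def observe(self, instance_id, gpu_util):
        """Record a reading; return True once the instance should be destroyed."""
        if gpu_util >= self.util_threshold:
            self.idle_since.pop(instance_id, None)  # activity resets the timer
            return False
        start = self.idle_since.setdefault(instance_id, self.clock())
        return self.clock() - start >= self.threshold_s

# Poll `vastai show instances --raw` every minute, feed each reading to
# observe(), and destroy only when it returns True.
```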
import subprocess, json

def daily_cost_report():
    """Calculate current daily burn rate from running instances."""
    result = subprocess.run(
        ["vastai", "show", "instances", "--raw"],
        capture_output=True, text=True, check=True)
    instances = json.loads(result.stdout)
    total_hourly = 0
    for inst in instances:
        if inst.get("actual_status") == "running":
            dph = inst.get("dph_total", 0)
            total_hourly += dph
            print(f"  {inst['id']}: {inst.get('gpu_name')} ${dph:.3f}/hr")
    print(f"\nTotal: ${total_hourly:.3f}/hr = ${total_hourly * 24:.2f}/day")
Use --order dph_total to find the cheapest offers.

| Error | Cause | Solution |
|---|---|---|
| Unexpected $50+ bill | Forgot to destroy instances | Implement auto-destroy watchdog |
| GPU idle at $2/hr | Waiting for data download | Pre-stage data before provisioning GPU |
| Spot preemption mid-job | Cheapest instance reclaimed | Checkpoint frequently; auto-recover |
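For the preemption row, "checkpoint frequently; auto-recover" can be as simple as atomic JSON checkpoints that the restarted job reloads (the path and state shape here are illustrative):

```python
import json
import os

def save_checkpoint(step, state, path="checkpoint.json"):
    """Write atomically so a preemption mid-write cannot corrupt the file."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path="checkpoint.json"):
    """Return (step, state), or (0, None) on a fresh start."""
    if not os.path.exists(path):
        return 0, None
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

# On restart after preemption, the training loop resumes from the last step:
# step, state = load_checkpoint()
# for step in range(step, total_steps): ...
```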
For reference architecture, see vastai-reference-architecture.
Budget cap: Set dph_total<=0.25 in search queries and use auto_destroy_after(inst_id, 4) to cap any single job at $1.00.
GPU comparison: Run the same workload on an RTX 4090 ($0.20/hr) and an A100 ($1.50/hr). The A100 is only cheaper overall if it finishes in under 1/7.5 of the time, i.e., runs more than 7.5x faster.
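The break-even math generalizes to any pair of prices (a sketch):

```python
def breakeven_speedup(cheap_dph, expensive_dph):
    """Speedup the pricier GPU must exceed to be cheaper overall.

    Total cost = dph * runtime, so the pricier GPU only wins when its
    runtime shrinks by more than the price ratio.
    """
    return expensive_dph / cheap_dph

print(breakeven_speedup(0.20, 1.50))  # → 7.5
```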