From vastai-pack
Rent your first GPU instance on Vast.ai and run a workload. Use when starting a new Vast.ai integration, testing your setup, or learning basic Vast.ai GPU rental patterns. Trigger with phrases like "vastai hello world", "vastai example", "vastai quick start", "rent first gpu", "vastai first instance".
Install:

```bash
npx claudepluginhub flight505/skill-forge --plugin vastai-pack
```

This skill is limited to a restricted set of tools.
Rent your first GPU instance on Vast.ai, run a PyTorch workload, and destroy the instance when done. Demonstrates the full lifecycle: search offers, create instance, connect via SSH, run a job, and tear down.
Prerequisite: complete the vastai-install-auth setup first.

```bash
# Find cheap single-GPU offers sorted by price
vastai search offers 'num_gpus=1 gpu_ram>=8 inet_down>100 reliability>0.95' \
  --order 'dph_total' --limit 5
# Output columns: ID, GPU, VRAM, $/hr, DLPerf, Reliability, Location
```
The same search via the REST API:

```bash
curl -s -H "Authorization: Bearer $VASTAI_API_KEY" \
  "https://cloud.vast.ai/api/v0/bundles/?q=%7B%22num_gpus%22%3A%7B%22eq%22%3A1%7D%2C%22gpu_ram%22%3A%7B%22gte%22%3A8%7D%2C%22reliability2%22%3A%7B%22gte%22%3A0.95%7D%2C%22rentable%22%3A%7B%22eq%22%3Atrue%7D%7D&order=dph_total&limit=5" \
  | jq '.offers[:3] | .[] | {id, gpu_name, num_gpus, gpu_ram, dph_total, reliability2}'
```
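The `q` parameter in the curl call above is just a URL-encoded JSON filter. A stdlib-only sketch of building it, so you don't have to hand-encode the string:

```python
import json
from urllib.parse import quote

# Same filter as the curl example: one GPU, >=8 GB VRAM,
# reliable hosts, currently rentable.
filters = {
    "num_gpus": {"eq": 1},
    "gpu_ram": {"gte": 8},
    "reliability2": {"gte": 0.95},
    "rentable": {"eq": True},
}

# Compact JSON, then percent-encode it for the ?q= parameter.
q = quote(json.dumps(filters, separators=(",", ":")))
url = f"https://cloud.vast.ai/api/v0/bundles/?q={q}&order=dph_total&limit=5"
print(url)
```

This reproduces the encoded query string from the curl example byte for byte, so you can tweak one field without re-encoding the whole thing by hand.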
```bash
# Replace OFFER_ID with the ID from search results
vastai create instance OFFER_ID \
  --image pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime \
  --disk 20 \
  --onstart-cmd "echo 'Instance ready'"
```
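Before creating the instance, it helps to sanity-check what it will cost. A rough estimator, assuming the offer's `dph_total` ($/hr for the GPU) plus a per-GB storage rate — the `storage_dph_per_gb` default here is a hypothetical placeholder, not Vast.ai's actual rate:

```python
def estimate_cost(dph_total: float, hours: float,
                  disk_gb: float = 20, storage_dph_per_gb: float = 0.0002) -> float:
    """Rough total: GPU $/hr plus disk; storage rate is an assumed placeholder."""
    return hours * (dph_total + disk_gb * storage_dph_per_gb)

# e.g. a $0.25/hr offer with 20 GB disk, rented for 2 hours
print(f"${estimate_cost(0.25, 2):.2f}")  # → $0.51
```

Check your account's actual storage pricing on cloud.vast.ai before relying on the numbers.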
```python
from vastai_client import VastClient

client = VastClient()

# Search for affordable RTX 4090 offers
offers = client.search_offers({
    "num_gpus": {"eq": 1},
    "gpu_name": {"eq": "RTX_4090"},
    "reliability2": {"gte": 0.95},
    "rentable": {"eq": True},
})

# Pick the cheapest offer
best = sorted(offers["offers"], key=lambda o: o["dph_total"])[0]
print(f"Best offer: {best['gpu_name']} at ${best['dph_total']:.3f}/hr (ID: {best['id']})")

# Create instance with PyTorch image
instance = client.create_instance(
    offer_id=best["id"],
    image="pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime",
    disk_gb=20,
    onstart="nvidia-smi && python -c 'import torch; print(torch.cuda.is_available())'",
)
print(f"Instance created: {instance}")
```
```bash
# Check instance status (wait for 'running')
vastai show instances --raw | jq '.[] | {id, actual_status, ssh_host, ssh_port}'

# Connect via SSH once running
ssh -p SSH_PORT root@SSH_HOST

# On the instance: verify GPU access
nvidia-smi
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
```
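Instead of re-running `show instances` by hand, the wait-for-running step can be sketched as a small poll loop. `get_status` is a stand-in for whatever returns `actual_status` (e.g. parsing `vastai show instances --raw`):

```python
import time

def wait_until_running(get_status, timeout_s=300, poll_s=5):
    """Poll until the instance reports 'running'; raise on timeout or failure."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "running":
            return status
        if status in ("exited", "error"):
            raise RuntimeError(f"instance entered terminal state: {status}")
        time.sleep(poll_s)
    raise TimeoutError("instance did not reach 'running' in time")

# Example with a stubbed status sequence
states = iter(["loading", "loading", "running"])
print(wait_until_running(lambda: next(states), poll_s=0))  # → running
```

The terminal-state names here mirror the `actual_status` values seen in the jq output above; treat them as assumptions and adjust to whatever your account actually reports.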
```python
# test_gpu.py — run this ON the rented instance
import time

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
assert device.type == "cuda", "No GPU visible — check the Docker image/driver"
print(f"Device: {device} ({torch.cuda.get_device_name(0)})")

# Simple matrix multiplication benchmark
size = 4096
a = torch.randn(size, size, device=device)
b = torch.randn(size, size, device=device)
torch.cuda.synchronize()
start = time.time()
c = torch.matmul(a, b)
torch.cuda.synchronize()
elapsed = time.time() - start
tflops = (2 * size**3) / elapsed / 1e12
print(f"Matrix multiply {size}x{size}: {elapsed:.3f}s ({tflops:.2f} TFLOPS)")
print("Hello World from Vast.ai!")
```
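The throughput math in the benchmark is worth making explicit: an n×n matrix multiply performs roughly 2n³ floating-point operations (n multiplies and n-1 adds per output element, n² elements), so TFLOPS is just that count divided by elapsed time:

```python
def matmul_tflops(size: int, elapsed_s: float) -> float:
    """An n x n matrix multiply performs ~2*n^3 floating-point ops."""
    return (2 * size**3) / elapsed_s / 1e12

# 2 * 4096^3 ≈ 1.37e11 FLOPs; at 0.05 s that's ~2.75 TFLOPS
print(f"{matmul_tflops(4096, 0.05):.2f}")  # → 2.75
```

Useful as a sanity check: if the reported TFLOPS is far below the card's spec sheet, the workload likely fell back to CPU or the timing missed a `synchronize()`.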
```bash
# IMPORTANT: Destroy to stop billing
vastai destroy instance INSTANCE_ID

# Verify it's gone
vastai show instances
```
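The verification step can be automated: anything still listed by `vastai show instances --raw` may still bill you (stopped instances can keep accruing storage charges). A sketch that parses that JSON, using the `id` field seen in the jq output earlier:

```python
import json

def leftover_instances(raw_json: str) -> list[int]:
    """IDs still present after teardown; anything listed may still bill."""
    return [inst["id"] for inst in json.loads(raw_json)]

# Hypothetical output shapes from `vastai show instances --raw`
sample = '[{"id": 12345, "actual_status": "running"}]'
print(leftover_instances(sample))  # → [12345]
print(leftover_instances("[]"))    # → []
```

A non-empty result after teardown means something escaped the destroy step and should be investigated.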
| Error | Cause | Solution |
|---|---|---|
| No offers found | Filters too strict | Relax GPU or reliability filters |
| Insufficient funds | Account balance too low | Add credits at cloud.vast.ai |
| Instance failed to start | Docker image pull failed | Use a smaller or more common image |
| SSH connection refused | Instance still loading | Wait 1-2 min for status `running` |
| CUDA not available | Driver mismatch | Use a CUDA-compatible Docker image |
Proceed to `vastai-local-dev-loop` for development workflow setup.
Cheapest GPU test: search with `vastai search offers 'num_gpus=1' --order 'dph_total' --limit 1`, create an instance with the `ubuntu` image, SSH in, run `nvidia-smi`, then destroy.
Specific GPU model: filter for H100 with `gpu_name=H100_SXM` and `reliability>0.99` for production-grade hardware. Expect $2.50-4.00/hr.