```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin vastai-pack
```
Manage training data and model artifacts securely on Vast.ai GPU instances. Covers data transfer, encryption, checkpoint management, and cleanup. Critical consideration: Vast.ai instances run on shared hardware operated by third-party hosts.
```bash
# Small datasets (<5GB): Direct SCP
scp -P $PORT -r ./data/ root@$HOST:/workspace/data/

# Large datasets (5-50GB): Compressed transfer
tar czf - ./data/ | ssh -p $PORT root@$HOST "tar xzf - -C /workspace/"

# Very large datasets (>50GB): Cloud storage staging
# Upload to S3/GCS first, then download on the instance
ssh -p $PORT root@$HOST "aws s3 sync s3://bucket/dataset/ /workspace/data/"
```
```python
import subprocess, os

def encrypt_and_upload(local_path, host, port, remote_path, passphrase):
    """Encrypt data before transferring to Vast.ai instance."""
    encrypted = f"{local_path}.enc"
    # Encrypt with AES-256
    subprocess.run([
        "openssl", "enc", "-aes-256-cbc", "-salt", "-pbkdf2",
        "-in", local_path, "-out", encrypted,
        "-pass", f"pass:{passphrase}",
    ], check=True)
    # Transfer encrypted file
    subprocess.run([
        "scp", "-P", str(port), encrypted,
        f"root@{host}:{remote_path}.enc",
    ], check=True)
    # Decrypt on instance
    subprocess.run([
        "ssh", "-p", str(port), f"root@{host}",
        f"openssl enc -aes-256-cbc -d -pbkdf2 "
        f"-in {remote_path}.enc -out {remote_path} "
        f"-pass pass:{passphrase} && rm {remote_path}.enc"
    ], check=True)
    os.remove(encrypted)
```
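For example, a one-off call might look like the sketch below. The host, port, and paths are hypothetical placeholders (the real values come from `vastai show instances`), and the passphrase is read from the environment rather than hard-coded, since it is passed to `openssl` on the command line.

```python
import os

# Hypothetical values: host/port come from `vastai show instances`;
# DATA_PASSPHRASE is set in the local environment, not hard-coded.
encrypt_and_upload(
    local_path="dataset.tar.gz",
    host="ssh4.vast.ai",          # placeholder proxy host
    port=12345,                   # placeholder SSH port
    remote_path="/workspace/dataset.tar.gz",
    passphrase=os.environ["DATA_PASSPHRASE"],
)
```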
```python
import torch, boto3, os

class CloudCheckpointManager:
    def __init__(self, s3_bucket, prefix, save_every=500):
        self.s3 = boto3.client("s3")
        self.bucket = s3_bucket
        self.prefix = prefix
        self.save_every = save_every

    def save(self, model, optimizer, step, loss):
        if step % self.save_every != 0:
            return
        local_path = f"/tmp/ckpt-{step}.pt"
        torch.save({
            "step": step, "loss": loss,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        }, local_path)
        self.s3.upload_file(local_path, self.bucket,
                            f"{self.prefix}/ckpt-{step}.pt")
        os.remove(local_path)
        print(f"Checkpoint saved: step {step}, loss {loss:.4f}")

    def load_latest(self):
        resp = self.s3.list_objects_v2(Bucket=self.bucket, Prefix=self.prefix)
        if not resp.get("Contents"):
            return None
        # Pick by upload time: sorting keys lexicographically would rank
        # ckpt-999.pt above ckpt-1000.pt.
        latest = max(resp["Contents"], key=lambda o: o["LastModified"])
        self.s3.download_file(self.bucket, latest["Key"], "/tmp/latest.pt")
        return torch.load("/tmp/latest.pt")
```
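In a training loop the manager is simply called every step; `save()` returns immediately except on the configured cadence. A minimal sketch, assuming a hypothetical bucket/prefix and that `model`, `optimizer`, `train_step()`, and `get_batch()` exist in the surrounding code:

```python
# Hypothetical bucket/prefix; train_step() and get_batch() stand in for
# your actual training code.
ckpt = CloudCheckpointManager("my-training-bucket", "runs/exp-01", save_every=500)

for step in range(10_000):
    loss = train_step(model, optimizer, get_batch())
    ckpt.save(model, optimizer, step, loss)  # uploads only every 500th step
```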
```bash
# ALWAYS clean sensitive data before destroying an instance
ssh -p $PORT root@$HOST << 'CLEANUP'
# Remove training data and checkpoints
rm -rf /workspace/data /workspace/checkpoints /workspace/*.pt
# Clear command history
history -c && rm -f ~/.bash_history
# Overwrite sensitive files (optional, for high-security)
find /workspace -name "*.env" -exec shred -u {} \;
echo "Cleanup complete"
CLEANUP

# Then destroy
vastai destroy instance $INSTANCE_ID
```
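To make the cleanup-before-destroy rule hard to skip, the two steps can be bound into one wrapper that aborts if cleanup fails. A sketch, assuming the heredoc body above has been saved to a local `cleanup.sh`:

```python
import subprocess

def cleanup_and_destroy(host, port, instance_id, cleanup_script="cleanup.sh"):
    """Run the on-instance cleanup, then destroy; never destroy on failed cleanup."""
    with open(cleanup_script) as f:
        # Feed the local script to a remote shell over SSH.
        subprocess.run(["ssh", "-p", str(port), f"root@{host}", "bash", "-s"],
                       stdin=f, check=True)
    subprocess.run(["vastai", "destroy", "instance", str(instance_id)], check=True)
```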
| Data Type | On Instance | After Job | Retention |
|---|---|---|---|
| Training data | Decrypt on use | Delete before destroy | Source system only |
| Checkpoints | Local + cloud sync | Keep in cloud storage | 30 days (see sketch below) |
| Final model | Local | Upload to model registry | Permanent |
| Logs | Local | Upload to logging service | 90 days (see sketch below) |
| Temp files | /tmp | Auto-deleted on destroy | None |
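The 30- and 90-day windows don't have to be enforced by hand: an S3 lifecycle rule can expire objects automatically. A minimal sketch using boto3's `put_bucket_lifecycle_configuration`, with hypothetical bucket and prefix names:

```python
import boto3

s3 = boto3.client("s3")
# Hypothetical bucket and prefixes; the Days values mirror the table above.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-training-bucket",
    LifecycleConfiguration={"Rules": [
        {"ID": "expire-checkpoints", "Status": "Enabled",
         "Filter": {"Prefix": "runs/"},   # checkpoint objects: 30 days
         "Expiration": {"Days": 30}},
        {"ID": "expire-logs", "Status": "Enabled",
         "Filter": {"Prefix": "logs/"},   # log objects: 90 days
         "Expiration": {"Days": 90}},
    ]},
)
```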
| Error | Cause | Solution |
|---|---|---|
| SCP timeout | Large file or slow network | Use compressed transfer or cloud staging |
| Checkpoint upload fails | S3 credentials not on instance | Pass AWS creds via env vars at instance creation (see the sketch after this table) |
| Disk full during training | Insufficient disk allocation | Increase `--disk` or clean old checkpoints |
| Data left after destroy | Skipped cleanup | Always run the cleanup script before `vastai destroy` |
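The credentials fix in the table works because boto3 reads `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from the environment automatically. A sketch of injecting them at creation time, assuming the vastai CLI's `--env` flag (which passes Docker `-e` options through to the container); the offer ID and image are placeholders:

```python
import os, subprocess

# Hypothetical offer ID and image; credentials are pulled from the local
# environment so they never appear in source code.
env_opts = (f"-e AWS_ACCESS_KEY_ID={os.environ['AWS_ACCESS_KEY_ID']} "
            f"-e AWS_SECRET_ACCESS_KEY={os.environ['AWS_SECRET_ACCESS_KEY']}")
subprocess.run([
    "vastai", "create", "instance", "1234567",
    "--image", "pytorch/pytorch:latest",
    "--disk", "64",
    "--env", env_opts,
], check=True)
```

Note the trade-off: the values are briefly visible in the local process list for the duration of the call.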
For enterprise access control, see `vastai-enterprise-rbac`.
Sensitive data workflow: Encrypt dataset locally, SCP encrypted file to instance, decrypt on-instance, train, save checkpoints to S3, clean and destroy.
Resume after preemption: Load latest checkpoint from S3 on new instance, continue training from last saved step.
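A minimal resume sketch using `load_latest()` from the manager above, assuming the replacement instance rebuilds `model` and `optimizer` exactly as the original run did (hypothetical bucket/prefix as before):

```python
ckpt = CloudCheckpointManager("my-training-bucket", "runs/exp-01")
state = ckpt.load_latest()
start_step = 0
if state is not None:
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1  # resume from the step after the last save
# ...then run the training loop from start_step as usual.
```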