coreweave-pack
Guides ML workload migration from AWS/GCP/Azure GPUs to CoreWeave, covering cost analysis, Docker containerization, Kubernetes adaptations, deployment strategies, and gotchas.
To install:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin coreweave-pack
```
Related skills in this pack:

- Migrates Docker-based GPU workloads to/from Vast.ai or between providers like AWS/GCP/Azure, with cost comparisons and Dockerfile adaptations for ML infrastructure.
- Provides Kubernetes reference architecture for CoreWeave GPU cloud: ML model serving with vLLM/TGI, shared PVC storage, autoscaling, monitoring, and project structure.
- Launches GPU/TPU clusters, training jobs, and inference servers across 25+ clouds, Kubernetes, and Slurm using SkyPilot; debugs YAML and optimizes costs.
Representative on-demand pricing (approximate, subject to change):

| Instance | AWS | CoreWeave | Savings |
|---|---|---|---|
| 1x A100 80GB | ~$3.60/hr (p4d) | ~$2.21/hr | ~39% |
| 8x A100 80GB | ~$32/hr (p4d.24xl) | ~$17.70/hr | ~45% |
| 1x H100 80GB | ~$6.50/hr (p5) | ~$4.76/hr | ~27% |
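As a quick sanity check on the table above, a short script can turn the hourly deltas into monthly figures (rates are copied from the table; the 730 hours/month always-on assumption is mine):

```python
# Approximate on-demand hourly rates (AWS, CoreWeave) from the table above
rates = {
    "1x A100 80GB": (3.60, 2.21),
    "8x A100 80GB": (32.00, 17.70),
    "1x H100 80GB": (6.50, 4.76),
}

HOURS_PER_MONTH = 730  # always-on workload assumption

for instance, (aws, coreweave) in rates.items():
    monthly_savings = (aws - coreweave) * HOURS_PER_MONTH
    pct = (aws - coreweave) / aws * 100
    print(f"{instance}: save ~${monthly_savings:,.0f}/mo ({pct:.0f}%)")
```

For an always-on 8x A100 node this works out to roughly $10,400/month saved, consistent with the ~45% figure in the table.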
```shell
# If running on bare EC2/GCE, containerize first
docker build -t inference-server:v1 .
docker push ghcr.io/myorg/inference-server:v1
```
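A minimal GPU inference Dockerfile for that build step might look like the following sketch (the CUDA base tag, `server.py`, and `requirements.txt` are placeholder assumptions; pick a CUDA version compatible with the target node drivers):

```dockerfile
# CUDA runtime base image; match the tag to the node driver version
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY server.py .
EXPOSE 8000
CMD ["python3", "server.py"]
```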
Key changes from AWS EKS / GKE:
- GPU node selection uses the `gpu.nvidia.com/class` label instead of `nvidia.com/gpu.product`
- Storage classes are region-specific (e.g., `shared-ssd-ord1`)

Run both old and new infrastructure simultaneously and gradually shift traffic. Decommission the old GPU instances after a validation period.
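The node-label and storage-class changes above might look like this in a Deployment spec (a sketch, not CoreWeave's canonical manifest; the image name, GPU class value, and `ord1` region are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      # CoreWeave: select GPU type via the class label,
      # not nvidia.com/gpu.product as on EKS/GKE
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu.nvidia.com/class
                    operator: In
                    values: ["A100_PCIE_80GB"]
      containers:
        - name: server
          image: ghcr.io/myorg/inference-server:v1
          resources:
            limits:
              nvidia.com/gpu: 1
          volumeMounts:
            - name: models
              mountPath: /models
      volumes:
        - name: models
          persistentVolumeClaim:
            claimName: model-store
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-store
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: shared-ssd-ord1
  resources:
    requests:
      storage: 100Gi
```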
Common migration gotchas:

| Issue | Solution |
|---|---|
| Different CUDA drivers | Match container CUDA to CoreWeave node drivers |
| Storage migration | Use rclone or rsync to move data to CoreWeave PVC |
| DNS changes | Update ingress/load balancer DNS |
| IAM differences | CoreWeave uses kubeconfig, not IAM roles |
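For the storage-migration row, one way to move data is `rclone` run from a utility pod that mounts the target PVC (the remote name `s3`, bucket, and mount path are assumptions; configure the S3 remote first with `rclone config`):

```shell
# Copy training data from S3 into the PVC mounted at /mnt/pvc
# (run inside a pod on the CoreWeave cluster)
rclone copy s3:my-training-bucket/datasets /mnt/pvc/datasets \
    --progress --transfers 16
```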
This completes the CoreWeave skill pack. Start with `coreweave-install-auth` for new deployments.