From coreweave-pack
Migrate ML workloads from AWS/GCP/Azure to CoreWeave GPU cloud. Use when moving inference services from hyperscaler GPU instances, migrating training pipelines, or evaluating CoreWeave vs cloud GPU costs. Trigger with phrases like "migrate to coreweave", "coreweave migration", "move from aws to coreweave", "coreweave vs aws gpu".
```shell
npx claudepluginhub flight505/skill-forge --plugin coreweave-pack
```
| Instance | AWS | CoreWeave | Savings |
|---|---|---|---|
| 1x A100 80GB | ~$3.60/hr (p4d) | ~$2.21/hr | ~39% |
| 8x A100 80GB | ~$32/hr (p4d.24xl) | ~$17.70/hr | ~45% |
| 1x H100 80GB | ~$6.50/hr (p5) | ~$4.76/hr | ~27% |
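The hourly deltas compound quickly for always-on capacity. A quick back-of-envelope sketch of monthly totals for an 8x A100 node, assuming ~730 hours per month and the approximate rates from the table (actual pricing varies by region and commitment):

```shell
# Multiply the approximate hourly rates by ~730 hours/month.
aws_month=$(awk 'BEGIN { printf "%.0f", 32.00 * 730 }')
cw_month=$(awk 'BEGIN { printf "%.0f", 17.70 * 730 }')
echo "8x A100, 24/7: AWS ~\$${aws_month}/mo vs CoreWeave ~\$${cw_month}/mo"
# → 8x A100, 24/7: AWS ~$23360/mo vs CoreWeave ~$12921/mo
```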
```shell
# If running on bare EC2/GCE, containerize first
docker build -t inference-server:v1 .
docker push ghcr.io/myorg/inference-server:v1
```
Key changes from AWS EKS / GKE:
- `gpu.nvidia.com/class` node labels instead of `nvidia.com/gpu.product`
- CoreWeave storage classes (e.g. `shared-ssd-ord1`) instead of EBS- or Persistent Disk-backed classes

Run both old and new infrastructure simultaneously and gradually shift traffic.
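The `gpu.nvidia.com/class` selector change can be sketched as a pod spec fragment. This is illustrative only: the class value and manifest names are placeholders, not a verified CoreWeave catalog entry.

```shell
# Hypothetical pod targeting a CoreWeave GPU class via node affinity.
# The value "A100_PCIE_80GB" is an example; check your cluster's node
# labels with: kubectl get nodes --show-labels
cat <<'EOF' > gpu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-server
spec:
  containers:
    - name: server
      image: ghcr.io/myorg/inference-server:v1
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class
                operator: In
                values: ["A100_PCIE_80GB"]
EOF
kubectl apply -f gpu-pod.yaml
```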
Decommission the old GPU instances after a validation period.
| Issue | Solution |
|---|---|
| Different CUDA drivers | Match container CUDA to CoreWeave node drivers |
| Storage migration | Use rclone or rsync to move data to CoreWeave PVC |
| DNS changes | Update ingress/load balancer DNS |
| IAM differences | CoreWeave uses kubeconfig, not IAM roles |
This completes the CoreWeave skill pack. Start with coreweave-install-auth for new deployments.