Upgrades CoreWeave Kubernetes deployments: migrates GPU types (A100→H100), CUDA versions, and inference servers via canary rollouts, with kubectl-based rollbacks.
Install:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin coreweave-pack
```

This skill is limited to using the following tools:
```yaml
# Before: A100
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values: ["A100_PCIE_80GB"]

# After: H100 (update the affinity label)
              values: ["H100_SXM5"]
```
| GPU Type | Recommended CUDA | Driver |
|---|---|---|
| A100 | CUDA 12.2+ | 535+ |
| H100 | CUDA 12.4+ | 550+ |
| L40 | CUDA 12.2+ | 535+ |
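The driver minimums in the table above can be checked in shell before migrating; both helper functions below are hypothetical, and in practice the installed version would come from `nvidia-smi --query-gpu=driver_version --format=csv,noheader`:

```shell
# Hedged sketch: verify the installed NVIDIA driver meets the minimum
# for a target GPU type. Helper names are assumptions, not part of
# the coreweave-pack skill itself.
min_driver_for_gpu() {
  case "$1" in
    A100|L40) echo 535 ;;
    H100)     echo 550 ;;
    *)        echo 0 ;;
  esac
}

driver_ok() {
  # $1 = GPU type, $2 = installed driver version (e.g. "550.54.15")
  local need have
  need=$(min_driver_for_gpu "$1")
  have=${2%%.*}   # keep the major version only
  [ "$have" -ge "$need" ]
}

driver_ok H100 550.54.15 && echo "driver OK for H100"
```

An H100 node on a 535-series driver would fail this check, matching the table's requirement of 550+.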
Roll back a failed migration with:

```shell
kubectl rollout undo deployment/my-inference
```
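The canary rollout mentioned above can be sketched as a second, small Deployment pinned to the new GPU class alongside the existing A100 Deployment; all names and the image tag below are hypothetical, and the shared `app` label is assumed to keep the canary behind the same Service:

```yaml
# Hedged sketch: one-replica canary on H100 nodes, assumed names throughout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-inference-canary
spec:
  replicas: 1          # keep the canary small; scale up only after validation
  selector:
    matchLabels:
      app: my-inference
      track: canary
  template:
    metadata:
      labels:
        app: my-inference   # same app label as the stable A100 Deployment
        track: canary
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu.nvidia.com/class
                    operator: In
                    values: ["H100_SXM5"]
      containers:
        - name: server
          image: my-registry/inference:cuda12.4   # assumed image tag
          resources:
            limits:
              nvidia.com/gpu: 1
```

If the canary misbehaves, deleting it (or running `kubectl rollout undo` on a promoted Deployment) returns all traffic to the A100 pods.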
For CI/CD, see coreweave-ci-integration.