From coreweave-pack
Set up local development workflow for CoreWeave GPU deployments. Use when building containers locally, testing YAML manifests, or iterating on model serving configurations before deploying. Trigger with phrases like "coreweave dev setup", "coreweave local testing", "develop for coreweave", "coreweave container build".
Install the pack:

```bash
npx claudepluginhub flight505/skill-forge --plugin coreweave-pack
```
Local development workflow for CoreWeave: build containers, test YAML manifests with dry-run, push to registry, and deploy to CoreWeave CKS.
First, run the auth setup:

```bash
coreweave-install-auth setup
```

A typical project layout:

```
my-inference-service/
├── Dockerfile
├── src/
│   ├── server.py           # Inference server code
│   └── model_config.py     # Model configuration
├── k8s/
│   ├── deployment.yaml     # GPU deployment manifest
│   ├── service.yaml        # Service and ingress
│   └── hpa.yaml            # Horizontal pod autoscaler
├── scripts/
│   ├── build.sh            # Build and push container
│   └── deploy.sh           # Deploy to CoreWeave
├── .env.local
└── Makefile
```
```bash
# Build locally
docker build -t my-inference:latest .

# Tag for registry
docker tag my-inference:latest ghcr.io/myorg/my-inference:v1.0.0

# Push
docker push ghcr.io/myorg/my-inference:v1.0.0
```
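Drift between the tag you push and the tag in the manifest is a common source of image pull errors. A small sketch of a tag helper, assuming the `ghcr.io` registry used above; the function name is hypothetical, not part of the pack:

```python
import re


def registry_tag(org: str, name: str, version: str) -> str:
    """Build a fully qualified image reference like ghcr.io/myorg/my-inference:v1.0.0.

    Rejects versions that are not of the form v<major>.<minor>.<patch>,
    so mutable tags like 'latest' never reach the registry.
    """
    if not re.fullmatch(r"v\d+\.\d+\.\d+", version):
        raise ValueError(f"expected a vX.Y.Z version, got {version!r}")
    return f"ghcr.io/{org}/{name}:{version}"


print(registry_tag("myorg", "my-inference", "v1.0.0"))
# ghcr.io/myorg/my-inference:v1.0.0
```

Generating the reference in one place lets `build.sh` and the manifest templating share it instead of hand-editing both.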
```bash
# Dry-run against the CoreWeave cluster (server-side validation)
kubectl apply -f k8s/deployment.yaml --dry-run=server

# Diff against current cluster state
kubectl diff -f k8s/deployment.yaml

# Check that resource requests match available GPU node types
kubectl get nodes -l gpu.nvidia.com/class=A100_PCIE_80GB --no-headers | wc -l
```
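The node count alone only tells half the story: requested GPUs per pod times replicas must fit the pool. A sketch of that arithmetic, assuming 8 GPUs per node (a hypothetical helper, not part of the pack; it ignores other workloads and fragmentation, so treat a passing check as necessary, not sufficient):

```python
def fits_gpu_pool(replicas: int, gpus_per_pod: int,
                  nodes: int, gpus_per_node: int = 8) -> bool:
    """Rough capacity check: can the requested replicas be scheduled
    on the matching GPU node pool?"""
    return replicas * gpus_per_pod <= nodes * gpus_per_node


# 4 replicas x 2 GPUs on 1 node with 8 GPUs -> fits exactly
print(fits_gpu_pool(replicas=4, gpus_per_pod=2, nodes=1))
# True
```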
```bash
kubectl apply -f k8s/
kubectl rollout status deployment/my-inference
kubectl logs -f deployment/my-inference
```
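The Makefile in the layout above typically ties these steps together. A sketch, assuming the image name and manifests used in this guide:

```makefile
IMAGE   := ghcr.io/myorg/my-inference
VERSION ?= v1.0.0

.PHONY: build push deploy logs

build:
	docker build -t $(IMAGE):$(VERSION) .

push: build
	docker push $(IMAGE):$(VERSION)

deploy:
	kubectl apply -f k8s/
	kubectl rollout status deployment/my-inference

logs:
	kubectl logs -f deployment/my-inference
```

`VERSION` uses `?=` so a release can override it from the command line, e.g. `make push VERSION=v1.0.1`.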
| Error | Cause | Solution |
|---|---|---|
| Image pull backoff | Wrong registry path or missing pull secret | Create an imagePullSecret and reference it in the pod spec |
| CUDA mismatch | Container CUDA version newer than the node driver supports | Use a CUDA base image compatible with the node's driver version |
| Dry-run fails | Invalid manifest | Fix the field flagged in the API server error |
See coreweave-sdk-patterns for inference client patterns.