Check organization and project resource quota usage before deploying workloads. Covers org-level quota (CPU, memory, storage, pods, public IPv4, object storage) and per-project usage via Organization and Project status fields. Use this before creating VMs, apps, databases, clusters, or EIPs to avoid quota-exceeded errors.
```shell
npx claudepluginhub kube-dc/kube-dc-public --plugin kube-dc
```

This skill uses the workspace's default tool permissions.
Run a quota check before:

- creating VMs
- deploying apps or databases
- provisioning Kubernetes clusters
- allocating public EIPs

Also use it when troubleshooting workloads that fail with `exceeded quota` errors.
The Organization resource exposes aggregated usage across all projects in .status.quotaUsage. Values are refreshed every 5–7 minutes by the platform controller.
```shell
kubectl get organization {org} -n {org} \
  -o jsonpath='{.status.quotaUsage}' | jq .
```
Expected output:
```json
{
  "cpu": { "used": "18.975", "hard": "26" },
  "memory": { "used": "63.6Gi", "hard": "70Gi" },
  "storage": { "used": "443.2Gi", "hard": "460Gi" },
  "pods": { "used": "33", "hard": "500" },
  "publicIPv4": { "used": "3", "hard": "3" },
  "objectStorage": { "used": "", "hard": "500Gi" },
  "lastUpdated": "2026-04-07T20:55:42Z"
}
```
Field reference:

- `cpu` — cores (decimal), e.g. `"18.975"` = 18,975 millicores
- `memory` / `storage` — GiB consumed vs. the plan's hard limit
- `publicIPv4` — count of `externalNetworkType: public` EIPs across all org namespaces
- `objectStorage` — hard limit from the plan; `used` is populated asynchronously
- `lastUpdated` — timestamp of the last controller refresh (may be up to 7 minutes old)

Check a single field:
```shell
# CPU remaining
kubectl get organization {org} -n {org} \
  -o jsonpath='{.status.quotaUsage.cpu}' | jq .
# → { "hard": "26", "used": "18.975" }
```
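The headroom arithmetic behind such a check can be sketched in a few lines of Python. This is an illustration using the sample values above, not part of the kube-dc tooling:

```python
# Sketch: compute remaining CPU headroom from the quotaUsage payload.
# The sample values are copied from the jsonpath output shown above.
quota = {"cpu": {"used": "18.975", "hard": "26"}}

used = float(quota["cpu"]["used"])   # cores, decimal (18.975 = 18,975 millicores)
hard = float(quota["cpu"]["hard"])
remaining = hard - used

print(f"CPU remaining: {remaining:.3f} cores")  # → CPU remaining: 7.025 cores
```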
Each Project resource exposes per-namespace usage in .status.quotaUsage:
```shell
kubectl get project {project} -n {org} \
  -o jsonpath='{.status.quotaUsage}' | jq .
```
Expected output:
```json
{
  "cpu": { "used": "6.72", "hard": "26" },
  "memory": { "used": "16.824Gi", "hard": "70Gi" },
  "storage": { "used": "147.4Gi", "hard": "460Gi" },
  "pods": { "used": "12", "hard": "500" },
  "perProjectQuotaSet": false,
  "lastUpdated": "2026-04-07T20:55:00Z"
}
```
- `hard` shows the org-wide limit when `perProjectQuotaSet: false`, or the per-project cap when set by an admin
- `perProjectQuotaSet: true` means this project has an explicit `ResourceQuota/project-quota` that may be tighter than the org limit

All projects at a glance:
```shell
kubectl get projects -n {org} \
  -o custom-columns='PROJECT:.metadata.name,CPU_USED:.status.quotaUsage.cpu.used,CPU_HARD:.status.quotaUsage.cpu.hard,MEM_USED:.status.quotaUsage.memory.used,MEM_HARD:.status.quotaUsage.memory.hard'
```
quotaUsage is refreshed every 5–7 min. For the live enforcement state (what Kubernetes is actively enforcing right now), query the underlying ResourceQuota objects:
```shell
# All quotas in a project namespace
kubectl get resourcequota -n {org}-{project}

# Detailed breakdown with usage bars
kubectl describe resourcequota -n {org}-{project}
```
The `hrq.hnc.x-k8s.io` quota is the organization-wide HNC-propagated limit. The `project-quota` quota (if present) is the per-project cap set by an admin.
Note: `kubectl describe resourcequota` shows raw Kubernetes units — millicores for CPU (e.g. `6720m`) and bytes for memory/storage (e.g. `18064129473`). Use `.status.quotaUsage` on the `Project` or `Organization` resource for human-readable values.
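The conversion the note describes can be sketched as two hypothetical helpers (these functions are illustrative, assuming the `m`-suffix millicore and raw-byte forms shown above):

```python
# Sketch: convert raw ResourceQuota units (as printed by
# `kubectl describe resourcequota`) into the human-readable
# form used by .status.quotaUsage.

def cpu_to_cores(raw: str) -> float:
    """'6720m' (millicores) -> 6.72 cores; plain '26' -> 26.0."""
    return float(raw[:-1]) / 1000 if raw.endswith("m") else float(raw)

def bytes_to_gib(raw_bytes: int) -> float:
    """Raw byte count -> GiB (1 GiB = 2**30 bytes)."""
    return raw_bytes / 2**30

print(cpu_to_cores("6720m"))                 # → 6.72
print(round(bytes_to_gib(18064129473), 3))   # → 16.824 (matches 16.824Gi above)
```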
```
cpu:     used=6.72   / hard=26    → 19.28 cores free ✅
memory:  used=16Gi   / hard=70Gi  → 54Gi free ✅
storage: used=147Gi  / hard=460Gi → 313Gi free ✅
```
Proceed with the deployment.
Any field where used / hard > 0.8 (80%) is worth flagging before proceeding with large workloads.
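The 80% rule of thumb can be sketched as a small check over the quotaUsage payload — a hypothetical helper, assuming the plain-number and Gi-suffixed quantity forms shown in the outputs above:

```python
# Sketch: flag any quota field above the 80% threshold.

def to_number(q: str) -> float:
    """'63.6Gi' -> 63.6, '18.975' -> 18.975, '33' -> 33.0."""
    return float(q[:-2]) if q.endswith("Gi") else float(q)

def flag_near_limit(quota_usage: dict, threshold: float = 0.8) -> list:
    flagged = []
    for field, vals in quota_usage.items():
        # Skip lastUpdated (a plain string) and fields whose "used"
        # is not yet populated (e.g. objectStorage).
        if not isinstance(vals, dict) or not vals.get("used"):
            continue
        if to_number(vals["used"]) / to_number(vals["hard"]) > threshold:
            flagged.append(field)
    return flagged

sample = {
    "cpu": {"used": "18.975", "hard": "26"},
    "memory": {"used": "63.6Gi", "hard": "70Gi"},
    "pods": {"used": "33", "hard": "500"},
    "publicIPv4": {"used": "3", "hard": "3"},
    "objectStorage": {"used": "", "hard": "500Gi"},
    "lastUpdated": "2026-04-07T20:55:42Z",
}
print(flag_near_limit(sample))  # → ['memory', 'publicIPv4']
```

CPU sits at ~73% and is not flagged; memory (~91%) and publicIPv4 (100%) are.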
```
publicIPv4: used=3 / hard=3 → 0 free ❌
```

Options:

- release an unused `externalNetworkType: public` EIP to free an address
- upgrade to a pool with a higher Public IPv4 allowance (see the plan table below)
When a workload fails with:
```
exceeded quota: plan-quota, requested: requests.cpu=500m,
used: requests.cpu=25500m, limited: requests.cpu=26
```
```shell
kubectl get organization {org} -n {org} -o jsonpath='{.status.quotaUsage}' | jq .

kubectl get projects -n {org} \
  -o custom-columns='PROJECT:.metadata.name,CPU:.status.quotaUsage.cpu.used,MEM:.status.quotaUsage.memory.used'
```
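To find where the quota went, sum the per-project figures — they should add up to the org-level `used` value. A minimal sketch with hypothetical project names (`demo`, `staging`, `prod` are illustrative, not from the platform):

```python
# Sketch: locate the heaviest CPU consumer from per-project usage
# (numbers as returned by the custom-columns query above).
projects = {"demo": 6.72, "staging": 4.5, "prod": 7.755}  # hypothetical

total = sum(projects.values())
top = max(projects, key=projects.get)
print(f"total={total:.3f} cores, heaviest={top} ({projects[top]} cores)")
# → total=18.975 cores, heaviest=prod (7.755 cores)
```

If the total matches the org-level `used` but no single project stands out, freeing quota means trimming requests across several namespaces rather than one.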
| Resource | Dev Pool | Pro Pool | Scale Pool |
|---|---|---|---|
| CPU (requests) | 4 cores | 8 cores | 16 cores |
| Memory | 8 Gi | 24 Gi | 56 Gi |
| Storage | 60 Gi | 160 Gi | 320 Gi |
| Pods | 100 | 200 | 500 |
| Public IPv4 | 1 | 1 | 3 |
| Object Storage | 20 Gi | 100 Gi | 500 Gi |
Turbo x1 adds: +2 CPU, +4 Gi RAM, +20 Gi storage (€9/mo). Turbo x2 adds: +4 CPU, +8 Gi RAM, +40 Gi storage (€16/mo).
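The effective limits for a pool plus a Turbo add-on are simple sums of the figures above. A sketch (the pool and add-on figures are copied from the table; the helper itself is illustrative):

```python
# Sketch: effective quota for a pool plus an optional Turbo add-on
# (cpu in cores, memory and storage in Gi).
pools = {
    "dev":   {"cpu": 4,  "memory": 8,  "storage": 60},
    "pro":   {"cpu": 8,  "memory": 24, "storage": 160},
    "scale": {"cpu": 16, "memory": 56, "storage": 320},
}
turbo = {
    "x1": {"cpu": 2, "memory": 4, "storage": 20},
    "x2": {"cpu": 4, "memory": 8, "storage": 40},
}

def effective(pool: str, addon: str = None) -> dict:
    base = dict(pools[pool])
    if addon:
        for k, v in turbo[addon].items():
            base[k] += v
    return base

print(effective("pro", "x2"))  # → {'cpu': 12, 'memory': 32, 'storage': 200}
```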
- `publicIPv4` quota is hard — no burst. Check before allocating any `externalNetworkType: public` EIP
- `quotaUsage.lastUpdated` may be up to 7 minutes stale; use `kubectl describe resourcequota` for real-time data when timing matters