Manages environment-specific Kubernetes configurations using Kustomize overlays and patches. Use when applying different configurations for development, staging, and production environments.
Master environment-specific Kubernetes configuration management using Kustomize overlays, strategic merge patches, and JSON patches for development, staging, and production environments.
Overlays enable environment-specific customization of Kubernetes resources without duplicating configuration. Each overlay references a base configuration and applies environment-specific patches, transformations, and resource adjustments.
myapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── configmap.yaml
│   └── ingress.yaml
└── overlays/
    ├── development/
    │   ├── kustomization.yaml
    │   ├── replica-patch.yaml
    │   └── namespace.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   ├── replica-patch.yaml
    │   ├── resource-patch.yaml
    │   └── namespace.yaml
    └── production/
        ├── kustomization.yaml
        ├── replica-patch.yaml
        ├── resource-patch.yaml
        ├── security-patch.yaml
        ├── hpa.yaml
        ├── pdb.yaml
        ├── network-policy.yaml
        └── namespace.yaml
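To see exactly what an overlay renders before anything touches a cluster, build it locally and diff it against the live state. A minimal sketch, assuming the myapp/ layout above and a configured kubectl context:

# Render the base or an overlay without applying it
kustomize build base
kustomize build overlays/development
kubectl kustomize overlays/staging   # same result via kubectl's built-in kustomize

# Compare the rendered production overlay against the cluster, then apply it
kubectl diff -k overlays/production
kubectl apply -k overlays/production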
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
name: myapp-base
# Resources to include
resources:
- deployment.yaml
- service.yaml
- configmap.yaml
- ingress.yaml
# Common labels applied to all resources
commonLabels:
app: myapp
managed-by: kustomize
# Common annotations
commonAnnotations:
version: "1.0.0"
team: platform
# Name prefix for all resources
namePrefix: myapp-
# Default namespace (can be overridden in overlays)
namespace: default
# Image transformations
images:
- name: myapp
newName: registry.example.com/myapp
newTag: latest
# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:latest
ports:
- containerPort: 8080
name: http
env:
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: config
key: log-level
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: http
initialDelaySeconds: 5
periodSeconds: 5
# base/service.yaml
apiVersion: v1
kind: Service
metadata:
name: service
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: myapp
# base/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
log-level: "info"
cache-enabled: "true"
timeout: "30"
# base/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-service
port:
number: 80
# overlays/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Reference the base
resources:
- ../../base
- namespace.yaml
# Override namespace
namespace: development
# Development-specific labels
commonLabels:
environment: development
cost-center: engineering
# Development-specific annotations
commonAnnotations:
deployed-by: ci-cd
environment: dev
# Name suffix for development resources
nameSuffix: -dev
# Image overrides for development
images:
- name: myapp
newName: registry.example.com/myapp
newTag: dev-latest
# ConfigMap overrides
configMapGenerator:
- name: config
behavior: merge
literals:
- log-level=debug
- cache-enabled=false
- debug-mode=true
# Replica overrides
replicas:
- name: myapp-deployment
count: 1
# Strategic merge patches
patches:
- path: replica-patch.yaml
target:
kind: Deployment
name: myapp-deployment
# Inline patches
patchesStrategicMerge:
- |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
spec:
containers:
- name: myapp
env:
- name: ENVIRONMENT
value: development
- name: DEBUG
value: "true"
# overlays/development/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: development
labels:
environment: development
team: platform
# overlays/development/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
template:
spec:
containers:
- name: myapp
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- namespace.yaml
namespace: staging
commonLabels:
environment: staging
cost-center: engineering
commonAnnotations:
deployed-by: ci-cd
environment: staging
nameSuffix: -staging
images:
- name: myapp
newName: registry.example.com/myapp
newTag: staging-v1.2.3
configMapGenerator:
- name: config
behavior: merge
literals:
- log-level=info
- cache-enabled=true
- cache-ttl=300
replicas:
- name: myapp-deployment
count: 2
patches:
- path: replica-patch.yaml
target:
kind: Deployment
name: myapp-deployment
- path: resource-patch.yaml
target:
kind: Deployment
name: myapp-deployment
patchesStrategicMerge:
- |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
spec:
containers:
- name: myapp
env:
- name: ENVIRONMENT
value: staging
- name: METRICS_ENABLED
value: "true"
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myapp
topologyKey: kubernetes.io/hostname
# overlays/staging/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
# overlays/staging/resource-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
spec:
containers:
- name: myapp
resources:
requests:
memory: "256Mi"
cpu: "200m"
limits:
memory: "512Mi"
cpu: "500m"
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- namespace.yaml
- hpa.yaml
- pdb.yaml
- network-policy.yaml
namespace: production
commonLabels:
environment: production
cost-center: product
compliance: pci
commonAnnotations:
deployed-by: ci-cd
environment: production
backup: "true"
nameSuffix: -prod
images:
- name: myapp
newName: registry.example.com/myapp
newTag: v1.2.3
digest: sha256:abc123...
configMapGenerator:
- name: config
behavior: merge
literals:
- log-level=warn
- cache-enabled=true
- cache-ttl=600
- rate-limit-enabled=true
replicas:
- name: myapp-deployment
count: 5
patches:
- path: replica-patch.yaml
target:
kind: Deployment
name: myapp-deployment
- path: resource-patch.yaml
target:
kind: Deployment
name: myapp-deployment
- path: security-patch.yaml
target:
kind: Deployment
name: myapp-deployment
patchesStrategicMerge:
- |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "myapp"
spec:
containers:
- name: myapp
env:
- name: ENVIRONMENT
value: production
- name: METRICS_ENABLED
value: "true"
- name: TRACING_ENABLED
value: "true"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myapp
topologyKey: kubernetes.io/hostname
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- m5.xlarge
- m5.2xlarge
patchesJson6902:
- target:
group: networking.k8s.io
version: v1
kind: Ingress
name: myapp-ingress
patch: |-
- op: replace
path: /spec/rules/0/host
value: myapp.production.example.com
- op: add
path: /metadata/annotations/cert-manager.io~1cluster-issuer
value: letsencrypt-prod
- op: add
path: /spec/tls
value:
- hosts:
- myapp.production.example.com
secretName: myapp-tls
# overlays/production/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
minReadySeconds: 30
# overlays/production/resource-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
spec:
containers:
- name: myapp
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
# overlays/production/security-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: myapp
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
volumeMounts:
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /app/cache
volumes:
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
# overlays/production/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa-prod
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp-deployment-prod
minReplicas: 5
maxReplicas: 20
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 50
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 60
policies:
- type: Percent
value: 100
periodSeconds: 30
- type: Pods
value: 2
periodSeconds: 30
selectPolicy: Max
# overlays/production/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: myapp-pdb-prod
spec:
minAvailable: 3
selector:
matchLabels:
app: myapp
environment: production
# overlays/production/network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myapp-network-policy-prod
spec:
podSelector:
matchLabels:
app: myapp
environment: production
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
- podSelector:
matchLabels:
app: prometheus
ports:
- protocol: TCP
port: 8080
egress:
- to:
- namespaceSelector:
matchLabels:
name: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
- to:
- podSelector:
matchLabels:
app: database
ports:
- protocol: TCP
port: 5432
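JSON 6902 patches apply RFC 6902 operations (add, replace, remove) at explicit paths in a single target resource, which is useful for surgical edits that a strategic merge cannot express. Note that newer Kustomize releases fold patchesJson6902 and patchesStrategicMerge into the unified patches field; the older field names shown below still build but are deprecated.
# JSON 6902 patch: replace the replica count and container image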
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: myapp-deployment
patch: |-
- op: replace
path: /spec/replicas
value: 10
- op: replace
path: /spec/template/spec/containers/0/image
value: registry.example.com/myapp:v2.0.0
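# JSON 6902 patch: add an environment variable and a pod annotation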
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: myapp-deployment
patch: |-
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: NEW_FEATURE_FLAG
value: "true"
- op: add
path: /spec/template/metadata/annotations/sidecar.istio.io~1inject
value: "true"
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: myapp-deployment
patch: |-
- op: remove
path: /spec/template/spec/containers/0/env/2
- op: remove
path: /spec/template/metadata/annotations/deprecated-annotation
# Targeting a patch by label selector instead of name
patches:
- target:
kind: Deployment
labelSelector: "tier=frontend"
patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: not-used
spec:
template:
spec:
containers:
- name: myapp
resources:
limits:
memory: "2Gi"
patches:
- target:
kind: Deployment|StatefulSet
name: myapp-.*
patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: not-used
annotations:
monitoring: "enabled"
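# Patch options: permit the patch to change the resource name but not its kind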
patches:
- path: cpu-patch.yaml
target:
kind: Deployment
options:
allowNameChange: true
allowKindChange: false
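Overlays can also be layered by region: each region overlay uses an environment overlay as its base and adds region-specific labels, configuration, and scheduling constraints.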
overlays/
├── us-east-1/
│   ├── development/
│   ├── staging/
│   └── production/
└── eu-west-1/
    ├── development/
    ├── staging/
    └── production/
# overlays/us-east-1/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../overlays/production
commonLabels:
region: us-east-1
configMapGenerator:
- name: config
behavior: merge
literals:
- region=us-east-1
- s3-bucket=myapp-prod-us-east-1
- cdn-url=https://us-east-1.cdn.example.com
patchesStrategicMerge:
- |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
template:
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/region
operator: In
values:
- us-east-1
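Because the region overlay references the production overlay as its base, rendering it yields the full production configuration plus the region-specific additions. A sketch, assuming the directory layout above:

# Render and apply the us-east-1 production overlay
kustomize build overlays/us-east-1/production
kubectl apply -k overlays/us-east-1/production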
Use the kustomize-overlays skill when you need to:
- Maintain a single base configuration shared across environments
- Apply environment-specific patches for development, staging, and production
- Adjust replicas, resource requests and limits, images, and ConfigMap values per environment
- Add production-only resources such as HorizontalPodAutoscalers, PodDisruptionBudgets, and NetworkPolicies
- Layer region-specific overlays on top of environment overlays for multi-region deployments
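In CI, a common safety net is to render every overlay and run a server-side dry run before promoting a change. A sketch, assuming the overlay directories above and cluster access from the pipeline:

# Fail fast if any overlay no longer builds
for overlay in overlays/development overlays/staging overlays/production; do
  kustomize build "$overlay" > /dev/null || exit 1
done

# Server-side dry run against the target cluster; nothing is applied
kubectl apply -k overlays/production --dry-run=server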