Guide for building cloud-native applications following the 12-Factor App methodology with Kubernetes, containers, and modern deployment practices. Claude will use this when designing or refactoring applications for cloud deployment, setting up CI/CD pipelines, or troubleshooting deployment issues.
/plugin marketplace add vinnie357/claude-skills
/plugin install core@vinnie357
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Guide for building scalable, maintainable, and portable cloud-native applications following the 12-Factor App principles and modern extensions.
Use this skill when:
- Designing or refactoring applications for cloud deployment
- Containerizing services and deploying them to Kubernetes
- Setting up CI/CD pipelines
- Troubleshooting deployment, configuration, or scaling issues
One codebase tracked in revision control, many deploys
myapp-repo/
├── src/
├── config/
├── deploy/
│   ├── staging/
│   ├── production/
│   └── development/
└── Dockerfile
Key principles:
- One repository per app, with many deploys (development, staging, production) of the same codebase
- Deploys differ only in configuration, never in code
- Code shared between apps belongs in libraries pulled in as dependencies
Anti-patterns:
- Multiple codebases for one app (that is a distributed system of several apps, each with its own codebase)
- Multiple apps sharing the same codebase
Explicitly declare and isolate dependencies
Declare all dependencies explicitly using package managers:
# Multi-stage build for dependency isolation
FROM node:18-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:18-alpine AS runtime
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
CMD ["node", "index.js"]
Language-specific examples:
- Node.js: package.json and package-lock.json
- Python: requirements.txt or Pipfile.lock
- Java: pom.xml or build.gradle
- Go: go.mod and go.sum
- Elixir: mix.exs and mix.lock
- Rust: Cargo.toml and Cargo.lock
Key principles:
- Declare every dependency, exactly and completely, in a manifest
- Commit lockfiles so builds are reproducible
- Never rely on the implicit existence of system-wide packages or tools on the host
Store config in the environment
All configuration should come from environment variables:
# Elixir - config/runtime.exs
import Config

config :my_app, MyApp.Repo,
  database: System.get_env("DATABASE_NAME") || "my_app_dev",
  username: System.get_env("DATABASE_USER") || "postgres",
  password: System.fetch_env!("DATABASE_PASSWORD"),
  hostname: System.get_env("DATABASE_HOST") || "localhost",
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")
// Node.js
const config = {
  database: {
    url: process.env.DATABASE_URL,
    pool: {
      min: parseInt(process.env.DB_POOL_MIN || '2', 10),
      max: parseInt(process.env.DB_POOL_MAX || '10', 10)
    }
  },
  cache: {
    ttl: parseInt(process.env.CACHE_TTL || '3600', 10)
  }
};
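The parse-with-default pattern above repeats for every numeric setting, so it is worth isolating in one place. A minimal sketch; `envInt` is a hypothetical helper name, not a library function:

```javascript
// Sketch: read an integer from the environment with a fallback.
// Fails loudly on malformed values instead of silently producing NaN.
function envInt(name, fallback) {
  const raw = process.env[name];
  if (raw === undefined || raw === '') return fallback;
  const value = parseInt(raw, 10);
  if (Number.isNaN(value)) {
    throw new Error(`Environment variable ${name} must be an integer, got "${raw}"`);
  }
  return value;
}

// Usage mirrors the config object above
process.env.DB_POOL_MAX = '10';
delete process.env.CACHE_TTL;
const poolMax = envInt('DB_POOL_MAX', 2);   // set -> parsed value
const cacheTtl = envInt('CACHE_TTL', 3600); // unset -> fallback
```

Throwing on malformed values surfaces misconfiguration at startup rather than as a subtle runtime bug.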
Kubernetes ConfigMaps:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres-service"
  CACHE_TTL: "3600"
  LOG_LEVEL: "info"
Kubernetes Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: <base64-encoded>
  JWT_SECRET: <base64-encoded>
  API_KEY: <base64-encoded>
Anti-patterns:
- Constants or credentials hardcoded in source
- Config files checked into version control
- Grouped "environment" configs (development/staging/production blocks) that multiply as deploys grow
- Secrets committed to the repository, even base64-encoded
Treat backing services as attached resources
Connect to all backing services (databases, queues, caches, APIs) via URLs in environment variables:
// Treat all backing services uniformly
const services = {
  database: createConnection(process.env.DATABASE_URL),
  cache: createRedisClient(process.env.REDIS_URL),
  queue: createQueueClient(process.env.RABBITMQ_URL),
  storage: createS3Client(process.env.S3_ENDPOINT)
};
Kubernetes Service Discovery:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
Key principles:
- Make no distinction between local and third-party services; both are attached resources reached by URL
- Resources can be attached and detached without code changes
- Swapping a local PostgreSQL for a managed one should require only a config change
Strictly separate build and run stages
Three distinct stages:
- Build: convert the repo into an executable bundle (compile, fetch dependencies, build the image)
- Release: combine the build with the deploy's config; every release gets a unique ID
- Run: launch the app's processes against a selected release
# GitHub Actions CI/CD Pipeline
name: Build and Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push to registry
        run: docker push myapp:${{ github.sha }}
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Kubernetes
        run: kubectl set image deployment/myapp myapp=myapp:${{ github.sha }}
Key principles:
- Releases are immutable; any change creates a new release
- Code cannot be changed at runtime
- Unique release IDs (such as the git SHA used above) make rollback trivial
Execute the app as one or more stateless processes
Application processes should be stateless and share-nothing. Store persistent state in backing services.
// ❌ Bad: In-memory session store
app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false
  // Uses memory store by default
}));

// ✓ Good: Store session in Redis
app.use(session({
  store: new RedisStore({
    client: redisClient,
    prefix: 'sess:'
  }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false
}));
Kubernetes Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3 # Can scale horizontally
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp  # must match the selector above
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Key principles:
- Never rely on sticky sessions; any process can serve any request
- Memory and local disk may be used only as a brief, single-request cache
- Anything that must survive a restart belongs in a backing service
Export services via port binding
Applications are self-contained and bind to a port:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.listen(port, '0.0.0.0', () => {
  console.log(`Server running on port ${port}`);
});

# Phoenix endpoint config
config :my_app, MyAppWeb.Endpoint,
  http: [port: String.to_integer(System.get_env("PORT") || "4000")],
  server: true
Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
Key principles:
- The app is completely self-contained; it does not rely on a webserver injected at runtime
- Bind to 0.0.0.0, not localhost, so the port is reachable from outside the container
- One app's exported port can become another app's backing service URL
Scale out via the process model
Scale by adding more processes (horizontal scaling), not by making processes larger (vertical scaling):
# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
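The HPA's documented scaling rule is a simple ratio: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A sketch of that arithmetic:

```javascript
// Sketch of the Kubernetes HPA scaling formula:
// desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization),
// clamped to [minReplicas, maxReplicas].
function desiredReplicas(current, currentUtilization, targetUtilization, min, max) {
  const desired = Math.ceil(current * (currentUtilization / targetUtilization));
  return Math.min(max, Math.max(min, desired));
}

// With the manifest above (target 70%, min 2, max 10):
desiredReplicas(3, 140, 70, 2, 10); // load doubled -> 6 replicas
desiredReplicas(3, 20, 70, 2, 10);  // mostly idle -> clamped to minReplicas
```

The real controller adds stabilization windows and tolerance bands, but the core decision is this ratio.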
Process types (Procfile concept):
web: node server.js
worker: node worker.js
scheduler: node scheduler.js
Key principles:
- Assign each kind of work to a process type (web, worker, scheduler)
- Scale by running more processes of a type, not by making one process bigger
- Never daemonize or write PID files; rely on the operating system's process manager or the orchestrator
Maximize robustness with fast startup and graceful shutdown
const server = app.listen(port, () => {
  console.log('Server started');
});

// Graceful shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    // Close database connections
    db.close();
    // Close other connections
    redis.quit();
    console.log('Process terminated');
    process.exit(0);
  });
});
Kubernetes lifecycle hooks:
spec:
  containers:
    - name: myapp
      image: myapp:latest
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 15"]
  terminationGracePeriodSeconds: 30  # must exceed the preStop sleep plus shutdown time
Key principles:
- Minimize startup time so new processes can serve traffic quickly
- Shut down gracefully on SIGTERM: stop accepting work, finish in-flight requests, release connections
- Be robust against sudden death; return unfinished jobs to the queue
Keep development, staging, and production as similar as possible
Docker Compose for local development:
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
  redis:
    image: redis:7-alpine
Key principles:
- Keep the time gap small: code written today deploys within hours, not weeks
- Keep the personnel gap small: developers who write the code are involved in deploying it
- Keep the tools gap small: use the same backing services (PostgreSQL, Redis) in development and production
Treat logs as event streams
Write all logs to stdout/stderr, let the environment handle aggregation:
// Structured logging to stdout
const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console()
  ]
});

logger.info('User logged in', {
  userId: 123,
  ip: '192.168.1.1',
  userAgent: 'Mozilla/5.0...'
});

# Elixir structured logging
require Logger

Logger.info("User logged in",
  user_id: 123,
  ip: "192.168.1.1"
)
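The same one-JSON-object-per-line convention needs no library at all; a minimal dependency-free sketch (the `logEvent` helper name is hypothetical):

```javascript
// Minimal structured logging: one JSON object per line on stdout.
// Log aggregators (Fluentd, Loki, the ELK stack) parse each line as JSON.
function logEvent(level, message, fields = {}) {
  const entry = { timestamp: new Date().toISOString(), level, message, ...fields };
  console.log(JSON.stringify(entry));
  return entry; // returned only so callers/tests can inspect it
}

logEvent('info', 'User logged in', { userId: 123 });
```

The point is the contract, not the library: unbuffered JSON lines on stdout, with routing and storage left to the environment.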
Key principles:
- The app never concerns itself with routing or storage of its output stream
- Each running process writes its event stream, unbuffered, to stdout
- The execution environment captures, collates, and routes the streams (e.g. Fluentd, Loki, ELK)
Anti-patterns:
- Writing to log files on the local filesystem
- Managing log rotation inside the application
Run admin/management tasks as one-off processes
Database migrations, console, one-time scripts:
# Kubernetes Job for database migration
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myapp:latest
          command: ["npm", "run", "migrate"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_URL
      restartPolicy: OnFailure
# CronJob for scheduled cleanup
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-cleanup
spec:
  schedule: "0 2 * * *"  # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: myapp:latest
              command: ["npm", "run", "cleanup"]
          restartPolicy: OnFailure
Key principles:
- Run one-off admin tasks in an identical environment to the app's regular processes
- Ship admin code with application code, in the same release, against the same config
- Never run ad hoc scripts from a developer machine against production
Design and document APIs before implementation:
# OpenAPI specification
openapi: 3.0.0
info:
  title: My API
  version: v1
paths:
  /users:
    get:
      summary: List users
      responses:
        '200':
          description: Success
Key principles:
- Design and agree on the contract before writing the implementation
- Version APIs so consumers can migrate on their own schedule
- Generate documentation and client stubs from the specification
Comprehensive observability with metrics, tracing, and monitoring:
# Prometheus ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics
      path: /metrics
Key principles:
- Treat telemetry like logs: emit it, and let the platform collect and store it
- Cover the three pillars: metrics, traces, and logs
- Monitor application performance, platform health, and domain-specific events
Authentication, authorization, and security by design:
// JWT authentication middleware
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  if (!token) return res.sendStatus(401);
  jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
}
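The header-parsing step in the middleware can be isolated into a pure function that is easy to unit-test; `extractBearerToken` is a hypothetical helper, not part of any JWT library:

```javascript
// Sketch: extract the token from an "Authorization: Bearer <token>" header.
// Mirrors the `authHeader && authHeader.split(' ')[1]` line in the middleware,
// but rejects non-Bearer schemes and empty tokens explicitly.
function extractBearerToken(authHeader) {
  if (!authHeader) return null;
  const [scheme, token] = authHeader.split(' ');
  if (scheme !== 'Bearer' || !token) return null;
  return token;
}

extractBearerToken('Bearer abc123');  // -> 'abc123'
extractBearerToken('Basic dXNlcg=='); // -> null (wrong scheme)
```

The middleware would then call this helper and pass the result to `jwt.verify` exactly as above.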
Key principles:
- Security cannot be bolted on later; design it in from the start
- Apply least privilege to every identity, human or service
- Encrypt traffic in transit and keep secrets out of code and images
Validate required configuration at startup:
function validateConfig() {
  const required = ['DATABASE_URL', 'JWT_SECRET', 'REDIS_URL'];
  const missing = required.filter(key => !process.env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// Call before starting the server
validateConfig();
Implement health and readiness endpoints:
// Liveness probe
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString()
  });
});

// Readiness probe
app.get('/ready', async (req, res) => {
  try {
    await db.ping();
    await redis.ping();
    res.status(200).json({ status: 'ready' });
  } catch (err) {
    res.status(503).json({ status: 'not ready', error: err.message });
  }
});
Kubernetes probes:
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
Handle backing service failures gracefully:
async function getCachedData(key) {
  try {
    return await redis.get(key);
  } catch (err) {
    logger.warn('Redis unavailable, falling back to database', { error: err.message });
    return await db.query('SELECT data FROM cache WHERE key = ?', [key]);
  }
}
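Falling back only helps if the failing call actually fails rather than hanging. Capping how long the app waits on a backing service with Promise.race avoids cascading stalls; `withTimeout` is a hypothetical helper sketch, not a library API:

```javascript
// Sketch: fail fast when a backing service is slow, so fallbacks can kick in.
function withTimeout(promise, ms, label = 'operation') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage with the cache lookup above (assumed client call):
// const value = await withTimeout(redis.get(key), 200, 'redis.get');
```

Production systems typically layer retries and circuit breakers on top, but a hard timeout is the first line of defense.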
// DON'T: Branch on environment names
if (process.env.NODE_ENV === 'production') {
  // Different behavior
} else {
  // Different behavior
}
// DO: Use configuration values
const timeout = parseInt(process.env.TIMEOUT || '5000', 10);

// DON'T: Write to the local filesystem
fs.writeFileSync('/tmp/uploads/' + filename, data);
// DO: Use object storage
await s3.putObject({
  Bucket: process.env.S3_BUCKET,
  Key: filename,
  Body: data
});

// DON'T: Store state in memory
const sessions = new Map();
// DO: Use an external store
const session = await redis.get(`session:${sessionId}`);

// DON'T: Hardcode service locations
const db = connect('localhost:5432');
// DO: Use environment variables
const db = connect(process.env.DATABASE_URL);
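A single DATABASE_URL can be decomposed into driver options with Node's built-in WHATWG URL class, so no hand-rolled string parsing is needed; a sketch with no particular database driver assumed:

```javascript
// Sketch: decompose a connection URL using the built-in URL class
// (global in modern Node; non-special schemes like postgresql:// are supported).
function parseDatabaseUrl(raw) {
  const url = new URL(raw);
  return {
    protocol: url.protocol.replace(/:$/, ''),
    host: url.hostname,
    port: url.port ? parseInt(url.port, 10) : undefined,
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    database: url.pathname.replace(/^\//, '')
  };
}

parseDatabaseUrl('postgresql://user:pass@db:5432/myapp');
// -> { protocol: 'postgresql', host: 'db', port: 5432,
//      user: 'user', password: 'pass', database: 'myapp' }
```

Most drivers accept the URL string directly; decomposing is useful when a driver wants discrete options or when only part of the URL (e.g. the host) is needed.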
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z postgres-service 5432; do sleep 1; done']
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: myapp
"The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc)."
"A twelve-factor app never relies on implicit existence of state on the filesystem. Even if a process has written something to disk, it must assume that file won't be available on the next request."
Design applications from day one to be cloud-native, scalable, and maintainable. The investment in following these principles pays dividends in operational simplicity and development velocity.