Analyzes a codebase and prepares it for OtterStack deployment. Use when preparing docker-compose projects, checking OtterStack compatibility, scanning for environment variables, validating compose files, or setting up zero-downtime deployments. Triggers on "prepare for otterstack", "validate compose file", "check deployment readiness", or "scan env vars".
Install:

```bash
npx claudepluginhub jayteealao/agent-skills --plugin daily-carry
```

This skill uses the workspace's default tool permissions.
Analyze a codebase and validate it's ready for OtterStack deployment by checking Docker Compose compatibility, scanning for environment variables, and detecting common failure patterns.
Run these three checks to verify OtterStack readiness:

```bash
# 1. Scan for environment variables
grep -rE '\$\{[A-Z_]+\}|\$[A-Z_]+' docker-compose.yml

# 2. Check compose compatibility (should print nothing)
grep -E "container_name:|env_file:" docker-compose.yml

# 3. Validate syntax
docker compose config --quiet
```
If all checks pass → ready to deploy. If issues found → follow the detailed workflow below.
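The grep-based checks above can be combined into a single helper. This is a minimal sketch (the function name and output format are illustrative, not part of OtterStack); the `docker compose config` step is left out since it needs a running Docker CLI:

```bash
#!/bin/sh
# Sketch: run the grep-based readiness checks against a compose file.
# Prints each issue found; returns the issue count as the exit status.
check_otterstack_readiness() {
    file="$1"
    issues=0
    if grep -q "container_name:" "$file"; then
        echo "ISSUE: container_name directive found (remove it)"
        issues=$((issues + 1))
    fi
    if grep -q "env_file:" "$file"; then
        echo "ISSUE: env_file directive found (move vars to environment:)"
        issues=$((issues + 1))
    fi
    echo "Variables referenced in compose file:"
    grep -oE '\$\{[A-Z_][A-Z0-9_]*\}' "$file" | sort -u
    return "$issues"
}
```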
Different languages use different patterns for environment variables:
**Node.js / TypeScript:**

```bash
grep -r "process\.env\." --include="*.js" --include="*.ts"
```

**Python:**

```bash
grep -r "os\.getenv\|os\.environ" --include="*.py"
```

**Ruby:**

```bash
grep -r "ENV\[" --include="*.rb"
```

**Go:**

```bash
grep -r "os\.Getenv" --include="*.go"
```
Find all variables referenced in the compose file:

```bash
grep -oE '\$\{[A-Z_][A-Z0-9_]*\}' docker-compose.yml | sort -u
```
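A slightly richer scan also captures `${VAR:-default}` forms and splits the result into required and optional variables. A sketch (the function name is illustrative):

```bash
#!/bin/sh
# Sketch: classify variables referenced in a compose file by whether
# they carry an inline default (${VAR:-default}) or must be supplied.
list_compose_vars() {
    all=$(grep -oE '\$\{[A-Z_][A-Z0-9_]*(:-[^}]*)?\}' "$1" | sort -u)
    echo "Required (no default):"
    printf '%s\n' "$all" | grep -v ':-' | tr -d '${}'
    echo "Optional (with default):"
    printf '%s\n' "$all" | grep ':-' | sed -E 's/\$\{([A-Z_][A-Z0-9_]*).*/\1/'
}
```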
Detect networks defined in the Docker Compose file to ensure proper container connectivity:
Find network definitions:

```bash
grep -A 5 "^networks:" docker-compose.yml
```

Extract network names:

```bash
# Get all network names from the networks section (assumes two-space indentation)
grep -A 10 "^networks:" docker-compose.yml | grep -E "^  [a-z]" | awk '{print $1}' | sed 's/:$//'
```
Identify the default network: when no networks are declared, Docker Compose creates `{project}_default`.

Check service network attachments:

```bash
# Find services that specify networks (assumes two-space indentation)
grep -B 5 "networks:" docker-compose.yml | grep -E "^  [a-z]" | awk '{print $1}' | sed 's/:$//'
```
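The grep pipelines above are sensitive to surrounding context; an awk sketch that tracks the top-level `networks:` section is somewhat more robust (the function name is illustrative; it assumes two-space YAML indentation):

```bash
#!/bin/sh
# Sketch: list network names declared under the top-level networks: key.
list_networks() {
    awk '
        /^networks:/      { in_section = 1; next }   # enter the networks section
        /^[^[:space:]]/   { in_section = 0 }         # any new top-level key ends it
        in_section && /^  [a-z]/ {                   # two-space-indented entries only
            gsub(/:.*/, ""); gsub(/[[:space:]]/, ""); print
        }
    ' "$1"
}
```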
Network configuration requirements: use variable substitution such as `${NETWORK_NAME:-app-network}` so the network name can be overridden per deployment.

Example network configuration:

```yaml
services:
  web:
    networks:
      - ${NETWORK_NAME:-app-network}
  api:
    networks:
      - ${NETWORK_NAME:-app-network}

networks:
  app-network:
    name: ${NETWORK_NAME:-app-network}
    external: false
```
Check for ARG and ENV declarations:

```bash
grep -E "^(ENV|ARG)\s+" Dockerfile
```
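The declared names can be pulled out for the readiness report with a small helper. A sketch (function name illustrative; it only captures the first assignment per line):

```bash
#!/bin/sh
# Sketch: list variable names declared via ENV or ARG in a Dockerfile.
list_dockerfile_vars() {
    grep -E '^(ENV|ARG)[[:space:]]+' "$1" | awk '{print $2}' | sed 's/=.*//' | sort -u
}
```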
For each variable found:
Check:

```bash
grep "container_name:" docker-compose.yml
```
Why it fails: OtterStack creates unique container names per deployment (e.g., myapp-abc1234-web-1). Hardcoded names prevent parallel deployments and zero-downtime updates.
Fix: Remove all container_name: directives.
Before:

```yaml
services:
  web:
    container_name: myapp-web  # ❌ Remove this
    image: myapp:latest
```

After:

```yaml
services:
  web:
    # ✅ Let Docker Compose generate names
    image: myapp:latest
```
Check:

```bash
grep "env_file:" docker-compose.yml
```
Why it fails: OtterStack passes --env-file to Docker Compose for variable substitution in the compose file itself. Variables must be in the environment: section to be injected into containers.
Fix: Move to environment: section with variable substitution.
Before:

```yaml
services:
  web:
    env_file: .env  # ❌ This doesn't work with OtterStack
```

After:

```yaml
services:
  web:
    environment:  # ✅ Use environment section
      DATABASE_URL: ${DATABASE_URL}
      SECRET_KEY: ${SECRET_KEY}
      LOG_LEVEL: ${LOG_LEVEL:-INFO}  # With default
```
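Migrating a long `.env` file by hand is tedious. A sketch that prints an `environment:` block with `${VAR}` substitution for every key in the file, as a starting point (the helper name is illustrative; review the output before pasting it in):

```bash
#!/bin/sh
# Sketch: emit an environment: block from the keys of a .env file.
env_file_to_environment_block() {
    echo "    environment:"
    grep -E '^[A-Z_][A-Z0-9_]*=' "$1" | cut -d= -f1 | while read -r name; do
        printf '      %s: ${%s}\n' "$name" "$name"
    done
}
```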
Check:

```bash
grep "traefik.http.routers.*.priority" docker-compose.yml
```
Why it fails: OtterStack manages Traefik priority labels automatically for zero-downtime deployments. Static priorities conflict with this mechanism.
Fix: Remove priority labels, keep other Traefik labels.
Before:

```yaml
labels:
  - "traefik.http.routers.myapp.rule=Host(`example.com`)"
  - "traefik.http.routers.myapp.priority=100"  # ❌ Remove this
```

After:

```yaml
labels:
  - "traefik.http.routers.myapp.rule=Host(`example.com`)"
  # ✅ OtterStack manages priorities automatically
```
Check:

```bash
grep -A5 "healthcheck:" docker-compose.yml
```
Why it matters: OtterStack waits for containers to be healthy before routing traffic. Without health checks, containers are immediately considered healthy (which may not be accurate).
Best practice: Define explicit health checks.
Example:

```yaml
services:
  web:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
```
Critical: Use 127.0.0.1 not localhost to avoid IPv6 issues (see Common Failures below).
Check:

```bash
docker compose config --quiet
```
If this command fails, the compose file has syntax errors that must be fixed before deployment.
Check:

```bash
grep -E "node-gyp|native|binding|better-sqlite3|bcrypt|sharp" package.json
```
Problem: Native modules compiled on your dev machine won't work in the production container due to different architectures.
Solution: Use multi-stage build and rebuild in production stage.
Example fix in Dockerfile:

```dockerfile
# Production stage
FROM node:20-slim

# Install build tools for native modules
RUN apt-get update && \
    apt-get install -y build-essential python3 && \
    apt-get clean

# Copy node_modules from builder
COPY --from=builder /build/node_modules ./node_modules

# Rebuild native modules for production architecture
RUN npm rebuild better-sqlite3

# Rest of Dockerfile...
```
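The package.json scan can be wrapped as a small helper that reports exactly which of the commonly problematic packages from the check above are present. A sketch (function name illustrative; the package list is not exhaustive):

```bash
#!/bin/sh
# Sketch: flag package.json dependencies that commonly need native compilation.
check_native_deps() {
    grep -oE '"(better-sqlite3|bcrypt|sharp|node-gyp)"' "$1" | tr -d '"' | sort -u
}
```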
Check:

```bash
grep -A2 "volumes:" docker-compose.yml | grep -E "\.db|/data"
```
Problem: Container user may not have write permissions to database directory.
Solution: Use named volumes OR ensure directory ownership in Dockerfile.
Named volume approach (recommended):

```yaml
services:
  web:
    volumes:
      - db-data:/app/data  # Named volume with correct permissions

volumes:
  db-data:
    name: myapp-db-data
```
Dockerfile ownership approach:

```dockerfile
# Create directories with correct ownership
RUN mkdir -p /app/data && chown -R app:app /app/data

# Switch to non-root user
USER app
```
Check:

```bash
grep "COPY.*migrations\|COPY.*prisma\|COPY.*db" Dockerfile
```
Problem: Migration files not copied to container or copied to wrong location.
Solution: Ensure migrations are copied to where your application expects them.
Example:

```dockerfile
# If your app looks for migrations relative to dist/index.js:
COPY src/migrations ./dist/migrations

# Not:
COPY src/migrations ./src/migrations  # ❌ Wrong location
```
Check:

```bash
grep -A3 "healthcheck:" docker-compose.yml | grep "localhost"
```
Problem: BusyBox wget and some curl versions try IPv6 (::1) first when resolving localhost, but app may only bind to IPv4 (0.0.0.0).
Solution: Use 127.0.0.1 instead of localhost in health checks.
Before:

```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "http://localhost:80/health"]  # ❌
```

After:

```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "http://127.0.0.1:80/health"]  # ✅
```
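The localhost-to-127.0.0.1 change can also be applied mechanically. A sketch that edits the file in place, keeping a `.bak` backup (the function name is illustrative):

```bash
#!/bin/sh
# Sketch: rewrite http://localhost to http://127.0.0.1 in a compose file.
fix_healthcheck_host() {
    sed -i.bak 's#http://localhost#http://127.0.0.1#g' "$1"
}
```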
Check:

```bash
grep -A2 "build:" docker-compose.yml | grep -v "context:"
```
Problem: Build may fail or use wrong directory if context not explicit.
Solution: Always specify context: and dockerfile:.
Before:

```yaml
build: .  # ❌ Implicit context
```

After:

```yaml
build:  # ✅ Explicit context
  context: .
  dockerfile: Dockerfile
```
Check:

```bash
# Check if networks are defined
grep "^networks:" docker-compose.yml

# Check if services specify networks
grep -A 2 "services:" docker-compose.yml | grep "networks:"
```
Problem: Services on different networks (or default networks) may have connectivity issues in complex deployments. OtterStack needs to know which network containers communicate on.
Solution: Define an explicit network and use variable substitution for flexibility.
Before:

```yaml
services:
  web:
    image: myapp:latest
    # No network specified - uses auto-generated default
```

After:

```yaml
services:
  web:
    image: myapp:latest
    networks:
      - ${NETWORK_NAME:-app-network}

networks:
  app-network:
    name: ${NETWORK_NAME:-app-network}
    external: false
```
For services that need to be publicly accessible, detect existing Traefik configuration and prepare for enhanced labels:
Check for Traefik-enabled services:

```bash
grep -B 5 "traefik.enable" docker-compose.yml | grep -E "^  [a-z].*:" | sed 's/:$//'
```

Extract router names:

```bash
grep "traefik.http.routers" docker-compose.yml | sed -E 's/.*traefik\.http\.routers\.([^.]+)\..*/\1/' | sort -u
```
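A follow-up sanity check: every router should also declare a load-balancer port, or Traefik cannot route to the container. A sketch, assuming router and service labels share a name as in the label template in this document (the function name is illustrative):

```bash
#!/bin/sh
# Sketch: report Traefik routers that lack a matching loadbalancer port label.
check_router_ports() {
    file="$1"
    routers=$(grep -oE 'traefik\.http\.routers\.[^.]+' "$file" | sed 's/.*routers\.//' | sort -u)
    for r in $routers; do
        if ! grep -q "traefik\.http\.services\.$r\.loadbalancer\.server\.port" "$file"; then
            echo "MISSING PORT: $r"
        fi
    done
}
```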
Identify exposed services: for each service with Traefik labels, note its router name, port, and domain variable (e.g., `${API_DOMAIN}`).

Services exposed via Traefik should have these labels at minimum:
```yaml
labels:
  - "traefik.enable=true"
  # Routing
  - "traefik.http.routers.{service}.rule=Host(`${SERVICE_DOMAIN}`)"
  - "traefik.http.routers.{service}.entrypoints=web,websecure"
  # TLS
  - "traefik.http.routers.{service}.tls=true"
  - "traefik.http.routers.{service}.tls.certresolver=myresolver"
  # Load balancer
  - "traefik.http.services.{service}.loadbalancer.server.port={PORT}"
  # CrowdSec middleware (security)
  - "traefik.http.routers.{service}.middlewares=crowdsec-{service}@docker"
  - "traefik.http.middlewares.crowdsec-{service}.plugin.crowdsec-bouncer.enabled=true"
  - "traefik.http.middlewares.crowdsec-{service}.plugin.crowdsec-bouncer.crowdseclapikey=${CROWDSEC_API_KEY}"
```
For each exposed service, these environment variables are required:

- `${SERVICE}_DOMAIN` - The domain name for the service (e.g., `API_DOMAIN=api.example.com`; typical values: `aperture.example.com`, `api.myapp.io`)
- `CROWDSEC_API_KEY` - CrowdSec bouncer API key for the security middleware
- `NETWORK_NAME` - The Docker network name for Traefik communication (e.g., `app-network`)

Generate a readiness report following this template:
## OtterStack Readiness Report for [Project Name]
### ✅ Compatible Checks
- Docker Compose syntax validation passed
- Health checks defined for all services
- Uses environment: section for variables
- No hardcoded container names
### 🌐 Networks Detected
**Default network:** `app-network`
**All networks:**
- `app-network` (default)
- `traefik-network` (external, for Traefik communication)
**Service attachments:**
- `web` → app-network, traefik-network
- `api` → app-network, traefik-network
- `db` → app-network (internal only)
**Recommendations:**
- Add `NETWORK_NAME` environment variable for flexibility
- Ensure Traefik is on `traefik-network` for routing
### 🔒 Traefik Exposure
**Exposed services:**
1. **API Service** (`api`)
- Router: `aperture-api`
- Port: 8080
- Domain variable: `API_DOMAIN`
- CrowdSec: Enabled
2. **Web Service** (`web`)
- Router: `aperture-web`
- Port: 3000
- Domain variable: `WEB_DOMAIN`
- CrowdSec: Enabled
**Required for exposure:**
- `API_DOMAIN` - Domain for API service (e.g., api.example.com)
- `WEB_DOMAIN` - Domain for web service (e.g., app.example.com)
- `CROWDSEC_API_KEY` - CrowdSec bouncer key (shared)
- `NETWORK_NAME` - Network for Traefik communication
**Traefik configuration:**
- TLS enabled with Let's Encrypt (certresolver: myresolver)
- HTTP and HTTPS entrypoints
- CrowdSec bouncer middleware for DDoS protection
### ⚠️ Issues Found
1. **Container name conflict** (docker-compose.yml:15)
- Found: `container_name: myapp-web`
- Fix: Remove this line
2. **Health check IPv6 issue** (docker-compose.yml:23)
- Found: `test: ["CMD", "curl", "http://localhost:8080/health"]`
- Fix: Change to `http://127.0.0.1:8080/health`
3. **Native module needs rebuild** (package.json)
- Found: better-sqlite3 in dependencies
- Fix: Add `RUN npm rebuild better-sqlite3` to Dockerfile
### Environment Variables
**Required (no defaults):**
- `DATABASE_URL` - PostgreSQL connection string
- `SECRET_KEY` - Application secret for sessions
- `API_TOKEN` - External service authentication
**Optional (with defaults):**
- `LOG_LEVEL` - Logging level (default: INFO)
- `PORT` - Application port (default: 8080)
- `NODE_ENV` - Environment mode (default: production)
**Sensitive (never commit):**
- `DATABASE_URL` contains password
- `SECRET_KEY` is cryptographic secret
- `API_TOKEN` is authentication credential
### Recommended Fixes
#### 1. Update docker-compose.yml
```yaml
# Remove line 15:
- container_name: myapp-web

# Update health check (line 23):
healthcheck:
  test: ["CMD", "curl", "-f", "http://127.0.0.1:8080/health"]
  interval: 10s
  start_period: 30s
```

#### 2. Update Dockerfile

Add after installing dependencies:

```dockerfile
# Rebuild native modules for production container
RUN npm rebuild better-sqlite3
```

#### 3. Set environment variables

On the OtterStack server:

```bash
otterstack env set myapp DATABASE_URL "postgresql://user:pass@host/db"
otterstack env set myapp SECRET_KEY "your-secret-key-here"
otterstack env set myapp API_TOKEN "your-api-token"
otterstack deploy myapp
```

Once all boxes are checked, proceed with deployment.
## Success Criteria
You're ready to deploy when:
✅ **Compose file passes all checks:**
- No `container_name:` directives
- Uses `environment:` section (not `env_file:`)
- No static Traefik priorities
- Health checks use `127.0.0.1` not `localhost`
- `docker compose config --quiet` succeeds
✅ **Common failures addressed:**
- Native modules have rebuild step in Dockerfile
- Migration files copied to correct paths
- Database directories have proper permissions
✅ **Environment variables documented:**
- All required variables identified
- Sensitive variables flagged
- Default values noted
✅ **Fixes committed:**
- Changes pushed to git repository
- Ready for OtterStack to pull latest commit
## Example: Preparing Aperture
**Scan Results:**
```bash
# Environment variables found
grep -rE '\$\{[A-Z_]+\}' docker-compose.yml
# Found: DATABASE_URL, SECRET_KEY, API_TOKEN, LOG_LEVEL, PORT

# Compatibility checks
grep "container_name:" docker-compose.yml
# Found: container_name: aperture-gateway (line 12)
# Found: container_name: aperture-web (line 45)

grep "env_file:" docker-compose.yml
# No issues - uses environment: section ✅

# Common failures
grep -E "better-sqlite3" package.json
# Found: better-sqlite3 in dependencies

grep "COPY.*migrations" Dockerfile
# Found: COPY src/migrations ./src/migrations
# Issue: App looks in ./dist/migrations at runtime
```
**Fixes Applied:**

- Removed `container_name:` directives
- Added `RUN npm rebuild better-sqlite3` to the Dockerfile
- Changed the migration copy to `COPY src/migrations ./dist/migrations`
- Used `127.0.0.1` instead of `localhost` in health checks

**Result:** Successful deployment after applying these fixes.
This skill is automatically invoked during Phase 2: Preparation of the /deploy-otterstack command.
When invoked from the deployment orchestration command, this skill:
The orchestration command uses the following outputs from this skill:
Blocking Issues:
Environment Variables:
Readiness Status:
- `ready: true/false` - Whether the project can proceed to deployment
- `blocking_issue_count` - Number of critical issues found
- `warning_count` - Number of non-critical warnings

```
/deploy-otterstack invoked
        ↓
Invoke prepare-otterstack-deployment skill
        ↓
Generate readiness report
        ↓
If blocking issues found:
  - Show user the issues and fixes
  - Prompt: Review / Continue anyway / Cancel
  - If Cancel: Exit deployment
  - If Continue: Proceed with warnings
        ↓
Pass required_env_vars[] to Setup phase
        ↓
Setup phase uses this to configure environment
```
The command parses these sections from the skill's output:
Some issues can be automatically identified and fixed by the orchestration command:
| Issue Type | Auto-Fixable | Action |
|---|---|---|
| Missing env vars | Partial | Prompt user during env scan |
| container_name present | No | User must edit compose file |
| env_file present | No | User must migrate to environment: section |
| Static Traefik priorities | No | User must remove priority labels |
| Health check uses localhost | No | User must change to 127.0.0.1 |
| Native modules need rebuild | No | User must update Dockerfile |
| IPv6/IPv4 conflicts | No | User must update health checks |
/deploy-otterstack - Full orchestration command that uses this skill in Phase 2