Conducts blameless post-mortem analysis after incidents: reconstructs timelines from logs/metrics/alerts using bash tools, identifies systemic factors, and generates action items. Use after a production outage, near-miss, or recurring issue.
npx claudepluginhub pjt222/agent-almanac
Lead a blameless post-mortem to learn from incidents and improve system resilience.
Gather all artifacts from the incident:
# Export relevant logs (kubectl logs has no end-time flag; adjust --since-time and trim the tail if needed)
kubectl logs deployment/api-service --timestamps \
--since-time="2025-02-09T10:00:00Z" > incident-logs.txt
# Export Prometheus metrics snapshot
curl -G 'http://prometheus:9090/api/v1/query_range' \
--data-urlencode 'query=rate(http_requests_total{job="api"}[5m])' \
--data-urlencode 'start=2025-02-09T10:00:00Z' \
--data-urlencode 'end=2025-02-09T11:30:00Z' \
--data-urlencode 'step=15s' > metrics.json
# Export alert history
amtool alert query --within=2h alertname="HighErrorRate" --output json > alerts.json
Expected: Logs, metrics, and alerts covering the full incident timeline.
On failure: If data is incomplete, note gaps in the report. Set up longer retention for next time.
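The raw query_range JSON is awkward to read alongside logs. A minimal sketch for flattening it, assuming jq is installed and metrics.json has the standard Prometheus matrix shape (the output file name is illustrative):
# Flatten the first series of the query_range result into timestamp,value rows
jq -r '.data.result[0].values[] | "\(.[0]),\(.[1])"' metrics.json > error-rate.csv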
Create a chronological reconstruction:
## Timeline (all times UTC)
| Time | Event | Source | Actor |
|----------|-------|--------|-------|
| 10:05:23 | First 5xx errors appear | nginx access logs | - |
| 10:06:45 | High error rate alert fires | Prometheus | - |
| 10:08:12 | On-call engineer paged | PagerDuty | System |
| 10:12:00 | Engineer acknowledges alert | PagerDuty | @alice |
| 10:15:30 | Database connection pool exhausted | app logs | - |
| 10:18:45 | Database queries identified as slow | pganalyze | @alice |
| 10:22:10 | Cache layer deployed as mitigation | kubectl | @alice |
| 10:35:00 | Error rate returns to normal | Prometheus | - |
| 10:40:00 | Incident marked resolved | PagerDuty | @alice |
Expected: A clear, minute-by-minute sequence showing what happened and when.
On failure: If timestamps don't line up across sources, ensure all systems use NTP and log in UTC.
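To seed the table, it helps to merge timestamped events from the exported artifacts into one sorted stream. A rough sketch, assuming kubectl logs was run with --timestamps (so each line starts with an ISO-8601 UTC timestamp) and that alerts.json follows Alertmanager's JSON shape with a startsAt field; the grep pattern and output file name are illustrative:
# Merge suspicious log lines and alert firing times, sorted chronologically
{
  grep -iE 'error|timeout|refused' incident-logs.txt
  jq -r '.[] | "\(.startsAt) ALERT \(.labels.alertname) fired"' alerts.json
} | sort > timeline-seed.txt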
Use the Five Whys or fishbone analysis:
## Contributing Factors
### Immediate Cause
- Database connection pool exhausted (max 20 connections)
- Query introduced in v2.3.0 deployment lacked index
### Contributing Factors
1. **Monitoring Gap**: Connection pool utilization not monitored
2. **Testing Gap**: Load testing didn't include new query pattern
3. **Runbook Gap**: No documented procedure for DB connection issues
4. **Capacity Planning**: Pool size unchanged despite 3x traffic growth
### Systemic Issues
- No pre-deployment query plan review
- Database alerts only fire on total failure, not degradation
Expected: Multiple layers of causation identified, avoiding blame.
On failure: If analysis stops at "engineer made a mistake", dig deeper. What allowed that mistake?
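As a worked illustration using only the factors above, a Five Whys chain for this incident might read:
1. Why did requests fail? The database connection pool was exhausted.
2. Why was the pool exhausted? The query introduced in v2.3.0 ran a full table scan under load, tying up connections.
3. Why did that query reach production? There is no pre-deployment query plan review, and load testing didn't cover the new query pattern.
4. Why wasn't the saturation noticed sooner? Connection pool utilization isn't monitored, and database alerts only fire on total failure.
5. Why was the pool too small regardless? Its size hasn't been revisited despite 3x traffic growth.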
Create concrete, trackable improvements:
## Action Items
| ID | Action | Owner | Deadline | Priority |
|----|--------|-------|----------|----------|
| AI-001 | Add connection pool metrics to Grafana | @bob | 2025-02-16 | High |
| AI-002 | Create runbook: DB connection saturation | @alice | 2025-02-20 | High |
| AI-003 | Add DB query plan check to CI/CD | @charlie | 2025-03-01 | Medium |
| AI-004 | Review and adjust connection pool size | @dan | 2025-02-14 | High |
| AI-005 | Implement DB slow query alerts (>100ms) | @bob | 2025-02-23 | Medium |
| AI-006 | Add load testing for new query patterns | @charlie | 2025-03-15 | Low |
Expected: Each action has an owner, deadline, and clear deliverable.
On failure: Vague actions like "improve testing" won't get done. Make them specific.
Use this template structure:
# Post-Mortem: API Service Degradation (2025-02-09)
**Date**: 2025-02-09
**Duration**: 1h 35min (10:05 - 11:40 UTC)
**Severity**: P1 (Critical service degraded)
**Authors**: @alice, @bob
**Reviewed**: 2025-02-10
## Summary
The API service experienced elevated error rates (40% of requests) due to
database connection pool exhaustion. Service was restored by deploying a
cache layer. No data loss occurred.
## Impact
- 40,000 failed requests over 1.5 hours
- 2,000 customers affected
- Revenue impact: ~$5,000 (estimated)
## Root Cause
Query introduced in v2.3.0 deployment performed a full table scan due to
missing index. Under increased load, this saturated the connection pool.
[... timeline, contributing factors, action items as above ...]
## What Went Well
- Alert fired within 90 seconds of first errors
- Mitigation deployed quickly (~14 minutes from page to fix)
- Communication to customers was clear and timely
## Lessons Learned
- Database monitoring is insufficient; need connection-level metrics
- Load testing must cover new query patterns, not just volume
- Connection pool sizing hasn't kept pace with traffic growth
## Prevention
See Action Items above.
Expected: Report shared with team and stakeholders within 48 hours of incident.
On failure: If the report slips past a week, insights grow stale. Prioritize post-mortems.
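Where possible, back the impact numbers in the report with data rather than estimates. A sketch, assuming http_requests_total carries a status label (the metric and label names here are illustrative):
# Count 5xx responses over the incident window (instant query evaluated at the end time)
curl -G 'http://prometheus:9090/api/v1/query' \
--data-urlencode 'query=sum(increase(http_requests_total{job="api",status=~"5.."}[95m]))' \
--data-urlencode 'time=2025-02-09T11:40:00Z'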
Track action item progress:
# Create GitHub issues from action items
gh issue create --title "AI-001: Add connection pool metrics" \
--body "From post-mortem PM-2025-02-09. Owner: @bob. Deadline: 2025-02-16" \
--label "post-mortem,observability" \
--assignee bob
# Set up recurring reminder
# Add to team calendar: Weekly review of open post-mortem items
Expected: Action items tracked in project management tool, reviewed weekly.
On failure: If action items languish, incidents will recur. Assign an executive sponsor for high-priority items.
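With more than a few items, creating the issues in a loop keeps them consistent. A sketch, assuming the action items table has been exported to action-items.csv with columns id,title,owner,deadline (that file name and layout are assumptions):
# Create one issue per action item; strip the leading @ from owners for --assignee
while IFS=, read -r id title owner deadline; do
  gh issue create --title "$id: $title" \
    --body "From post-mortem PM-2025-02-09. Owner: $owner. Deadline: $deadline" \
    --label "post-mortem" \
    --assignee "${owner#@}"
done < action-items.csv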
write-incident-runbook - create runbooks referenced during incidents
configure-alerting-rules - improve alerts based on post-mortem findings