From agent-almanac
Configures Prometheus Alertmanager with routing trees, receivers (Slack, PagerDuty, email), inhibition rules, silences, and notification templates for proactive monitoring and incident alerting.
npx claudepluginhub pjt222/agent-almanac
Set up Prometheus alerting rules and Alertmanager for reliable, actionable incident notifications.
See Extended Examples for complete configuration files and templates.
Install and configure Alertmanager to receive alerts from Prometheus.
Docker Compose deployment (basic structure):
version: '3.8'
services:
  alertmanager:
    image: prom/alertmanager:v0.26.0
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
    # ... (see EXAMPLES.md for complete configuration)
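The same Compose file typically runs Prometheus next to Alertmanager. A minimal sketch of that companion service, sitting under services: alongside alertmanager (the image tag, port, and volume paths here are assumptions, not part of the excerpt above):

  prometheus:
    image: prom/prometheus:v2.48.0
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./rules:/etc/prometheus/rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.enable-lifecycle'  # enables the POST /-/reload used later in this guide
    depends_on:
      - alertmanager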
Basic Alertmanager configuration (alertmanager.yml excerpt):
global:
  resolve_timeout: 5m
  slack_api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'

route:
  receiver: 'default-receiver'
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  routes:
    - match:
        severity: critical
      receiver: pagerduty-critical
# ... (see EXAMPLES.md for complete routing, inhibition rules, and receivers)
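The route above names two receivers that the excerpt does not show; both must be defined under receivers: in the same file. A minimal sketch of what they might look like (the Slack channel and the PagerDuty key placeholder are assumptions):

receivers:
  - name: 'default-receiver'
    slack_configs:
      - channel: '#alerts'
        send_resolved: true
  - name: 'pagerduty-critical'
    pagerduty_configs:
      - routing_key: 'YOUR_PAGERDUTY_INTEGRATION_KEY'
        send_resolved: true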
Configure Prometheus to use Alertmanager (prometheus.yml):
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093
      timeout: 10s
      api_version: v2
Expected: Alertmanager UI accessible at http://localhost:9093, Prometheus "Status > Alertmanagers" shows UP status.
On failure:
- docker logs alertmanager
- curl http://alertmanager:9093/api/v2/status
- curl -X POST <SLACK_WEBHOOK_URL> -d '{"text":"test"}'
- amtool check-config alertmanager.yml

Create alerting rules that fire when conditions are met.
Create alerting rules file (/etc/prometheus/rules/alerts.yml excerpt):
groups:
  - name: instance_alerts
    interval: 30s
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
          team: infrastructure
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
          description: "{{ $labels.instance }} has been down for >5min."
          runbook_url: "https://wiki.example.com/runbooks/instance-down"

      - alert: HighCPUUsage
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
# ... (see EXAMPLES.md for complete alerts)
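As one more illustration of the same rule shape, a hedged sketch of a disk-space alert (the 15% threshold, severity, and filesystem filter are assumptions; adjust to your node_exporter labels):

      - alert: LowDiskSpace
        expr: (node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}) * 100 < 15
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Low disk space on {{ $labels.instance }} ({{ $labels.mountpoint }})"
          description: "Less than 15% space left on {{ $labels.mountpoint }}."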
Alert design best practices:
- for duration: Prevents flapping alerts. Use 5-10 minutes for most alerts.

Load rules into Prometheus:
# prometheus.yml
rule_files:
- "rules/*.yml"
Validate and reload:
promtool check rules /etc/prometheus/rules/alerts.yml
curl -X POST http://localhost:9090/-/reload
Expected: Alerts visible in Prometheus "Alerts" page, alerts fire when thresholds exceeded, Alertmanager receives fired alerts.
On failure:
- promtool check rules /etc/prometheus/rules/alerts.yml

Design readable, actionable notification messages.
Create template file (/etc/alertmanager/templates/default.tmpl excerpt):
{{ define "slack.default.title" }}
[{{ .Status | toUpper }}] {{ .GroupLabels.alertname }}
{{ end }}
{{ define "slack.default.text" }}
{{ range .Alerts }}
*Alert:* {{ .Labels.alertname }}
*Severity:* {{ .Labels.severity }}
*Summary:* {{ .Annotations.summary }}
{{ if .Annotations.runbook_url }}*Runbook:* {{ .Annotations.runbook_url }}{{ end }}
{{ end }}
{{ end }}
# ... (see EXAMPLES.md for complete email and PagerDuty templates)
Use templates in receivers:
receivers:
  - name: 'slack-custom'
    slack_configs:
      - channel: '#alerts'
        title: '{{ template "slack.default.title" . }}'
        text: '{{ template "slack.default.text" . }}'
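For these template definitions to be picked up, the template file also has to be registered at the top level of alertmanager.yml. A minimal sketch (the glob simply mirrors the directory used above):

templates:
  - '/etc/alertmanager/templates/*.tmpl'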
Expected: Notifications formatted consistently, include all relevant context, actionable with runbook links.
On failure:
- amtool template test --config.file=alertmanager.yml
- Use {{ . | json }} to debug the template data structure

Optimize alert delivery with intelligent routing rules.
Advanced routing configuration (excerpt):
route:
  receiver: 'default-receiver'
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 30s
  routes:
    - match:
        team: platform
      receiver: 'team-platform'
      routes:
        - match:
            severity: critical
          receiver: 'pagerduty-platform'
          group_wait: 10s
          repeat_interval: 15m
          continue: true  # Also send to Slack
# ... (see EXAMPLES.md for complete routing with time intervals)
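The time intervals referenced above let a route mute or activate notifications on a schedule. A hedged sketch (the interval name, weekdays, and hours are assumptions):

time_intervals:
  - name: business-hours
    time_intervals:
      - weekdays: ['monday:friday']
        times:
          - start_time: '09:00'
            end_time: '17:00'

# A route can then reference it:
#   mute_time_intervals: ['business-hours']    # suppress notifications during the interval
#   active_time_intervals: ['business-hours']  # only notify during the interval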
Grouping strategies:
# Group by alertname: All HighCPU alerts bundled together
group_by: ['alertname']
# Group by alertname AND cluster: Separate notifications per cluster
group_by: ['alertname', 'cluster']
Expected: Alerts routed to correct teams, grouped logically, timing appropriate for severity.
On failure:
- amtool config routes test --config.file=alertmanager.yml --alertname=HighCPU --label=severity=critical
- amtool config routes show --config.file=alertmanager.yml
- Set continue: true if an alert should match multiple routes

Reduce alert noise with inhibition rules and temporary silences.
Inhibition rules (suppress dependent alerts):
inhibit_rules:
  # Cluster down suppresses all node alerts in that cluster
  - source_match:
      alertname: 'ClusterDown'
      severity: 'critical'
    target_match_re:
      alertname: '(InstanceDown|HighCPU|HighMemory)'
    equal: ['cluster']

  # Service down suppresses latency and error alerts
  - source_match:
      alertname: 'ServiceDown'
    target_match_re:
      alertname: '(HighLatency|HighErrorRate)'
    equal: ['service', 'namespace']
# ... (see EXAMPLES.md for more inhibition patterns)
Create silences programmatically:
# Silence during maintenance
amtool silence add \
instance=app-server-1 \
--author="ops-team" \
--comment="Scheduled maintenance" \
--duration=2h
# List and manage silences
amtool silence query
amtool silence expire <SILENCE_ID>
Expected: Inhibition reduces cascade alerts automatically, silences prevent notifications during planned maintenance.
On failure:
Connect Alertmanager to PagerDuty, Opsgenie, Jira, etc.
PagerDuty integration (excerpt):
receivers:
  - name: 'pagerduty'
    pagerduty_configs:
      - routing_key: 'YOUR_INTEGRATION_KEY'
        severity: '{{ .CommonLabels.severity }}'
        description: '{{ range .Alerts.Firing }}{{ .Annotations.summary }}{{ end }}'
        details:
          firing: '{{ .Alerts.Firing | len }}'
          alertname: '{{ .GroupLabels.alertname }}'
# ... (see EXAMPLES.md for complete integration examples)
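Opsgenie, mentioned above, follows the same receiver pattern. A hedged sketch (the API key placeholder and fixed priority are assumptions):

receivers:
  - name: 'opsgenie'
    opsgenie_configs:
      - api_key: 'YOUR_OPSGENIE_API_KEY'
        message: '{{ .GroupLabels.alertname }}'
        description: '{{ range .Alerts.Firing }}{{ .Annotations.summary }}{{ end }}'
        priority: 'P2'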
Webhook for custom integrations:
receivers:
  - name: 'webhook-custom'
    webhook_configs:
      - url: 'https://your-webhook-endpoint.com/alerts'
        send_resolved: true
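When building the receiving endpoint, it helps to know the JSON body Alertmanager POSTs to the webhook URL. An abridged example with illustrative values:

{
  "version": "4",
  "status": "firing",
  "receiver": "webhook-custom",
  "groupLabels": { "alertname": "InstanceDown" },
  "commonLabels": { "severity": "critical" },
  "externalURL": "http://alertmanager:9093",
  "alerts": [
    {
      "status": "firing",
      "labels": { "alertname": "InstanceDown", "instance": "app-server-1" },
      "annotations": { "summary": "Instance app-server-1 is down" },
      "startsAt": "2024-01-01T00:00:00Z",
      "endsAt": "0001-01-01T00:00:00Z"
    }
  ]
}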
Expected: Alerts create incidents in PagerDuty, appear in team communication channels, trigger on-call escalations.
On failure:
- Run Alertmanager with --log.level=debug to inspect notification delivery

Common pitfalls:
- Missing for duration: Alerts without for fire on transient spikes. Always use 5-10 minute windows.
- group_by: ['...'] sends individual notifications. Use specific label grouping.

Related skills:
- setup-prometheus-monitoring - Define metrics and recording rules that feed alerting rules
- define-slo-sli-sla - Generate SLO burn rate alerts for error budget management
- write-incident-runbook - Create runbooks linked from alert annotations
- build-grafana-dashboards - Visualize alert firing history and silence patterns