From agent-almanac
Implements AI-powered anomaly detection for operational metrics using Isolation Forest, Prophet, LSTM time series analysis, alert correlation, and root cause analysis to reduce alert fatigue.
npx claudepluginhub pjt222/agent-almanac
> See [Extended Examples](references/EXAMPLES.md) for complete configuration files and templates.
Guides Grafana Cloud AI/ML setup: Assistant for natural language queries/dashboards/incidents, Dynamic Alerting for ML forecasting/outliers, Sift/Knowledge Graph for root cause analysis, LLM plugins for OpenAI/Anthropic integration.
Provides step-by-step guidance and generates code for anomaly detection in data analytics, including SQL queries, data visualization, statistical analysis, and business intelligence.
Forecasts infrastructure metrics like CPU, memory, disk, costs using Prophet/statsmodels for capacity planning and scaling. Visualizes predictions in Grafana with alerts.
Apply machine learning to detect anomalies in operational metrics, correlate alerts, and reduce false positives.
Install dependencies and prepare time series data for analysis.
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install anomaly detection libraries
pip install prophet scikit-learn pandas numpy
pip install tensorflow keras # for LSTM models
pip install pyod # Python Outlier Detection library
pip install statsmodels # for statistical methods
pip install prometheus-api-client # if using Prometheus
# Visualization
pip install plotly matplotlib seaborn
Load and prepare data:
# aiops/data_loader.py
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
from typing import List, Dict
import logging
logging.basicConfig(level=logging.INFO)
# ... (see EXAMPLES.md for complete implementation)
Expected: Time series data loaded with regular intervals, missing values handled, features engineered for ML models.
On failure: If the Prometheus connection fails, verify the URL and network access; if data gaps exist, use forward-fill or interpolation; ensure the timestamp column has a datetime dtype; for large date ranges, process in chunks to avoid memory issues.
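The preparation steps above (regular intervals, gap filling, feature engineering) can be sketched with pandas alone. The function name and feature choices here are illustrative, not the skill's actual implementation:

```python
import pandas as pd
import numpy as np

def prepare_timeseries(df: pd.DataFrame, freq: str = "5min") -> pd.DataFrame:
    """Resample raw samples onto a regular grid, fill gaps, add ML features."""
    df = df.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])  # ensure datetime dtype
    df = df.set_index("timestamp").sort_index()
    df = df.resample(freq).mean()                      # regular intervals
    df["value"] = df["value"].ffill()                  # forward-fill gaps
    df["rolling_mean"] = df["value"].rolling(6, min_periods=1).mean()
    df["rolling_std"] = df["value"].rolling(6, min_periods=1).std().fillna(0.0)
    df["hour"] = df.index.hour                         # coarse seasonality feature
    return df

# Synthetic input with two missing samples to exercise the gap handling.
raw = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=50, freq="5min").delete([10, 11]),
    "value": np.random.default_rng(0).normal(100, 5, 48),
})
prepared = prepare_timeseries(raw)
```

Resampling recreates the two missing 5-minute bins as NaN, which the forward-fill then closes, so downstream models see a gap-free series.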
Detect anomalies using unsupervised Isolation Forest algorithm.
# aiops/isolation_forest_detector.py
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from typing import Dict, List
import joblib
# ... (see EXAMPLES.md for complete implementation)
Expected: Model trained on historical data, anomalies detected with scores, typically 0.5-2% of points flagged as anomalies.
On failure: If too many points are flagged (>5%), reduce the contamination parameter or retrain on a cleaner baseline period; if too few (<0.1%), increase contamination or check feature scaling; verify that features have sufficient variance.
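A minimal, self-contained sketch of the approach on synthetic data (the real detector in EXAMPLES.md works on the loaded metrics instead). Note how contamination directly controls the flagged fraction described above:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic baseline: normal CPU-like values plus a handful of injected spikes.
normal = rng.normal(50, 5, size=(500, 1))
spikes = rng.normal(95, 2, size=(5, 1))
X = np.vstack([normal, spikes])

# Scale features so no single metric dominates the splits.
X_scaled = StandardScaler().fit_transform(X)

# contamination sets the expected anomaly fraction; 1% sits inside the
# 0.5-2% range noted above.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X_scaled)        # -1 = anomaly, 1 = normal
scores = model.decision_function(X_scaled)  # lower = more anomalous
n_anomalies = int((labels == -1).sum())
```

If the flagged fraction drifts far from the contamination value on real data, that is the signal to retune contamination or revisit feature scaling, as the failure notes describe.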
Use Facebook Prophet to model seasonality and detect deviations.
# aiops/prophet_detector.py
from prophet import Prophet
import pandas as pd
import numpy as np
from typing import Dict, Tuple
import logging
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Prophet models capture daily/weekly seasonality, anomalies detected when actual values fall outside 99% confidence interval, forecasts generated for capacity planning.
On failure: If Prophet takes too long (>5 min per metric), reduce history to 30 days or disable weekly_seasonality; if there are too many false positives, increase interval_width to 0.995; if seasonal patterns are missed, add custom seasonalities; ensure timezone consistency in timestamps.
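Fitting Prophet itself is heavyweight, but the interval-based flagging it enables is simple. This sketch mocks the forecast frame that `model.predict()` would return (Prophet's real output uses the same `ds`/`yhat`/`yhat_lower`/`yhat_upper` column names; the values here are made up) and flags points outside the interval:

```python
import pandas as pd
import numpy as np

# Mock stand-in for a fitted model's predict() output over 24 hours.
ts = pd.date_range("2024-01-01", periods=24, freq="h")
forecast = pd.DataFrame({
    "ds": ts,
    "yhat": 100.0,        # point forecast
    "yhat_lower": 90.0,   # lower bound of the uncertainty interval
    "yhat_upper": 110.0,  # upper bound
})

# Observed values: flat at 100 with one injected deviation.
observed = np.full(24, 100.0)
observed[12] = 130.0

merged = forecast.assign(y=observed)
merged["anomaly"] = (merged["y"] < merged["yhat_lower"]) | (merged["y"] > merged["yhat_upper"])
# Score anomalies by distance from the forecast, normalised by interval width.
width = merged["yhat_upper"] - merged["yhat_lower"]
merged["severity"] = np.where(merged["anomaly"], (merged["y"] - merged["yhat"]).abs() / width, 0.0)
anomalies = merged[merged["anomaly"]]
```

With Prophet installed, the forecast frame would come from something like `Prophet(interval_width=0.99).fit(history).predict(future)`; widening `interval_width` shrinks the anomaly set, which is exactly the false-positive knob mentioned above.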
Group related anomalies and identify potential root causes.
# aiops/alert_correlation.py
import pandas as pd
import numpy as np
from sklearn.cluster import DBSCAN
from typing import List, Dict
from datetime import timedelta
import networkx as nx
# ... (see EXAMPLES.md for complete implementation)
Expected: Related anomalies grouped into incidents, root causes identified based on dependency graph, incident summaries generated for investigation.
On failure: If every anomaly becomes its own incident, increase time_window_minutes; if root cause detection is unclear, define metric_relationships explicitly based on your architecture; verify that timestamps are sorted correctly.
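The time-window grouping can be sketched by clustering anomaly timestamps with DBSCAN, using eps as the correlation window; the event times below are made up, and the real pipeline would also carry metric names and the dependency graph:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from datetime import datetime, timedelta

# Hypothetical anomaly timestamps across metrics: two bursts and one isolated event.
base = datetime(2024, 1, 1, 12, 0)
events = [
    base, base + timedelta(minutes=2), base + timedelta(minutes=4),   # burst 1
    base + timedelta(hours=2), base + timedelta(hours=2, minutes=3),  # burst 2
    base + timedelta(hours=6),                                        # isolated
]

# Cluster on the time axis alone: eps is the correlation window in seconds.
t = np.array([[e.timestamp()] for e in events])
labels = DBSCAN(eps=5 * 60, min_samples=2).fit_predict(t)
# Label -1 marks noise: anomalies that belong to no incident.
incidents = {lbl for lbl in labels if lbl != -1}
```

Widening eps merges nearby bursts into one incident; this is the same lever as time_window_minutes in the failure notes.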
Send intelligent alerts with context and suppression of noise.
# aiops/intelligent_alerting.py
import requests
import logging
from typing import Dict, List
from datetime import datetime, timedelta
import json
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: High-severity incidents trigger PagerDuty pages, medium-severity go to Slack, low-severity logged only, duplicate alerts suppressed within 15-minute window.
On failure: Test webhook URLs with curl first; verify the severity calculation produces reasonable values (0.5-0.9 range); check that rate limiting doesn't suppress all alerts; ensure timezone handling is correct for last_alerts tracking.
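The routing and suppression behaviour described above can be sketched as follows. The thresholds and channel names are illustrative, and real delivery would call the PagerDuty and Slack webhooks instead of returning a string:

```python
from datetime import datetime, timedelta

SUPPRESSION_WINDOW = timedelta(minutes=15)
last_alerts: dict = {}  # incident key -> time of the last alert sent

def route_alert(incident_key: str, severity: float, now: datetime) -> str:
    """Route by severity, suppressing duplicates for the same incident."""
    last = last_alerts.get(incident_key)
    if last is not None and now - last < SUPPRESSION_WINDOW:
        return "suppressed"          # duplicate within the window
    last_alerts[incident_key] = now
    if severity >= 0.8:
        return "pagerduty"           # high severity: page on-call
    if severity >= 0.5:
        return "slack"               # medium severity: post to channel
    return "log"                     # low severity: record only

now = datetime(2024, 1, 1, 12, 0)
first = route_alert("cpu-incident", 0.9, now)
dup = route_alert("cpu-incident", 0.9, now + timedelta(minutes=5))
later = route_alert("cpu-incident", 0.6, now + timedelta(minutes=20))
```

Passing `now` explicitly keeps the suppression logic timezone-safe and testable, which addresses the last_alerts pitfall in the failure notes.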
Set up automated pipeline that runs periodically.
# aiops/monitoring_service.py
import schedule
import time
import logging
from datetime import datetime, timedelta
from data_loader import MetricsDataLoader
from isolation_forest_detector import IsolationForestDetector
from prophet_detector import ProphetAnomalyDetector
# ... (see EXAMPLES.md for complete implementation)
Expected: Service runs continuously, detects anomalies every 5 minutes, alerts sent for incidents, logs all activity.
On failure: Verify the scheduler process stays alive (use systemd or supervisor in production); check Prometheus connectivity; ensure models load successfully; implement a dead man's switch alert that fires if the service stops running; monitor memory usage and reload models periodically if it grows.
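A stdlib-only sketch of the periodic loop; the file above uses the schedule library, and this stand-in just shows the crash-tolerant cycle structure (the max_cycles cap exists only so the sketch terminates; production runs indefinitely under systemd or supervisor):

```python
import time

def run_service(detect_cycle, interval: float, max_cycles: int) -> int:
    """Run detect_cycle every `interval` seconds without letting one
    failed cycle kill the service."""
    cycles = 0
    next_run = time.monotonic()
    while cycles < max_cycles:
        now = time.monotonic()
        if now >= next_run:
            try:
                detect_cycle()
            except Exception as exc:   # log and keep the loop alive
                print(f"detection cycle failed: {exc}")
            cycles += 1
            next_run = now + interval
        time.sleep(min(0.01, max(interval, 0.0)))
    return cycles

ran = []
completed = run_service(lambda: ran.append(1), interval=0.0, max_cycles=3)
```

Catching per-cycle exceptions keeps transient Prometheus outages from stopping the service, while the dead man's switch recommended above catches the case where the whole process dies.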
monitor-model-drift - Detect when anomaly detection models degrade
monitor-data-integrity - Data quality checks before anomaly detection
setup-prometheus-monitoring - Collect operational metrics
forecast-operational-metrics - Capacity planning with Prophet forecasts