From agent-almanac
Designs and executes A/B tests for production ML models using traffic splitting, statistical significance testing, and canary/shadow deployments. Use for validating new versions, comparing candidates, or measuring business impacts before rollout.
Install: `npx claudepluginhub pjt222/agent-almanac`
> See [Extended Examples](references/EXAMPLES.md) for complete configuration files and templates.
Creates A/B test configurations for ML deployment operations including model serving, MLOps pipelines, monitoring, and production optimization. Useful when writing or running ML tests.
Guides MLOps workflows for ML model deployment: readiness checklists, serving infrastructure (FastAPI, SageMaker, Triton), inference optimization, versioning, A/B testing, drift detection, retraining, and monitoring.
Guides A/B test setup with mandatory gates for hypothesis validation, metrics definition, sample size calculation, and execution readiness checks.
Execute controlled experiments comparing model versions using traffic splitting and statistical analysis.
Define test parameters, success criteria, and statistical requirements.
# ab_test/experiment_config.py
from dataclasses import dataclass
from typing import List, Dict
import numpy as np
from scipy.stats import norm
@dataclass
# ... (see EXAMPLES.md for complete implementation)
Expected: Experiment configuration with a statistically sound sample-size calculation, typically 5-10k samples per variant for a 5-10% MDE.
On failure: If the required sample size is too large, increase traffic allocation, extend the test duration, or accept a larger MDE; verify that the baseline metric estimate is accurate; consider sequential testing for continuous monitoring.
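For orientation, here is a minimal sketch of what such a configuration could look like, using the standard two-proportion z-test approximation for the sample-size calculation. The class and field names (`ExperimentConfig`, `baseline_rate`, `mde`) are assumptions for illustration, not the skill's actual API; the complete implementation lives in EXAMPLES.md.

```python
# Illustrative sketch only; field names and defaults are assumptions, not the skill's API.
from dataclasses import dataclass
from scipy.stats import norm


@dataclass
class ExperimentConfig:
    name: str
    baseline_rate: float           # champion's current rate on the primary metric (e.g. 0.20 CTR)
    mde: float                     # minimum detectable effect, relative (0.10 = 10% lift)
    alpha: float = 0.05            # two-sided significance level
    power: float = 0.80            # 1 - beta
    challenger_share: float = 0.1  # fraction of traffic routed to the challenger

    def required_sample_size(self) -> int:
        """Per-variant sample size for a two-proportion z-test (normal approximation)."""
        p1 = self.baseline_rate
        p2 = p1 * (1 + self.mde)
        z_alpha = norm.ppf(1 - self.alpha / 2)
        z_beta = norm.ppf(self.power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return int(((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2) + 1


if __name__ == "__main__":
    cfg = ExperimentConfig(name="ranker-v2-vs-v1", baseline_rate=0.20, mde=0.10)
    print(f"~{cfg.required_sample_size():,} samples per variant")
```

With a 20% baseline rate and a 10% relative MDE, this works out to roughly 6,500 samples per variant, consistent with the 5-10k guideline above.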
Set up routing logic that assigns requests to model variants, keeping each user's assignment consistent across requests.
# ab_test/traffic_router.py
import hashlib
import random
from typing import Dict, Optional
from dataclasses import dataclass
import logging
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Consistent user-to-variant assignment, accurate traffic split matching configured percentages, all assignments logged for analysis.
On failure: Verify the hash function produces a uniform distribution (test with 10k user IDs), check that user_id is stable across requests (not session_id), ensure logs capture all prediction events, and validate the traffic split over the first 1,000 requests.
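A minimal sketch of deterministic hash-based assignment under a 90/10 split; the `TrafficRouter` name, split defaults, and log format are assumptions for illustration rather than the skill's actual code.

```python
# Illustrative sketch only; names, defaults, and log format are assumptions.
import hashlib
import logging
from dataclasses import dataclass, field
from typing import Dict

logger = logging.getLogger(__name__)


@dataclass
class TrafficRouter:
    experiment: str
    # Variant name -> traffic share; shares should sum to 1.0.
    splits: Dict[str, float] = field(default_factory=lambda: {"champion": 0.9, "challenger": 0.1})

    def assign(self, user_id: str) -> str:
        # Hashing (experiment, user_id) keeps each user's variant stable for this experiment
        # while reshuffling users when a new experiment starts.
        digest = hashlib.sha256(f"{self.experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
        cumulative = 0.0
        for variant, share in self.splits.items():
            cumulative += share
            if bucket <= cumulative:
                logger.info("assign experiment=%s user=%s variant=%s",
                            self.experiment, user_id, variant)
                return variant
        return next(iter(self.splits))  # floating-point guard


if __name__ == "__main__":
    router = TrafficRouter(experiment="ranker-v2")
    counts: Dict[str, int] = {"champion": 0, "challenger": 0}
    for i in range(10_000):
        counts[router.assign(f"user-{i}")] += 1
    print(counts)  # expect roughly 9000 / 1000
```

The `__main__` block doubles as the uniformity check suggested above: run it over 10k synthetic user IDs and confirm the observed split matches the configured shares.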
Run the challenger model in parallel without affecting users (shadow mode).
# ab_test/shadow_deployment.py
import asyncio
from typing import Dict, Any
import logging
from concurrent.futures import ThreadPoolExecutor
import time
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Champion predictions served with normal latency, challenger predictions logged asynchronously without blocking, prediction differences captured for analysis.
On failure: Set challenger timeout < champion SLA to avoid blocking, handle challenger errors gracefully without affecting champion, monitor memory usage (two models loaded), consider sampling (log only 10% of shadow predictions).
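A minimal sketch of the shadow pattern using asyncio, assuming both models are exposed as plain callables and that Python 3.9+ is available for `asyncio.to_thread`; the class name, timeout, and sampling rate are illustrative assumptions.

```python
# Illustrative sketch only; model callables, timeout, and sampling rate are assumptions.
import asyncio
import logging
import random
import time
from typing import Any, Callable, Dict

logger = logging.getLogger(__name__)


class ShadowDeployment:
    def __init__(self, champion: Callable[[Dict], Any], challenger: Callable[[Dict], Any],
                 challenger_timeout_s: float = 0.2, sample_rate: float = 0.1):
        self.champion = champion
        self.challenger = challenger
        self.challenger_timeout_s = challenger_timeout_s  # keep below the champion's latency SLA
        self.sample_rate = sample_rate                    # log only a fraction of shadow calls

    async def predict(self, request: Dict) -> Any:
        start = time.perf_counter()
        result = self.champion(request)  # user-facing path; latency is unchanged
        champion_ms = (time.perf_counter() - start) * 1000
        # Fire and forget: errors or slowness in the challenger never reach the user.
        asyncio.create_task(self._shadow(request, result, champion_ms))
        return result

    async def _shadow(self, request: Dict, champion_result: Any, champion_ms: float) -> None:
        try:
            challenger_result = await asyncio.wait_for(
                asyncio.to_thread(self.challenger, request),
                timeout=self.challenger_timeout_s,
            )
            if random.random() < self.sample_rate:
                logger.info("shadow champion=%s challenger=%s champion_ms=%.1f",
                            champion_result, challenger_result, champion_ms)
        except Exception as exc:  # never let the shadow path raise into the serving path
            logger.warning("shadow prediction failed: %s", exc)
```

Called from an async framework such as a FastAPI endpoint, `await shadow.predict(request)` returns the champion's result at normal latency while the challenger runs in the background.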
Gather experiment data and perform statistical tests.
# ab_test/analysis.py
import pandas as pd
import numpy as np
from scipy import stats
from typing import Dict, Tuple
import logging
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Statistical test results with p-values, confidence intervals, and a clear decision (rollout/keep/inconclusive), typically available after 7-14 days or once the target sample size is reached.
On failure: Verify ground truth labels are available (may need delayed analysis), check for sample ratio mismatch (SRM) indicating assignment bugs, ensure sufficient sample size reached, look for novelty/primacy effects in early data, consider sequential testing if fixed-horizon test is too slow.
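A minimal sketch of the analysis for a binary success metric, assuming the prediction log has already been joined to ground truth into a DataFrame with `variant` and `converted` columns (both column names are assumptions); it runs a two-proportion z-test plus a simple sample ratio mismatch check.

```python
# Illustrative sketch only; column names and the decision rule are assumptions.
from typing import Dict, Tuple

import numpy as np
import pandas as pd
from scipy import stats


def analyze(df: pd.DataFrame, alpha: float = 0.05, challenger_share: float = 0.5) -> Dict[str, object]:
    champ = df.loc[df["variant"] == "champion", "converted"]
    chall = df.loc[df["variant"] == "challenger", "converted"]
    p1, p2 = champ.mean(), chall.mean()
    n1, n2 = len(champ), len(chall)

    # Two-proportion z-test on the primary metric.
    pooled = (champ.sum() + chall.sum()) / (n1 + n2)
    se_pooled = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se_pooled
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))

    # 95% confidence interval on the absolute lift (unpooled standard error).
    se_diff = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    ci: Tuple[float, float] = (p2 - p1 - z_crit * se_diff, p2 - p1 + z_crit * se_diff)

    # SRM check: compare observed group sizes against the configured traffic split.
    expected = [(n1 + n2) * (1 - challenger_share), (n1 + n2) * challenger_share]
    srm_p = stats.chisquare([n1, n2], f_exp=expected).pvalue

    if srm_p < 0.001:
        decision = "invalid: sample ratio mismatch, check assignment logic"
    elif p_value < alpha:
        decision = "rollout" if p2 > p1 else "keep_champion"
    else:
        decision = "inconclusive"
    return {"lift_abs": p2 - p1, "p_value": p_value, "ci_95": ci, "srm_p": srm_p, "decision": decision}
```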
Continuously check that the challenger doesn't violate safety thresholds.
# ab_test/guardrails.py
import pandas as pd
import logging
from typing import Dict, List
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Guardrail violations detected within 5-15 minutes, automated experiment stop if critical thresholds breached (latency, errors), alerts sent to team.
On failure: Verify guardrail thresholds are realistic (not too tight), ensure monitoring loop is running continuously, check that stop_experiment() function actually updates routing, test alert delivery channels.
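A minimal sketch of a guardrail check that a monitoring loop could run every few minutes, assuming a recent-metrics DataFrame with a `variant` column and one column per guardrail metric; the `Guardrail` dataclass and the `stop_experiment` hook are illustrative assumptions.

```python
# Illustrative sketch only; thresholds, metric names, and the stop hook are assumptions.
import logging
from dataclasses import dataclass
from typing import Callable, List

import pandas as pd

logger = logging.getLogger(__name__)


@dataclass
class Guardrail:
    metric: str             # column in the metrics frame, e.g. "latency_p95_ms" or "error_rate"
    max_value: float        # breach if the challenger's recent mean exceeds this
    critical: bool = False  # critical breaches stop the experiment immediately


def check_guardrails(recent: pd.DataFrame, guardrails: List[Guardrail],
                     stop_experiment: Callable[[str], None]) -> List[str]:
    """Evaluate the challenger's recent window; stop the experiment on any critical breach."""
    challenger = recent[recent["variant"] == "challenger"]
    violations: List[str] = []
    for g in guardrails:
        value = challenger[g.metric].mean()
        if value > g.max_value:
            msg = f"guardrail breach: {g.metric}={value:.3f} > {g.max_value}"
            violations.append(msg)
            logger.warning(msg)
            if g.critical:
                stop_experiment(msg)  # e.g. routes 100% of traffic back to the champion
    return violations
```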
Based on the experiment results, decide whether to roll out the challenger.
# ab_test/rollout_decision.py
import logging
from typing import Dict
from dataclasses import dataclass
logger = logging.getLogger(__name__)
# ... (see EXAMPLES.md for complete implementation)
Expected: Clear decision (full/gradual rollout, keep champion, or extend test) with justification and action items.
On failure: If decision unclear, perform subgroup analysis (by user segment, time of day, device type), check for interaction effects, review business context (e.g., is 2% lift worth engineering cost?), consult with stakeholders.
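A minimal sketch of how the decision rules above could be encoded, assuming the analysis step produces the fields shown; names and thresholds are assumptions, and the output is a recommendation to review with stakeholders rather than an automatic action.

```python
# Illustrative sketch only; field names and thresholds are assumptions.
import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)


@dataclass
class ExperimentResult:
    p_value: float
    lift_abs: float             # challenger minus champion on the primary metric
    guardrail_violations: int
    reached_sample_size: bool


def decide(result: ExperimentResult, alpha: float = 0.05, min_lift: float = 0.0) -> str:
    """Return one of: gradual_rollout, keep_champion, extend_test."""
    if result.guardrail_violations > 0:
        decision = "keep_champion"          # safety breaches override the primary metric
    elif not result.reached_sample_size:
        decision = "extend_test"            # underpowered; a verdict now would be premature
    elif result.p_value < alpha and result.lift_abs > min_lift:
        # Significant win: ramp gradually (e.g. 25% -> 50% -> 100%) rather than switching at once.
        decision = "gradual_rollout"
    elif result.p_value < alpha and result.lift_abs < 0:
        decision = "keep_champion"
    else:
        decision = "extend_test"            # inconclusive
    logger.info("decision=%s p=%.4f lift=%.4f", decision, result.p_value, result.lift_abs)
    return decision
```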
- deploy-ml-model-serving - Model deployment infrastructure and versioning
- monitor-model-drift - Ongoing performance monitoring post-rollout