Establishes measurement programs, analyzes metrics with statistical process control, sets baselines, and implements CMMI Level 4 quantitative management to prevent vanity metrics.
`npx claudepluginhub tachyon-beep/skillpacks --plugin axiom-sdlc-engineering`

This skill uses the workspace's default tool permissions.
Implements CMMI **MA** (Measurement & Analysis), **QPM** (Quantitative Project Management), and **OPP** (Organizational Process Performance) through data-driven decision making, statistical process control, and predictive analytics.
Core Principle: Measure what matters, use data to drive decisions, distinguish signal from noise.
Avoid: Measurement theater (tracking without action), vanity metrics (look good but deliver no value), gaming metrics (optimizing the numbers instead of the process).
Triggers:
Use this when:
Do NOT use for:
- platform-integration skill
- governance-and-risk skill

| You Want To... | Reference Sheet | Key Content |
|---|---|---|
| Plan what to measure | measurement-planning.md | GQM methodology, cost/value analysis, anti-patterns |
| Choose metrics | key-metrics-by-domain.md | Quality, velocity, stability, deployment metrics |
| Implement DORA | dora-metrics.md | 4 DORA metrics, collection automation, baselines |
| Analyze variation | statistical-analysis.md | Control charts, SPC, trend detection |
| Establish baselines | process-baselines.md | Historical data analysis, baseline maintenance |
| Make data-driven decisions | quantitative-management.md | QPM, prediction models, Level 4 practices |
| Scale L2→L3→L4 | level-scaling.md | Maturity progression, requirements by level |
**Level 2 (Managed)**
Focus: Capture data for visibility
Measurements:
Tools: Spreadsheets, basic dashboards, manual collection acceptable
Example: Track number of bugs found per sprint, average time to close bugs
Limitation: No statistical analysis, no baselines, reactive only
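A minimal Level 2 sketch in Python, assuming bug records are already exported somewhere simple (the records and field layout below are hypothetical):

```python
from datetime import date
from statistics import mean

# Hypothetical bug records: (sprint, opened, closed). Level 2 only needs raw counts.
bugs = [
    ("S1", date(2025, 1, 6), date(2025, 1, 10)),
    ("S1", date(2025, 1, 7), date(2025, 1, 20)),
    ("S2", date(2025, 1, 21), date(2025, 1, 24)),
]

# Bugs found per sprint (a simple count; no statistics needed at Level 2).
per_sprint = {}
for sprint, opened, closed in bugs:
    per_sprint[sprint] = per_sprint.get(sprint, 0) + 1

# Average time to close, in days.
avg_close_days = mean((closed - opened).days for _, opened, closed in bugs)

print(per_sprint)       # {'S1': 2, 'S2': 1}
print(avg_close_days)   # ≈ 6.7 days
```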
**Level 3 (Defined)**
Focus: Establish norms, detect trends
Measurements:
Tools: Analytics platforms, automated collection, historical databases
Example: Know that your organization typically delivers 30 story points/sprint ± 10, and that the current project is trending at 25 (slightly below average)
Advancement over L2: Baselines provide context, trends show direction
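A minimal sketch of what a Level 3 baseline comparison might look like; the velocity history below is made up:

```python
from statistics import mean, stdev

# Hypothetical historical velocity (story points per sprint) across the organization.
history = [28, 34, 31, 25, 36, 29, 33, 27, 30, 32]

baseline_mean = mean(history)   # organizational norm (~30.5 here)
baseline_sd = stdev(history)    # typical sprint-to-sprint variation

current = 25
delta = (current - baseline_mean) / baseline_sd

print(f"Baseline: {baseline_mean:.1f} ± {baseline_sd:.1f} points/sprint")
print(f"Current project: {current} ({delta:+.1f} standard deviations from baseline)")
```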
**Level 4 (Quantitatively Managed)**
Focus: Predict outcomes, control variation
Measurements:
Tools: Statistical packages (R, Python), control chart software, Monte Carlo simulation
Example: Use control charts to detect when defect rate exceeds upper control limit (special cause), trigger root cause analysis. Use regression model to predict project completion date with 90% confidence interval.
Advancement over L3: Predictive, not just reactive; distinguishes noise from signal
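As a rough illustration of the predictive side of Level 4, the sketch below fits a linear trend to a hypothetical burn-up and turns residual scatter into an approximate confidence band; a real QPM implementation would use a proper prediction interval or Monte Carlo simulation:

```python
from statistics import linear_regression, stdev  # requires Python 3.10+

# Hypothetical burn-up: sprint number vs cumulative story points completed.
sprints = [1, 2, 3, 4, 5, 6]
completed = [28, 55, 86, 112, 143, 170]
scope = 300  # total points in scope (illustrative)

# Fit the completion trend.
fit = linear_regression(sprints, completed)

# Point estimate: sprint at which the trend reaches full scope.
eta = (scope - fit.intercept) / fit.slope

# Crude ~90% band from residual spread, expressed in sprints.
residuals = [y - (fit.slope * x + fit.intercept) for x, y in zip(sprints, completed)]
spread = 1.645 * stdev(residuals) / fit.slope

print(f"Projected completion: sprint {eta:.1f} (±{spread:.1f} sprints at ~90% confidence)")
```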
| Anti-Pattern | Symptom | Better Approach |
|---|---|---|
| Measurement Theater | Tracking many metrics, no action taken | Use GQM to link metrics to decisions |
| Vanity Metrics | Numbers look impressive but don't drive improvement | Focus on actionable metrics with clear business value |
| Gaming Metrics | Teams optimize numbers instead of processes | Measure outcomes, not activities; use multiple metrics |
| Lagging-Only | Only measure results after the fact | Balance lagging with leading indicators |
| Analysis Paralysis | Too many metrics, can't make decisions | Focus on critical few (3-5 key metrics) |
| Flying Blind | No metrics, decisions based on opinion | Start with Level 2 basics, build up |
| Over-Precision | Measuring to 3 decimal places when ±20% is noise | Match precision to decision granularity |
REQM (Requirements Management):
CM (Configuration Management):
VER/VAL (Verification/Validation):
RSKM (Risk Management):
DAR (Decision Analysis & Resolution):
Cross-references:
- requirements-lifecycle - Metrics for requirement quality
- design-and-build - Metrics for development process
- governance-and-risk - Risk and compliance metrics
- platform-integration - Metric collection automation

Goal: Start tracking basic metrics for visibility
- ./measurement-planning.md - Learn GQM methodology

Example metrics for first program:
Goal: Industry-standard DevOps metrics
- ./dora-metrics.md - Understand the 4 metrics
- ./measurement-planning.md - Ensure DORA aligns with goals

Goal: Detect process instability automatically
- ./statistical-analysis.md - Learn control charts

Example: Defect escape rate control chart with UCL=15%, LCL=2%, current value 18% → investigate
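A minimal sketch of that check, computing 3-sigma control limits from a fabricated baseline period (so the limits will not match the 15%/2% figures above exactly; the mechanics are the point):

```python
from statistics import mean, stdev

# Hypothetical weekly defect escape rates (%) from the baseline period.
history = [8.0, 9.5, 7.0, 10.0, 8.5, 9.0, 7.5, 10.5, 8.0, 9.0]

centre = mean(history)
sigma = stdev(history)
ucl = centre + 3 * sigma            # upper control limit
lcl = max(centre - 3 * sigma, 0.0)  # lower control limit (rates cannot be negative)

current = 18.0
if not (lcl <= current <= ucl):
    print(f"{current}% is outside control limits ({lcl:.1f}%–{ucl:.1f}%): investigate special cause")
else:
    print(f"{current}% is within normal variation: no action needed")
```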
Goal: Use quantitative data for project decisions
- ./quantitative-management.md - Learn QPM practices
- ./process-baselines.md - Understand baseline usage

Q: How many metrics should I track?
A: Level 2: 3-5 basic metrics. Level 3: 5-10 organizational baselines. Level 4: 3-5 with statistical control. Focus beats breadth.

Q: How long does it take to establish a baseline?
A: Minimum 4 weeks for an initial baseline. Prefer 12 weeks (one quarter) for statistical validity. Update quarterly or when the process changes.

Q: What's the difference between MA and QPM?
A: MA (Level 2+) is Measurement & Analysis: defining, collecting, and analyzing metrics. QPM (Level 4) uses statistical process control and prediction models to manage projects quantitatively.

Q: Do I need Level 3 before Level 4?
A: Yes. Level 4 requires organizational baselines (Level 3). You can't do statistical process control without knowing what "normal" looks like.
Q: Leading vs lagging indicators - what's the difference?
A: Lagging: Measure results after the fact (defects found, deployment time). Leading: Predict future outcomes (code review coverage, test coverage). You need both. See ./key-metrics-by-domain.md for examples.
Q: How do I avoid measurement theater?
A: Use GQM methodology (see ./measurement-planning.md). Every metric must answer a question that drives a decision. If you can't explain the decision, don't track the metric.
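One way to make that concrete is to record each metric with its goal, question, and decision, and refuse to track anything whose decision field is empty. A minimal sketch; the goal, question, and threshold below are illustrative, not prescribed values:

```python
from dataclasses import dataclass

# Minimal GQM record: every metric is justified by a question and the decision it drives.
@dataclass
class GqmMetric:
    goal: str
    question: str
    metric: str
    decision: str

escape_rate = GqmMetric(
    goal="Improve released quality",
    question="Are defects escaping to production?",
    metric="Defect escape rate (% of defects found post-release)",
    decision="If above the baseline UCL, pause feature work and run root cause analysis",
)

# If you cannot fill in `decision`, the metric is measurement theater -- drop it.
print(escape_rate)
```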
Q: When should I use control charts?
A: When you have 20+ data points and want to distinguish normal variation from abnormal (special cause). See ./statistical-analysis.md for guidance.
- ../requirements-lifecycle/SKILL.md - Requirements management metrics
- ../design-and-build/SKILL.md - Development process metrics
- ../governance-and-risk/SKILL.md - Risk and compliance metrics
- ../platform-integration/SKILL.md - Metric collection automation in GitHub/Azure DevOps

Last Updated: 2026-01-25