Generic weighted scoring and threshold-based decision framework for evaluating artifacts against configurable criteria. Triggers: evaluation, scoring, quality gates, decision framework, rubrics, weighted criteria, threshold decisions, artifact evaluation. Use when: implementing evaluation systems, creating quality gates, designing scoring rubrics, building decision frameworks. DO NOT use when: simple pass/fail without scoring needs. Consult this skill when building evaluation or scoring systems.
```
/plugin marketplace add athola/claude-night-market
/plugin install leyline@claude-night-market
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
A generic framework for weighted scoring and threshold-based decision making. Provides reusable patterns for evaluating any artifact against configurable criteria with consistent scoring methodology.
This framework abstracts the common pattern of: define criteria → assign weights → score against criteria → apply thresholds → make decisions.
```yaml
criteria:
  - name: criterion_name
    weight: 0.30  # 30% of total score
    description: What this measures
    scoring_guide:
      90-100: Exceptional
      70-89: Strong
      50-69: Acceptable
      30-49: Weak
      0-29: Poor
```
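Because each weight is a share of the total score (0.30 = 30%), the weights across all criteria should sum to 1.0. A minimal validation sketch, assuming a plain list-of-dicts representation; the `validate_criteria` helper is illustrative, not part of the framework:

```python
def validate_criteria(criteria):
    """Check that each criterion carries a weight and that the weights sum to 1.0."""
    total_weight = sum(c["weight"] for c in criteria)
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError(f"Criterion weights must sum to 1.0, got {total_weight}")
    return criteria

# Illustrative criteria mirroring the definition above
validate_criteria([
    {"name": "criterion_1", "weight": 0.30},
    {"name": "criterion_2", "weight": 0.40},
    {"name": "criterion_3", "weight": 0.30},
])
```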
```python
weights = {"criterion_1": 0.30, "criterion_2": 0.40, "criterion_3": 0.30}
scores = {
    "criterion_1": 85,  # Out of 100
    "criterion_2": 92,
    "criterion_3": 78,
}
total = sum(score * weights[criterion] for criterion, score in scores.items())
# Example: (85 × 0.30) + (92 × 0.40) + (78 × 0.30) = 85.7
```
```yaml
thresholds:
  80-100: Accept with priority
  60-79: Accept with conditions
  40-59: Review required
  20-39: Reject with feedback
  0-19: Reject
```
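One way to apply a threshold table like this is to parse each band and return the matching action. The `decide` helper, the rounding of fractional totals, and the dict representation below are illustrative assumptions, not part of the framework's API:

```python
def decide(total, thresholds):
    """Return the action whose low-high band contains the (rounded) total score."""
    score = round(total)
    for band, action in thresholds.items():
        low, high = (int(x) for x in band.split("-"))
        if low <= score <= high:
            return action
    raise ValueError(f"No threshold band covers score {score}")

thresholds = {
    "80-100": "Accept with priority",
    "60-79": "Accept with conditions",
    "40-59": "Review required",
    "20-39": "Reject with feedback",
    "0-19": "Reject",
}
print(decide(85.7, thresholds))  # Accept with priority
```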
```yaml
# Example: code review rubric
criteria:
  correctness: {weight: 0.40, description: "Does the code work as intended?"}
  maintainability: {weight: 0.25, description: "Is the code readable and maintainable?"}
  performance: {weight: 0.20, description: "Does it meet performance needs?"}
  testing: {weight: 0.15, description: "Are the tests comprehensive?"}
thresholds:
  85-100: Approve immediately
  70-84: Approve with minor feedback
  50-69: Request changes
  0-49: Reject, major issues
```
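As a hypothetical illustration: a pull request scored 90 for correctness, 75 for maintainability, 80 for performance, and 70 for testing would total (90 × 0.40) + (75 × 0.25) + (80 × 0.20) + (70 × 0.15) = 81.25, which falls in the 70-84 band: approve with minor feedback.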
1. Review the artifact against each criterion
2. Assign a 0-100 score for each criterion
3. Calculate: total = Σ(score × weight)
4. Compare the total to the thresholds
5. Take the action for the matching threshold range (see the sketch below)
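Putting the five steps together, a minimal end-to-end sketch that reuses the generic criteria and thresholds from above; the `evaluate` function and its data shapes are illustrative assumptions rather than a real API:

```python
def evaluate(scores, weights, thresholds):
    """Steps 2-3: weight the per-criterion scores; steps 4-5: map the total to an action."""
    total = sum(score * weights[name] for name, score in scores.items())
    rounded = round(total)
    for band, action in thresholds.items():
        low, high = (int(x) for x in band.split("-"))
        if low <= rounded <= high:
            return total, action
    return total, "No matching threshold band"

weights = {"criterion_1": 0.30, "criterion_2": 0.40, "criterion_3": 0.30}
scores = {"criterion_1": 85, "criterion_2": 92, "criterion_3": 78}
thresholds = {
    "80-100": "Accept with priority",
    "60-79": "Accept with conditions",
    "40-59": "Review required",
    "20-39": "Reject with feedback",
    "0-19": "Reject",
}
total, action = evaluate(scores, weights, thresholds)
print(f"{total:.1f}: {action}")  # 85.7: Accept with priority
```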
- Quality Gates: Code review, PR approval, release readiness
- Content Evaluation: Document quality, knowledge intake, skill assessment
- Resource Allocation: Backlog prioritization, investment decisions, triage
```yaml
# In your skill's frontmatter
dependencies: [leyline:evaluation-framework]
```
Then customize the framework for your domain:
- modules/scoring-patterns.md for detailed methodology
- modules/decision-thresholds.md for threshold design