Use when you need explicit quality criteria and scoring scales to evaluate work consistently, compare alternatives objectively, set acceptance thresholds, or reduce subjective bias, or when the user mentions rubric, scoring criteria, quality standards, evaluation framework, inter-rater reliability, or grading/assessing work.
Creates structured evaluation rubrics with explicit criteria and scoring scales to assess work consistently and objectively. Use when evaluating code, designs, or documents to reduce bias, align teams on quality standards, or when users mention "rubric," "scoring criteria," or "how do we grade this."
/plugin marketplace add lyndonkl/claude
/plugin install lyndonkl-thinking-frameworks-skills@lyndonkl/claude

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Resources:
- resources/evaluators/rubric_evaluation_rubrics.json
- resources/methodology.md
- resources/template.md

Evaluation Rubrics provide explicit criteria and performance scales to assess quality consistently, fairly, and transparently. This skill guides you through rubric design, from identifying meaningful criteria to writing clear performance descriptors, to enable objective evaluation, reduce bias, align teams on standards, and give actionable feedback.
Use this skill when:
- You need explicit quality criteria and scoring scales to evaluate work consistently
- You want to compare alternatives objectively or set acceptance thresholds
- You want to reduce subjective bias and align a team on quality standards
Trigger phrases: "rubric", "scoring criteria", "evaluation framework", "quality standards", "how do we grade this", "what does good look like", "consistent assessment", "inter-rater reliability"
An evaluation rubric is a structured scoring tool with explicit evaluation criteria, a performance scale, and descriptors of what performance looks like at each level.
Core benefits: consistent and fair scoring, reduced subjective bias, shared quality standards across reviewers, and actionable feedback for the people being evaluated.
Quick example:
Scenario: Evaluating technical blog posts
Rubric (1-5 scale):
| Criterion | 1 (Poor) | 3 (Adequate) | 5 (Excellent) |
|---|---|---|---|
| Technical Accuracy | Multiple factual errors, misleading | Mostly correct, minor inaccuracies | Fully accurate, technically rigorous |
| Clarity | Confusing, jargon-heavy, poor structure | Clear to experts, some structure | Accessible to target audience, well-organized |
| Practical Value | No actionable guidance, theoretical only | Some examples, limited applicability | Concrete examples, immediately applicable |
| Originality | Rehashes common knowledge, no new insight | Some fresh perspective, builds on existing | Novel approach, advances understanding |
Scoring: Post A scores [4, 5, 3, 2] = 3.5 avg. Post B scores [5, 4, 5, 4] = 4.5 avg → Post B higher quality.
Feedback for Post A: "Strong clarity (5) and good accuracy (4), but needs more practical examples (3) and offers less original insight (2). Add code samples and explore edge cases to improve."
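The arithmetic behind those averages is easy to automate. A minimal sketch in Python, using the criterion names and scores from the example above (the helper function is illustrative, not part of this skill's resources):

```python
# Per-criterion scores from the blog-post example; the average is unweighted.
criteria = ["Technical Accuracy", "Clarity", "Practical Value", "Originality"]
scores = {
    "Post A": [4, 5, 3, 2],
    "Post B": [5, 4, 5, 4],
}

def average(values):
    """Unweighted mean of per-criterion scores."""
    return sum(values) / len(values)

for post, values in scores.items():
    breakdown = dict(zip(criteria, values))
    print(f"{post}: {breakdown} -> avg {average(values):.1f}")
# Post A averages 3.5 and Post B averages 4.5, so Post B scores higher overall,
# while the per-criterion breakdown preserves the detail used in the feedback.
```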
Copy this checklist and track your progress:
Rubric Development Progress:
- [ ] Step 1: Define purpose and scope
- [ ] Step 2: Identify evaluation criteria
- [ ] Step 3: Design the scale
- [ ] Step 4: Write performance descriptors
- [ ] Step 5: Test and calibrate
- [ ] Step 6: Use and iterate
Step 1: Define purpose and scope
Clarify what you're evaluating, who evaluates, who uses results, what decisions depend on scores. See resources/template.md for scoping questions.
Step 2: Identify evaluation criteria
Brainstorm quality dimensions, prioritize most important/observable, balance coverage vs. simplicity (4-8 criteria typical). See resources/template.md for brainstorming framework.
Step 3: Design the scale
Choose number of levels (1-5, 1-4, 1-10), scale type (numeric, qualitative), anchors (what does each level mean?). See resources/methodology.md for scale selection guidance.
Step 4: Write performance descriptors
For each criterion × level, write observable description of what that performance looks like. See resources/template.md for writing guidelines.
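To show how Steps 2-4 fit together, here is one possible way to capture criteria, scale, and descriptors as data. The structure is hypothetical and the descriptors are copied from the blog-post example above; resources/template.md remains the authoritative format:

```python
# Illustrative only: a rubric captured as criteria x levels, with observable
# descriptors at the anchor levels (1, 3, 5) on a 1-5 scale.
rubric = {
    "scale": [1, 2, 3, 4, 5],
    "criteria": {
        "Technical Accuracy": {
            1: "Multiple factual errors, misleading",
            3: "Mostly correct, minor inaccuracies",
            5: "Fully accurate, technically rigorous",
        },
        "Clarity": {
            1: "Confusing, jargon-heavy, poor structure",
            3: "Clear to experts, some structure",
            5: "Accessible to target audience, well-organized",
        },
    },
}
```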
Step 5: Test and calibrate
Have multiple reviewers score sample work, compare scores, discuss discrepancies, refine rubric. See resources/methodology.md for inter-rater reliability testing.
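As an illustration of what a calibration check might look like, a minimal sketch that computes percent agreement and Cohen's kappa for two reviewers scoring the same samples (the scores are invented; kappa is one of the agreement measures mentioned under critical requirements below):

```python
from collections import Counter

# Invented scores: two reviewers rating the same five work samples on one criterion.
reviewer_a = [4, 3, 5, 2, 4]
reviewer_b = [4, 3, 4, 2, 4]

def percent_agreement(a, b):
    """Fraction of samples where both reviewers gave the exact same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance, based on each reviewer's score distribution."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_expected = sum(counts_a[s] * counts_b[s] for s in set(a) | set(b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

print(f"percent agreement: {percent_agreement(reviewer_a, reviewer_b):.0%}")  # 80%
print(f"Cohen's kappa:     {cohens_kappa(reviewer_a, reviewer_b):.2f}")       # ~0.71
```

Low agreement usually points to ambiguous descriptors rather than bad reviewers; refining those descriptors is what the discrepancy discussion should produce.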
Step 6: Use and iterate
Apply rubric, collect feedback from evaluators and evaluatees, revise criteria/descriptors as needed. Validate using resources/evaluators/rubric_evaluation_rubrics.json. Minimum standard: Average score ≥ 3.5.
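For the validation step, a minimal sketch of checking scores against the minimum standard (the criterion names are placeholders; the real criteria and descriptors live in resources/evaluators/rubric_evaluation_rubrics.json):

```python
# Hypothetical scores from applying the evaluator rubric to your finished rubric;
# the actual criteria come from resources/evaluators/rubric_evaluation_rubrics.json.
evaluator_scores = {"criterion_1": 4, "criterion_2": 3, "criterion_3": 4}

average = sum(evaluator_scores.values()) / len(evaluator_scores)
meets_standard = average >= 3.5  # minimum standard from Step 6
print(f"average {average:.2f} -> meets minimum standard: {meets_standard}")
```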
Pattern 1: Analytic Rubric (Most Common)
Pattern 2: Holistic Rubric
Pattern 3: Single-Point Rubric
Pattern 4: Checklist (Binary)
Pattern 5: Standards-Based Rubric
Critical requirements:
Criteria must be observable and measurable: Not "good attitude" (subjective), but "arrives on time, volunteers for tasks, helps teammates" (observable). Vague criteria lead to unreliable scoring. Test: Can two independent reviewers score this criterion consistently?
Descriptors must distinguish levels clearly: Each level should have concrete differences from adjacent levels (not just "better" or "more"). Avoid: "5 = very good, 4 = good, 3 = okay". Better: "5 = zero bugs, meets all requirements; 4 = 1-2 minor bugs, meets 90% of requirements; 3 = 3+ bugs or a missing key feature".
Use appropriate scale granularity: 1-3 too coarse (hard to differentiate), 1-10 too fine (false precision, hard to define all levels). Sweet spot: 1-4 (forced choice, no middle) or 1-5 (allows neutral middle). Match granularity to actual observable differences.
Balance comprehensiveness with simplicity: More criteria = more detailed feedback but longer to use. Aim for 4-8 criteria covering essential quality dimensions. If >10 criteria, consider grouping or prioritizing.
Calibrate for inter-rater reliability: Have multiple reviewers score same work, measure agreement (Kappa, ICC). If <70% agreement, refine descriptors. Schedule calibration sessions where reviewers discuss discrepancies.
Provide examples at each level: Abstract descriptors are ambiguous. Include concrete examples of work at each level (anchor papers, reference designs, code samples) to calibrate reviewers.
Make rubric accessible before evaluation: If evaluatees see rubric only after being scored, it's just grading not guidance. Share rubric upfront so people know expectations and can self-assess.
Weight criteria appropriately: Not all criteria equally important. If "Security" matters more than "Code style", weight it (Security ×3, Style ×1). Or use thresholds (must score ≥4 on Security to pass, regardless of other scores).
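To make the last point concrete, a minimal sketch of weighted scoring combined with a hard threshold (the criteria, weights, and scores are invented for illustration):

```python
# Invented example: Security weighted 3x, with a pass gate independent of the average.
weights = {"Security": 3, "Correctness": 2, "Code style": 1}
scores  = {"Security": 5, "Correctness": 4, "Code style": 3}

weighted_avg = sum(weights[c] * scores[c] for c in weights) / sum(weights.values())
security_gate = scores["Security"] >= 4  # must score >= 4 on Security to pass at all

print(f"weighted average: {weighted_avg:.2f}, security gate passed: {security_gate}")
```

A threshold like the security gate keeps a strong overall average from masking a failing score on a must-have criterion.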
Common pitfalls:
Key resources:
- resources/template.md: scoping questions, criteria brainstorming framework, and descriptor-writing guidelines
- resources/methodology.md: scale selection guidance and inter-rater reliability testing
- resources/evaluators/rubric_evaluation_rubrics.json: validation rubric for the finished evaluation-rubrics.md (minimum standard: average score ≥ 3.5)
Scale Selection Guide:
| Scale | Use When | Pros | Cons |
|---|---|---|---|
| 1-3 | Need quick categorization, clear tiers | Fast, forces clear decision | Too coarse, less feedback |
| 1-4 | Want forced choice (no middle) | Avoids central tendency, clear differentiation | No neutral option, feels binary |
| 1-5 | General purpose, most common | Allows neutral, familiar, good granularity | Central tendency bias (everyone gets 3) |
| 1-10 | Need fine gradations, large sample | Maximum differentiation, statistical analysis | False precision, hard to distinguish adjacent levels |
| Qualitative (Novice/Proficient/Expert) | Educational, skill development | Intuitive, growth-oriented | Less quantitative, harder to aggregate |
| Binary (Yes/No, Pass/Fail) | Compliance, gatekeeping | Objective, simple | No gradations, misses quality differences |
Criteria Types:
Inter-Rater Reliability Benchmarks:
Typical Rubric Development Time:
When to escalate beyond rubrics:
Inputs required:
Outputs produced:
evaluation-rubrics.md: Purpose, criteria definitions, scale with descriptors, usage instructions, weighting/thresholds, calibration notes