Use when reviewing any scientific document for logical clarity, argument soundness, and scientific rigor. Invoke when the user asks to check clarity, review logic, assess scientific soundness, verify hypothesis-data alignment, compare claims against evidence, or run a cross-cutting scientific logic review independent of document type.
Resources:
- resources/evaluators/rubric_clarity.json
- resources/methodology.md
- resources/template.md

This skill provides systematic review of scientific clarity and logical rigor across any document type. It focuses on hypothesis-data alignment, argument validity, quantitative precision, and appropriate hedging. Use it as a cross-cutting check that complements document-specific skills.
Use this skill when a document needs a cross-cutting review of scientific logic and rigor.
Trigger phrases: "check scientific clarity", "review the logic", "do claims match data", "scientific rigor check", "hypothesis-data alignment", "is this sound"
Works with all document types.
Core principles:
1. Claims must match evidence: Every conclusion needs explicit support
2. Precision over vagueness: Quantify wherever possible
3. Hedging matches certainty: Strong claims need strong evidence
4. Logic must flow: Arguments should be traceable step by step
5. Terminology must be consistent: Same concept = same word
6. Mechanistic clarity: The "how" should be explained, not just "what"
Copy this checklist and track your progress:
Clarity Check Progress:
- [ ] Step 1: Identify core claims and hypotheses
- [ ] Step 2: Structural logic review (argument flow)
- [ ] Step 3: Claims-evidence audit
- [ ] Step 4: Quantitative precision check
- [ ] Step 5: Terminology consistency audit
- [ ] Step 6: Hedging calibration
- [ ] Step 7: Mechanistic clarity check
Step 1: Identify Core Claims
List all major claims, conclusions, and hypotheses in the document. These are what the author wants readers to believe after reading. Every claim needs to be evaluated. See resources/methodology.md for claim extraction.
Step 2: Structural Logic Review
Map the argument structure: What premises lead to what conclusions? Are all logical steps explicit? Are there gaps in the reasoning chain? See resources/methodology.md for logic mapping.
Step 3: Claims-Evidence Audit
For each claim: What evidence supports it? Is the evidence presented in this document or only cited? Does the evidence actually support the claim? Flag overclaiming. See resources/template.md for audit format.
Step 4: Quantitative Precision Check
Look for vague quantifiers ("some", "many", "significant increase"). Check for missing statistics, n values, confidence intervals. Flag qualitative descriptions that should be quantitative. See resources/template.md for checklist.
Step 5: Terminology Consistency Audit
Check that terms are used consistently throughout. Verify abbreviations are defined before use. Ensure technical terms are appropriate for audience. See resources/methodology.md for audit process.
Step 6: Hedging Calibration
Match hedge strength to evidence strength. "Demonstrates" needs strong evidence; "suggests" allows weaker evidence. Flag overclaiming (strong words, weak evidence) and underclaiming (weak words, strong evidence). See resources/methodology.md for calibration.
Step 7: Mechanistic Clarity Check
Where explanations of "how" are needed, are they provided? Are mechanisms speculative or evidence-based? Is the level of mechanistic detail appropriate? Validate using resources/evaluators/rubric_clarity.json. Minimum standard: Average score ≥ 3.5.
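The Step 7 validation gate can be sketched in code. This is a minimal illustration: the per-dimension scores and the threshold check are assumptions about how the rubric is applied, not taken from resources/evaluators/rubric_clarity.json itself.

```python
import json  # would be used to load the rubric file in practice

def passes_clarity_bar(scores, threshold=3.5):
    """Return True if the average rubric score meets the minimum standard."""
    return sum(scores) / len(scores) >= threshold

# Hypothetical per-dimension scores from applying the rubric.
scores = [4, 3, 4, 4, 3]
print(passes_clarity_bar(scores))  # True: average is 3.6, above the 3.5 bar
```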
For each major claim, trace the chain:
CLAIM: [What the author asserts]
↓
EVIDENCE TYPE: [Data/Citation/Logic/Authority]
↓
EVIDENCE: [What supports this claim]
↓
EVALUATION: [Strong/Moderate/Weak/Missing]
↓
ISSUES: [If any - overclaiming, logical gap, etc.]
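The chain above can be captured as a record while working through the audit. This is a hypothetical sketch: the `ClaimAudit` type and its field names are illustrative, not defined by the skill's resources.

```python
from dataclasses import dataclass, field

@dataclass
class ClaimAudit:
    """One claim-evidence chain, mirroring the template above."""
    claim: str
    evidence_type: str   # "Data", "Citation", "Logic", or "Authority"
    evidence: str
    evaluation: str      # "Strong", "Moderate", "Weak", or "Missing"
    issues: list = field(default_factory=list)

    def needs_flag(self):
        """Flag claims whose support is Weak or Missing for follow-up."""
        return self.evaluation in ("Weak", "Missing")

audit = ClaimAudit(
    claim="Drug X reduces inflammation",
    evidence_type="Data",
    evidence="Single in vitro assay, n=3",
    evaluation="Weak",
    issues=["overclaiming"],
)
print(audit.needs_flag())  # True
```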
Map argument structure:
PREMISE 1: [Starting assumption or fact]
+
PREMISE 2: [Additional assumption or fact]
↓
INFERENCE: [Logical step taken]
↓
CONCLUSION: [What follows from inference]
↓
VALIDITY CHECK: [Does conclusion follow from premises?]
Common logical issues are cataloged as the pattern/fix entries later in this document. First, replace vague quantifiers with precise values:
| Type | Vague (Avoid) | Precise (Use) |
|---|---|---|
| Magnitude | "Large increase" | "3.5-fold increase" |
| Frequency | "Often occurs" | "Occurs in 75% of cases" |
| Comparison | "Higher than control" | "2.1x higher (p<0.01)" |
| Sample | "Multiple experiments" | "n=6 biological replicates" |
| Time | "Extended period" | "14-day treatment" |
| Concentration | "High concentration" | "10 µM" |
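A first pass over vague quantifiers like those in the table can be automated. This is a sketch only: the term list and the `flag_vague_quantifiers` helper are illustrative assumptions, and flagged phrases still need human review (e.g., "statistically significant" accompanied by a p-value is fine).

```python
import re

# Illustrative vague quantifiers; extend this list using the table above.
VAGUE_TERMS = [
    r"\blarge increase\b", r"\boften\b", r"\bmultiple experiments\b",
    r"\bextended period\b", r"\bhigh concentration\b", r"\bsignificant(ly)?\b",
]

def flag_vague_quantifiers(text):
    """Return vague phrases found in text, in term-list order, for review."""
    hits = []
    for pattern in VAGUE_TERMS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
    return hits

print(flag_vague_quantifiers(
    "We observed a large increase after an extended period of treatment."
))  # ['large increase', 'extended period']
```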
Calibrate hedge words to the strength of the evidence:
| Evidence Level | Appropriate Hedge Words |
|---|---|
| Direct, replicated, mechanistic | demonstrates, establishes, proves |
| Strong indirect or correlational | shows, indicates, reveals |
| Moderate, single study | suggests, supports, is consistent with |
| Limited or preliminary | may, might, could, appears to |
| Speculation beyond data | conceivably, potentially, we speculate |
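The calibration table can be expressed as a lookup for a quick consistency check. The numeric tiers and the `check_hedge` helper are assumptions for illustration, not part of the skill's resources.

```python
# Hedge words mapped to the minimum evidence tier they require,
# following the table above (4 = direct/replicated ... 1 = limited/preliminary).
HEDGE_TIERS = {
    "demonstrates": 4, "establishes": 4, "proves": 4,
    "shows": 3, "indicates": 3, "reveals": 3,
    "suggests": 2, "supports": 2,
    "may": 1, "might": 1, "could": 1,
}

def check_hedge(verb, evidence_level):
    """Return True if the hedge word is justified by the evidence level."""
    required = HEDGE_TIERS.get(verb.lower())
    if required is None:
        return True  # unknown verb: no opinion, pass through
    return evidence_level >= required

print(check_hedge("demonstrates", 2))  # False: overclaiming
print(check_hedge("suggests", 2))      # True: hedge matches evidence
```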
Overclaiming
Pattern: Strong conclusion words with weak evidence
Example: writing "proves the mechanism" on the basis of a single correlational study
Fix: Match hedge strength to evidence or add qualifying statements
Hidden assumption
Pattern: Conclusion requires unstated premise
Example: concluding in vivo relevance from in vitro data without stating the bridging assumption
Fix: Make implicit premises explicit or acknowledge limitations
Vague quantification
Pattern: Qualitative language where numbers exist
Example: reporting a "marked increase" when the measured fold change is available
Fix: Replace with specific numbers
Inconsistent terminology
Pattern: Same concept, different words (or vice versa)
Example: alternating between "knockdown" and "silencing" for the same intervention
Fix: Standardize terminology; create a consistency table
Missing mechanism
Pattern: "What" without "how"
Example: stating that treatment X reduces marker Y without proposing or citing a mechanism
Fix: Add a mechanistic explanation or acknowledge that it is unknown
Critical requirements:
What this skill does NOT do:
Focus areas:
Key resources:
Quick checks:
Red flags to look for:
Time estimates:
Inputs required:
Outputs produced: