From thinking-frameworks-skills
Reviews scientific documents for logical clarity, argument soundness, and rigor, auditing hypothesis-data alignment, claim-evidence chains, quantitative precision, hedging, and terminology consistency.
Install with `npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`. This skill uses the workspace's default tool permissions.
Core Principles
1. Claims must match evidence: Every conclusion needs explicit support
2. Precision over vagueness: Quantify wherever possible
3. Hedging matches certainty: Strong claims need strong evidence
4. Logic must flow: Arguments should be traceable step by step
5. Terminology must be consistent: Same concept = same word
6. Mechanistic clarity: The "how" should be explained, not just the "what"
Copy this checklist and track your progress:
Clarity Check Progress:
- [ ] Step 1: Identify core claims and hypotheses
- [ ] Step 2: Structural logic review (argument flow)
- [ ] Step 3: Claims-evidence audit
- [ ] Step 4: Quantitative precision check
- [ ] Step 5: Terminology consistency audit
- [ ] Step 6: Hedging calibration
- [ ] Step 7: Mechanistic clarity check
Step 1: Identify Core Claims
List all major claims, conclusions, and hypotheses in the document. These are what the author wants readers to believe after reading. Every claim needs to be evaluated. See resources/methodology.md for claim extraction.
Step 2: Structural Logic Review
Map the argument structure: What premises lead to what conclusions? Are all logical steps explicit? Are there gaps in the reasoning chain? See resources/methodology.md for logic mapping.
Step 3: Claims-Evidence Audit
For each claim: What evidence supports it? Is the evidence presented in this document or only cited? Does the evidence actually support the claim? Flag overclaiming. See resources/template.md for audit format.
Step 4: Quantitative Precision Check
Look for vague quantifiers ("some", "many", "significant increase"). Check for missing statistics, n values, confidence intervals. Flag qualitative descriptions that should be quantitative. See resources/template.md for checklist.
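A first pass of Step 4 can be mechanized. The sketch below scans text for common vague quantifiers; the word list is illustrative only and should be extended for the document's domain:

```python
import re

# Illustrative list of vague quantifiers to flag; extend for your domain.
VAGUE_TERMS = [
    "some", "many", "several", "often", "significant increase",
    "substantially", "extended period", "high concentration",
]

def flag_vague_quantifiers(text: str) -> list[tuple[str, int]]:
    """Return (term, count) pairs for each vague term found in the text."""
    hits = []
    for term in VAGUE_TERMS:
        count = len(re.findall(rf"\b{re.escape(term)}\b", text, re.IGNORECASE))
        if count:
            hits.append((term, count))
    return hits
```

A hit is only a prompt for review, not proof of a problem: "some" may be fine in narrative prose but not in a results statement.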
Step 5: Terminology Consistency Audit
Check that terms are used consistently throughout. Verify abbreviations are defined before use. Ensure technical terms are appropriate for audience. See resources/methodology.md for audit process.
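Step 5 can likewise be assisted with a simple variant tally. The synonym groups below are hypothetical placeholders; they must be populated from the document under review:

```python
from collections import Counter

# Hypothetical synonym groups: each maps variant wordings of one concept
# to a canonical term. Populate from the document under review.
SYNONYM_GROUPS = {
    "knockdown": ["knockdown", "silencing", "depletion"],
}

def audit_terminology(text: str) -> dict[str, Counter]:
    """Count how often each variant of a concept appears; more than one
    nonzero variant per concept signals inconsistent terminology."""
    lowered = text.lower()
    report = {}
    for canonical, variants in SYNONYM_GROUPS.items():
        counts = Counter({v: lowered.count(v) for v in variants})
        if sum(1 for c in counts.values() if c > 0) > 1:
            report[canonical] = counts
    return report
```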
Step 6: Hedging Calibration
Match hedge strength to evidence strength. "Demonstrates" needs strong evidence; "suggests" allows weaker evidence. Flag overclaiming (strong words, weak evidence) and underclaiming (weak words, strong evidence). See resources/methodology.md for calibration.
Step 7: Mechanistic Clarity Check
Where explanations of "how" are needed, are they provided? Are mechanisms speculative or evidence-based? Is the level of mechanistic detail appropriate? Validate using resources/evaluators/rubric_clarity.json. Minimum standard: Average score ≥ 3.5.
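The rubric file's schema is not shown in this document; assuming it holds a list of criteria each carrying a numeric score, the minimum-standard check might look like:

```python
import json

def passes_clarity_standard(rubric_path: str, minimum: float = 3.5) -> bool:
    """Average the per-criterion scores and compare to the minimum standard.
    Assumes a schema like {"criteria": [{"name": ..., "score": ...}, ...]};
    adapt to the actual rubric_clarity.json layout.
    """
    with open(rubric_path) as f:
        rubric = json.load(f)
    scores = [c["score"] for c in rubric["criteria"]]
    return sum(scores) / len(scores) >= minimum
```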
For each major claim, trace the chain:
CLAIM: [What the author asserts]
↓
EVIDENCE TYPE: [Data/Citation/Logic/Authority]
↓
EVIDENCE: [What supports this claim]
↓
EVALUATION: [Strong/Moderate/Weak/Missing]
↓
ISSUES: [If any - overclaiming, logical gap, etc.]
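The chain above can be captured as one small record per claim. The field names here are my own, not part of the skill's resources:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimAudit:
    claim: str           # what the author asserts
    evidence_type: str   # "data" | "citation" | "logic" | "authority"
    evidence: str        # what supports the claim
    evaluation: str      # "strong" | "moderate" | "weak" | "missing"
    issues: list[str] = field(default_factory=list)  # e.g. "overclaiming"

    def needs_revision(self) -> bool:
        """A claim needs revision if its support is weak/missing or issues were flagged."""
        return self.evaluation in ("weak", "missing") or bool(self.issues)
```

Collecting these records makes the audit sortable: weak and missing evaluations surface first.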
Map argument structure:
PREMISE 1: [Starting assumption or fact]
+
PREMISE 2: [Additional assumption or fact]
↓
INFERENCE: [Logical step taken]
↓
CONCLUSION: [What follows from inference]
↓
VALIDITY CHECK: [Does conclusion follow from premises?]
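The argument map can be given the same treatment. This sketch only does structural bookkeeping (are premises and the inference stated at all?); whether the conclusion actually follows remains a human judgment:

```python
from dataclasses import dataclass

@dataclass
class ArgumentStep:
    premises: list[str]
    inference: str
    conclusion: str

    def gaps(self) -> list[str]:
        """Flag structural gaps: a conclusion with no stated premises or
        no explicit inference step. Soundness still needs human review."""
        problems = []
        if not self.premises:
            problems.append("conclusion has no stated premises")
        if not self.inference.strip():
            problems.append("inference step is implicit")
        return problems
```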
Examples of vague vs. precise phrasing:
| Type | Vague (needs fix) | Precise (good) |
|---|---|---|
| Magnitude | "Large increase" | "3.5-fold increase" |
| Frequency | "Often occurs" | "Occurs in 75% of cases" |
| Comparison | "Higher than control" | "2.1x higher (p<0.01)" |
| Sample | "Multiple experiments" | "n=6 biological replicates" |
| Time | "Extended period" | "14-day treatment" |
| Concentration | "High concentration" | "10 µM" |
Hedge words by evidence level:
| Evidence Level | Appropriate Hedge Words |
|---|---|
| Direct, replicated, mechanistic | demonstrates, establishes, proves |
| Strong indirect or correlational | shows, indicates, reveals |
| Moderate, single study | suggests, supports, is consistent with |
| Limited or preliminary | may, might, could, appears to |
| Speculation beyond data | conceivably, potentially, we speculate |
Overclaiming
Pattern: Strong conclusion words with weak evidence
Example: "proves the mechanism" backed by a single correlational study
Fix: Match hedge strength to evidence or add qualifying statements
Hidden premise
Pattern: Conclusion requires unstated premise
Example: "expression rose, so the treatment works" silently assumes the rise is treatment-specific
Fix: Make implicit premises explicit or acknowledge limitations
Vague quantification
Pattern: Qualitative language where numbers exist
Example: "markedly higher" where the figure reports an exact fold change
Fix: Replace with specific numbers
Inconsistent terminology
Pattern: Same concept, different words (or vice versa)
Example: "knockdown", "silencing", and "depletion" used interchangeably for one perturbation
Fix: Standardize terminology; create a consistency table
Missing mechanism
Pattern: "What" without "how"
Example: "treatment reduced inflammation" with no account of the pathway involved
Fix: Add mechanistic explanation or acknowledge it's unknown