Guides systematic multi-pass review and editing of scientific manuscripts (research articles, reviews, perspectives) to improve clarity, structure, scientific rigor, and reader comprehension. Useful for paper drafts, journal submissions, or reviewer feedback.
Install via:

```shell
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
```

This skill uses the workspace's default tool permissions.
Related skills: Grant proposals → grant-proposal-assistant | Recommendation letters → academic-letter-architect | Emails → scientific-email-polishing
Seven foundational beliefs guiding manuscript review:
Copy this checklist and track your progress:
Manuscript Review Progress:
- [ ] Step 1: Identify manuscript type and extract core message
- [ ] Step 2: Structural pass - map and evaluate overall organization
- [ ] Step 3: Introduction review - gap statement, focus, hypothesis
- [ ] Step 4: Results review - question, approach, finding, interpretation
- [ ] Step 5: Discussion review - synthesis, context, limitations
- [ ] Step 6: Scientific clarity check - claims, controls, hedging
- [ ] Step 7: Language polish - terminology, voice, jargon
- [ ] Step 8: Formatting check - journal compliance
Step 1: Identify Manuscript Type and Core Message
Determine document type (research article, review, perspective, short communication). Extract the ONE finding or message readers must remember. Ask: "If readers remember only one thing, what should it be?" See resources/methodology.md for extraction techniques.
Step 2: Structural Pass
Map overall organization against standard IMRaD (Introduction, Methods, Results, Discussion) or review structure. Check logical sequencing - does each section flow into the next? Identify unclear transitions or missing context. See resources/methodology.md for structure evaluation.
Step 3: Introduction Review
Evaluate using the Introduction Arc: Broad context → Narrow focus → Knowledge gap → Hypothesis/Objective. Check that gap statement is explicit and compelling. Verify ending with clear hypothesis or objective. See resources/template.md for template.
Step 4: Results Review
For each figure/table/experiment: Question addressed? → Approach used? → Key finding (with statistics)? → Interpretation (what it means)? Flag data-dump writing that lacks interpretation. Ensure findings build toward core message. See resources/template.md for results structure.
Step 5: Discussion Review
Verify structure: Revisit hypothesis → Interpret findings in field context → Place in broader literature → Acknowledge limitations → Suggest future directions. Check for overclaiming (speculation presented as fact). Ensure clear separation of data interpretation vs. speculation. See resources/methodology.md for discussion framework.
Step 6: Scientific Clarity Check
Run the clarity checklist: Claims supported by data? Quantitative details present (statistics, n values)? Controls adequately described? Interpretations appropriately hedged? Mechanistic explanations where needed? See resources/template.md for full checklist.
Step 7: Language Polish
Ensure terminology consistency throughout. Remove or define jargon on first use. Prefer active voice when it aids clarity. Standardize abbreviations. Check for hedging language ("suggests" vs "proves"). See resources/methodology.md for specific guidance.
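The hedging check in Step 7 can be partly mechanized. The sketch below scans a draft for overly assertive verbs and flags the sentences that contain them; the word list is illustrative, not exhaustive, and should be extended per field conventions:

```python
import re

# Illustrative list of verbs that overstate certainty; extend per field.
STRONG_CLAIMS = ["proves", "proved", "confirms", "clearly shows"]

def flag_overclaims(text):
    """Return (sentence, term) pairs where a strong-claim verb appears."""
    flags = []
    # Naive sentence split on end punctuation; adequate for a quick pass.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for term in STRONG_CLAIMS:
            if term in sentence.lower():
                flags.append((sentence.strip(), term))
    return flags

draft = "Our data proves the mechanism. These results suggest a role for X."
for sentence, term in flag_overclaims(draft):
    print(f'Consider hedging "{term}": {sentence}')
```

This only surfaces candidates; whether "proves" is actually overclaiming still requires checking the claim against the data, as in Step 6.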
Step 8: Formatting Check
Verify compliance with target journal guidelines (word limits, reference format, figure requirements). Check section headings match journal requirements. Ensure abstract follows structured/unstructured requirement. Validate using resources/evaluators/rubric_scientific_manuscript.json. Minimum standard: Average score ≥ 3.5.
Goal: Convince readers the problem matters and your approach is sound
The Funnel Structure:
[Broad context - establish field importance, 1-2 sentences]
↓
[Narrow to specific area - what's been done]
↓
[Knowledge gap - what's missing, why it matters]
↓
[Your hypothesis/objective - what you will address]
Common problems:
- Gap statement missing, implicit, or buried mid-paragraph
- Context stays too broad for too long before narrowing
- Introduction ends without an explicit hypothesis or objective
Goal: Present data clearly with interpretation, not just numbers
Per-paragraph/figure structure:
[Question this experiment addresses]
[Approach/method used]
[Key finding - with quantification]
[Brief interpretation - what this means]
Common problems:
- Data-dump writing: numbers reported with no interpretation
- Findings stated without statistics or n values
- Results that do not build toward the core message
Goal: Interpret findings and place in broader context
Standard flow:
[Restate main finding and hypothesis status]
↓
[Interpret key results in field context]
↓
[Compare to prior literature - agreements/disagreements]
↓
[Mechanistic implications (if applicable)]
↓
[Limitations - honest acknowledgment]
↓
[Future directions - what comes next]
↓
[Concluding statement - big picture significance]
Common problems:
- Overclaiming: speculation presented as established fact
- Interpretation and speculation not clearly separated
- Limitations missing or glossed over
Active vs. Passive Voice: Prefer active voice when it aids clarity.
Hedging Language: Match claim strength to the evidence ("suggests" vs. "proves").
Jargon Management: Remove jargon or define it on first use.
Terminology Consistency: Use one term per concept throughout; define abbreviations before first use.
Key requirements:
- Preserve author voice: Edit for clarity, not voice. Avoid inventing claims or changing meaning. Mark suggestions clearly when proposing new content.
- Claims match data: Every conclusion must be supported by presented results. Flag overclaiming immediately. Speculation must be labeled.
- Quantitative rigor: Statistics required for comparisons. Report n values for all experiments. Significance thresholds stated. Variability measures included.
- Logical flow: Each section should flow naturally to the next. Transitions explicit. Conclusions follow from premises.
- Appropriate hedging: Strong claims need strong evidence. Use hedging language proportional to certainty.
- Consistent terminology: Same concept = same term throughout. Abbreviations defined before use.
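The "abbreviations defined before use" requirement can be spot-checked mechanically. This is a rough heuristic sketch that treats `Long Form (ABB)` as the defining pattern; real manuscripts will need a more forgiving matcher:

```python
import re

def undefined_abbreviations(text):
    """Flag abbreviations (2+ capitals) used before a '(ABB)' definition.

    Rough heuristic: 'Long Form (ABB)' is assumed to be the defining
    pattern; anything never defined, or first used earlier, is flagged.
    """
    flagged = []
    for abbr in sorted(set(re.findall(r"\b[A-Z]{2,}\b", text))):
        definition = text.find(f"({abbr})")
        if definition == -1 or text.find(abbr) < definition:
            flagged.append(abbr)
    return flagged
```

For example, on "PCR was used throughout. Polymerase chain reaction (PCR) amplifies DNA." it flags both `PCR` (used before its definition) and `DNA` (never defined).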
Common pitfalls:
Key resources:
Introduction checklist:
- Broad context established in 1-2 sentences?
- Knowledge gap explicit and compelling?
- Ends with a clear hypothesis or objective?
Results checklist:
- Each figure/table tied to a question?
- Key findings quantified with statistics and n values?
- Interpretation given for every finding?
Discussion checklist:
- Hypothesis revisited?
- Findings placed in the broader literature?
- Limitations acknowledged and speculation labeled?
Typical review time:
Inputs required:
Outputs produced: