Simulate peer review of a paper. Use when you want to anticipate reviewer concerns, identify weaknesses, or get a simulated review with scores before submission. Trigger whenever the user asks to review a paper, check for weaknesses, simulate what reviewers would say, do a pre-submission check, stress-test a draft, or get feedback on a manuscript — even informally like "what would reviewers think of this" or "is this ready to submit."
Install: `npx claudepluginhub jasonbian97/jason-cc-skills --plugin academic-writing`

This skill uses the workspace's default tool permissions.
Produce a structured simulated review that anticipates real reviewer concerns, identifies weaknesses, and provides actionable improvement suggestions.
Before reviewing, read references/reviewer-guidelines.md for reviewer evaluation criteria and guidelines.
Choose the appropriate review mode based on what the user needs:

Pre-submission review. Goal: help strengthen the paper before submission. Balanced and constructive; identifies fixable weaknesses. Use when the user says things like "review my paper," "is this ready to submit," or "what should I improve."

Stress-test review. Goal: simulate the harshest plausible reviewer. Find every weakness, question every assumption, and demand more evidence. Use when the user explicitly asks to stress-test (e.g., "what's the worst a reviewer could say," "tear this apart," "find every weakness").

The review format is the same for both modes; the difference is how aggressively you probe for weaknesses and how high you set the bar.
Read the paper end-to-end before forming judgments. Note first impressions but reserve scoring until after careful analysis. Evaluate along these dimensions:
| Dimension | What to Evaluate |
|---|---|
| Quality | Technical soundness, well-supported claims, correct methodology, reproducibility |
| Clarity | Clear writing, logical flow, enough detail for experts in the field to reproduce the work |
| Significance | Community impact, advances understanding, practical or theoretical importance |
| Originality | New insights, methods, or perspectives (doesn't require entirely new method) |
Systematically check for:
| Weakness Type | What to Look For |
|---|---|
| Overclaiming | Claims not supported by evidence; gap between abstract/intro claims and actual results |
| Weak baselines | Missing obvious comparisons; outdated baselines; unfair comparison setup |
| Unclear contributions | Can't identify what's novel; contribution buried or vague |
| Missing ablations | No analysis of which components matter; can't tell what drives performance |
| Reproducibility concerns | Missing hyperparameters, code, dataset details, or compute information |
| Writing clarity | Hard to follow; inconsistent terminology; missing signposting |
| Insufficient evaluation | Too few datasets; no error bars; no statistical significance testing |
| Missing limitations | No discussion of when the method fails or its scope |
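If you want to run this weakness scan systematically, for example to log which checks have been applied to a draft, a minimal sketch of the checklist as data follows. The variable and function names are illustrative assumptions, not part of the skill.

```python
# Illustrative sketch: the weakness checklist as data, so a scan can be tracked.
WEAKNESS_CHECKLIST = {
    "Overclaiming": "Are all claims supported by evidence? Do abstract/intro claims match the results?",
    "Weak baselines": "Are obvious comparisons present, current, and fairly configured?",
    "Unclear contributions": "Can the novel contribution be stated in one sentence?",
    "Missing ablations": "Is it clear which components drive performance?",
    "Reproducibility concerns": "Are hyperparameters, code, dataset details, and compute reported?",
    "Writing clarity": "Is the paper easy to follow, with consistent terminology and signposting?",
    "Insufficient evaluation": "Are there enough datasets, error bars, and significance tests?",
    "Missing limitations": "Does the paper discuss when the method fails and its scope?",
}

def remaining_checks(done: set[str]) -> list[str]:
    """Return the checklist items that have not yet been scanned."""
    return [item for item in WEAKNESS_CHECKLIST if item not in done]
```

For example, `remaining_checks({"Overclaiming", "Weak baselines"})` returns the six checks still to run.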
Even in pre-submission mode, consider what a skeptical reviewer would challenge.
Simulated reviews are only useful if they're calibrated to real reviewer behavior. Common pitfalls:
| Pitfall | Why It's Bad | Correction |
|---|---|---|
| Too generous | Authors submit overconfident, get blindsided | Be honest about weaknesses — that's the point of simulation |
| Too harsh | Authors lose confidence in good work | Distinguish major weaknesses (would change recommendation) from minor issues (should fix but not deal-breakers) |
| Focusing on minor issues | Misses the forest for the trees | Lead with the 2-3 things that would most influence a real reviewer's score |
| Ignoring the positive | Real reviews acknowledge strengths | Always identify genuine strengths — this helps authors know what's working |
Rule of thumb: If you can't identify at least 2 genuine strengths, either the paper has fundamental problems or you haven't read it carefully enough.
Structure the simulated review using the following template:

## Summary of Contributions
[1-2 sentences: what the paper claims to contribute, as the reviewer understands it]
## Strengths
- [Strength 1]
- [Strength 2]
- [Strength 3]
(2-4 bullet points, specific and evidence-based)
## Weaknesses
- [Weakness 1 — severity: major/minor]
- [Weakness 2 — severity: major/minor]
- [Weakness 3 — severity: major/minor]
(2-4 bullet points, ranked by severity, with specific suggestions for how to address each)
## Questions for Authors
- [Question 1]
- [Question 2]
(Questions whose answers could change the assessment)
## Overall Assessment
Score: [X/10]
Confidence: [X/5] (1 = not confident, 5 = very confident)
Justification: [2-3 sentences explaining the score]
## Actionable Suggestions
- [Specific improvement 1]
- [Specific improvement 2]
(Concrete steps to address weaknesses before submission)
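For pipelines that want the review as structured data rather than markdown, a minimal sketch of the same template as Python dataclasses follows; the class and field names are assumptions for illustration, not part of the skill.

```python
from dataclasses import dataclass, field

@dataclass
class Weakness:
    description: str
    severity: str       # "major" or "minor"
    suggestion: str     # how to address it

@dataclass
class SimulatedReview:
    summary_of_contributions: str       # 1-2 sentences
    strengths: list[str]                # 2-4 specific, evidence-based points
    weaknesses: list[Weakness]          # 2-4 points, ranked by severity
    questions_for_authors: list[str]    # answers could change the assessment
    score: int                          # 1-10 overall assessment
    confidence: int                     # 1-5
    justification: str                  # 2-3 sentences explaining the score
    actionable_suggestions: list[str] = field(default_factory=list)
```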
Use a 1-10 scale unless the user specifies a different scale:
| Score | Label | Meaning |
|---|---|---|
| 9-10 | Strong Accept | Exceptional work; among the best you'd expect at a top venue |
| 7-8 | Accept | Solid contribution with minor issues; would strengthen the venue |
| 5-6 | Borderline | Has merit but significant weaknesses; could go either way |
| 3-4 | Below threshold | Fundamental issues with claims, methodology, or evaluation |
| 1-2 | Strong Reject | Critical flaws, known results, or ethical concerns |
Confidence scale:
| Score | Meaning |
|---|---|
| 5 | Very confident — deep expertise in this exact area |
| 4 | Confident — familiar with the field and related work |
| 3 | Somewhat confident — know the area but not the specific subfield |
| 2 | Low confidence — outside main expertise |
| 1 | Guessing — very unfamiliar with the topic |
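When post-processing simulated reviews, a small helper can map the numeric scores back to the labels in the tables above. This is an illustrative sketch, not part of the skill.

```python
def score_label(score: int) -> str:
    """Map a 1-10 overall score to its recommendation label."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 9:
        return "Strong Accept"
    if score >= 7:
        return "Accept"
    if score >= 5:
        return "Borderline"
    if score >= 3:
        return "Below threshold"
    return "Strong Reject"

def confidence_label(confidence: int) -> str:
    """Map a 1-5 confidence score to its short description."""
    return {
        5: "Very confident",
        4: "Confident",
        3: "Somewhat confident",
        2: "Low confidence",
        1: "Guessing",
    }[confidence]
```

For example, `score_label(6)` returns "Borderline" and `confidence_label(4)` returns "Confident".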
Before writing the review, do a quick scan against the weakness checklist above.
See references/reviewer-guidelines.md for evaluation guidelines and common concerns.