This skill should be used when the user asks to "review a grant", "review my proposal", "score this grant", "evaluate my specific aims", "critique my research strategy", "review as an NIH reviewer", "review as an NSF panelist", "give me reviewer feedback", "check my grant proposal", "review my R01", "review my K99", "evaluate my CAREER proposal", "run a mock study section", "review my resubmission", "review this PDF", "check my proposal PDF", "analyze my grant layout", or mentions grant review, proposal critique, NIH scoring, NSF panel review, study section feedback, or proposal PDF review.
Install: `npx claudepluginhub neuromechanist/research-skills --plugin grant`

This skill uses the workspace's default tool permissions.
Provides structured, actionable grant proposal review in the style of NIH study section or NSF panel reviewers. Evaluates proposals against official scoring criteria and produces feedback from the perspective of senior researchers on scientific review panels.
Activate when the user wants feedback on a grant proposal (specific aims, research strategy, project description) evaluated against NIH or NSF review criteria. The output is a structured review with scores, strengths, weaknesses, and prioritized actionable improvements.
Determine whether the proposal is NIH or NSF, and which mechanism (R01, R21, DP2, CAREER, etc.). This determines the review criteria, scoring system, and expectations.
When the mechanism is not specified in the proposal or by the user, infer from document structure: presence of Specific Aims indicates NIH; a Project Summary with separate Intellectual Merit and Broader Impacts sections indicates NSF. If the mechanism remains ambiguous, ask the user before proceeding.
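The inference heuristic above can be sketched in code. This is illustrative only; the function name and the exact heading strings are assumptions, not part of the skill:

```python
def infer_agency(text: str) -> str:
    """Illustrative heuristic: guess the funding agency from section headings."""
    t = text.lower()
    if "specific aims" in t:
        return "NIH"
    if "intellectual merit" in t and "broader impacts" in t:
        return "NSF"
    return "unknown"  # ambiguous: ask the user before proceeding
```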
Use the following table to select the appropriate criteria reference:
| Mechanism | Criteria Reference |
|---|---|
| R01, R21, R03, R15, DP2 | references/nih-review-criteria.md |
| K99/R00, K08, K23 | references/nih-career-training-criteria.md |
| F31, F32 | references/nih-career-training-criteria.md |
| T32 | references/nih-career-training-criteria.md |
| NSF Standard, CAREER, RAPID, EAGER | references/nsf-review-criteria.md |
| Unknown | Infer from document structure or ask the user |
Handle the proposal based on its format:
Markdown or LaTeX: Read directly; no conversion needed.
PDF: Use a two-track approach:
Text extraction -- Read the PDF for content review. Use the Read tool directly on the PDF (Claude can read PDFs natively). For large PDFs (>10 pages), read in page ranges (e.g., pages: "1-10", then pages: "11-20"). For math-heavy or complex-layout PDFs where native reading struggles, convert to markdown via opencite:
uvx opencite convert proposal.pdf -o proposal.md
Visual layout analysis -- Convert each page to PNG for figure sizing and space utilization review:
uv run --with pdf2image --with pillow python -c "
from pdf2image import convert_from_path
pages = convert_from_path('proposal.pdf', dpi=150)
for i, page in enumerate(pages):
    page.save(f'proposal_page_{i+1}.png', 'PNG')
"
Note: pdf2image requires poppler as a system dependency (brew install poppler on macOS, apt install poppler-utils on Linux). If poppler is not available, use pdftoppm -png -r 150 proposal.pdf proposal_page directly, or fall back to reading the PDF natively with the Read tool.
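The preferred path and the fallback can be combined into one helper. This is an illustrative sketch, not part of the skill; the function name and output-file naming convention are assumptions:

```python
import shutil
import subprocess

def render_pages(pdf_path: str, dpi: int = 150) -> None:
    """Sketch: render each PDF page to PNG, preferring pdf2image and
    falling back to poppler's pdftoppm CLI if pdf2image fails."""
    try:
        from pdf2image import convert_from_path  # requires poppler underneath
        for i, page in enumerate(convert_from_path(pdf_path, dpi=dpi), start=1):
            page.save(f"proposal_page_{i}.png", "PNG")
    except Exception:
        if shutil.which("pdftoppm"):
            subprocess.run(
                ["pdftoppm", "-png", "-r", str(dpi), pdf_path, "proposal_page"],
                check=True,
            )
        else:
            raise RuntimeError(
                "Neither pdf2image nor pdftoppm is available; "
                "read the PDF natively with the Read tool instead"
            )
```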
Read each page image to assess figure sizing and legibility, white space and text density, margin use, and overall layout quality.
Include space utilization observations in the review output under a "Layout and Space Utilization" section.
Read all submitted sections. For NIH proposals, this typically includes the Specific Aims page, the Research Strategy (Significance, Innovation, Approach), and any biosketches or supporting documents provided. For NSF proposals, this typically includes the Project Summary (Overview, Intellectual Merit, Broader Impacts) and the Project Description, plus any supplementary documents provided.
NIH Scoring (1-9 scale):
For each of the five review criteria, assign a score and provide justification:
| Criterion | Key Questions |
|---|---|
| Significance | Is the problem important? Will the field advance? |
| Investigator(s) | Is the team qualified? Sufficient preliminary data? |
| Innovation | Are concepts/methods novel? Does it challenge the status quo? |
| Approach | Is the design rigorous? Are methods appropriate? Feasibility? |
| Environment | Does the institution support the work? |
Score descriptors: 1 Exceptional, 2 Outstanding, 3 Excellent, 4 Very Good, 5 Good, 6 Satisfactory, 7 Fair, 8 Marginal, 9 Poor (1-3 high impact, 4-6 medium impact, 7-9 low impact).
NSF Rating: assign one of the five summary ratings -- Excellent, Very Good, Good, Fair, or Poor -- with a justification that addresses both Intellectual Merit and Broader Impacts.
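The standard NIH descriptor bands and NSF ratings can be captured in a minimal lookup; the constant and function names below are illustrative assumptions:

```python
NIH_DESCRIPTORS = {
    1: "Exceptional", 2: "Outstanding", 3: "Excellent",
    4: "Very Good", 5: "Good", 6: "Satisfactory",
    7: "Fair", 8: "Marginal", 9: "Poor",
}

NSF_RATINGS = ("Excellent", "Very Good", "Good", "Fair", "Poor")

def nih_band(score: int) -> str:
    """Map a 1-9 NIH criterion score to its impact band."""
    if not 1 <= score <= 9:
        raise ValueError("NIH scores run from 1 (best) to 9 (worst)")
    return "High" if score <= 3 else "Medium" if score <= 6 else "Low"
```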
Synthesize across criteria to reach an overall assessment. Focus on the major strengths and weaknesses most likely to drive the overall impact score or panel rating, on whether weaknesses are fixable in revision or fundamental to the design, and on feasibility risks that cut across aims. Structure the output as described below.
Structure the review output according to the appropriate agency template in references/review-output-templates.md. Both NIH and NSF templates follow this general structure: an overall summary with the score or rating, a criterion-by-criterion assessment listing strengths and weaknesses, and a prioritized list of actionable improvements.
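One way to represent a single criterion's assessment programmatically is a small record type; this is purely illustrative and not part of the templates:

```python
from dataclasses import dataclass, field

@dataclass
class CriterionReview:
    """Illustrative container for one criterion's assessment."""
    name: str
    score: object  # int 1-9 for NIH, rating string for NSF
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
```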
For a complete worked example, see examples/sample-nih-r01-review.md.
Adopt the viewpoint of a senior researcher on a study section or review panel: be candid but constructive, calibrate scores realistically rather than inflating them, ground each criticism in a specific section or page of the proposal, and distinguish major weaknesses that drive the score from minor, easily fixed issues.
For common reviewer comments and their meanings, consult references/review-best-practices.md.
- references/nih-review-criteria.md - Complete NIH review criteria, scoring rubric, and study section process
- references/nih-career-training-criteria.md - Review criteria for K, F, and T32 mechanisms
- references/nsf-review-criteria.md - Complete NSF review criteria and panel process
- references/review-best-practices.md - Best practices from experienced reviewers, common reviewer comments, and calibration guidance
- references/review-output-templates.md - NIH and NSF review output format templates
- examples/sample-nih-r01-review.md - Complete example review demonstrating expected format, tone, and scoring calibration