Run a rigorous pre-submission peer review of a neuroscience manuscript. Use this skill whenever the user asks to review, critique, or give feedback on a neuroscience paper, draft manuscript, or scientific write-up — whether they say "review my paper", "check my manuscript before submission", "act as a referee for my paper", "give me feedback on this neuroscience paper", or simply upload/paste a neuroscience manuscript and ask for comments. Also trigger when the user mentions fMRI, EEG, connectivity analysis, computational neuroscience, brain dynamics, or related topics in the context of evaluating written work. Covers all neuroscience subfields including systems, cognitive, computational, clinical, and complex-network approaches.
From the neuroflow plugin (install: `npx claudepluginhub stanislavjiricek/neuroflow --plugin neuroflow`). This skill uses the workspace's default tool permissions.
Perform a rigorous eight-area pre-submission review of a neuroscience manuscript using eight parallel specialist agents. Areas: language, internal consistency, claim validity, statistics, methods reproducibility, contribution novelty, literature gap, and figure review. Produces a single consolidated report with scope annotation.
Applicable to any neuroscience manuscript.
The user can provide the manuscript in any supported form (e.g. pasted text, an uploaded PDF, or a `.tex` file).

Optionally, the user can specify a target journal (e.g. "review as if for eLife", "use a NeuroImage referee persona"). If none is specified, apply high general standards.
| Abbrev | Editorial persona |
|---|---|
| NatNeurosci | Nature Neuroscience: demands clear conceptual advance, causal evidence, broad relevance |
| NatComms | Nature Communications: high technical standard, multidisciplinary, reproducibility |
| Neuron | Cell Press / Neuron: mechanistic insight or landmark computational contribution |
| eLife | eLife: open-science champion; data/code deposit expected; transparent reporting |
| JNeurosci | Journal of Neuroscience: rigorous methods, clear controls, reproducible |
| PNAS | PNAS: broad significance across disciplines |
| NeuroImage | NeuroImage: fMRI/EEG/MEG technical standards; COBIDAS compliance |
| HBM | Human Brain Mapping: connectivity statistical rigour; OpenNeuro data encouraged |
| CerebCortex | Cerebral Cortex: mechanistic neuroscience, solid anatomy, strong methods |
| ClinNeurophysiol | Clinical Neurophysiology: clinical relevance, validated biomarkers |
| Epilepsia | Epilepsia: epilepsy-specific methodology, ILAE terminology |
| Psychophysiology | Psychophysiology: psychophysiological methods rigour, effect sizes |
| NetworkNeuro | Network Neuroscience: graph-theory methodology, null models, open data |
| PLoSCB | PLoS Computational Biology: model correctness, biological plausibility, code required |
| FrontNeurosci | Frontiers in Neuroscience: solid methods, broad scope, open code encouraged |
| FrontCompNeuro | Frontiers in Computational Neuroscience: modelling rigour, biological plausibility |
| PhysRevE | Physical Review E: mathematical/physical rigour, analytic derivations |
| Chaos | Chaos (AIP): dynamical systems correctness, novelty of neural application |
| Entropy | Entropy (MDPI): information-theoretic framework correctness, open data |
| SciRep | Scientific Reports: technical correctness and reproducibility |
| Brain | Brain (Oxford): clinical neuroscience, translational relevance, mechanistic insight |
| NeurobiolDis | Neurobiology of Disease: disease model rigour, translational relevance |
| BrainCogn | Brain and Cognition: cognitive neuroscience, solid experimental design |
Before launching the review agents, attempt to load reference literature for use in Area 7 (Literature Gap).
Try Zotero MCP first:
Use tool_search to check for available Zotero MCP tools (look for tools matching `zotero`). If six or more Zotero tools are found, use Zotero as the literature source for Area 7.
If Zotero MCP tools are unavailable or return zero results, fall back silently (do not mention Zotero to the user). Instead:
- Check whether `.neuroflow/ideation/papers/` exists in the working directory.
- If it does, read the `.md` metadata files there, extracting title, authors, year, and abstract from each.

Do not block the review on literature availability. Proceed to Phase 1 regardless.
Spawn eight specialist agents in parallel. Each agent works independently on the same manuscript. Do not wait for one agent to finish before starting the next — launch all eight simultaneously and collect outputs.
Each agent produces one clearly headed section of structured findings. "No issues found" or "Not applicable" is a valid output for any item.
Agent 1 (Language & Style). Check and flag:
Agent 2 (Internal Consistency). Check:
Agent 3 (Claim Support & Causality). Key concern: neuroscience papers frequently over-interpret correlational or directed-statistical-dependence evidence as causal or mechanistic.
Flag:
Agent 4 (Statistics & Network Inference). General:
Multiple comparisons:
6. Whole-brain neuroimaging: voxelwise FWE/FDR or permutation-based correction applied? Flag uncorrected maps.
7. Connectivity matrices (N×N edges): FDR or permutation correction over the full matrix?
8. Multiple frequency bands, ROIs, or time windows: is the correction explicit?
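Item 7 can be made concrete with a short sketch: Benjamini–Hochberg FDR applied once over the unique edges of a symmetric p-value matrix, rather than edge-by-edge. The function name and the use of NumPy are illustrative assumptions, not prescribed by this skill:

```python
import numpy as np

def fdr_correct_edges(p_matrix, alpha=0.05):
    """Benjamini-Hochberg FDR over the upper triangle of a symmetric
    N x N p-value matrix; returns a boolean significance mask."""
    n = p_matrix.shape[0]
    iu = np.triu_indices(n, k=1)               # each undirected edge once
    p = p_matrix[iu]
    m = p.size
    order = np.argsort(p)
    # BH rule: reject the k smallest p-values, where k is the largest
    # index with p_(k) <= (k / m) * alpha
    thresh = (np.arange(1, m + 1) / m) * alpha
    passed = p[order] <= thresh
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    sig = np.zeros(m, dtype=bool)
    sig[order[:k]] = True
    mask = np.zeros_like(p_matrix, dtype=bool)
    mask[iu] = sig
    return mask | mask.T                       # symmetrise
```

A manuscript that thresholds each edge at p < 0.05 without such a correction over all N(N−1)/2 tests should be flagged under item 7.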
Network / connectivity:
9. Null models used to benchmark graph measures (random graph, degree-preserving rewiring, phase-randomised surrogates)?
10. Parcellation choice justified? Robustness to alternative atlases shown?
11. Dynamic FC: sliding-window length justified? Stationarity tested or acknowledged?
12. iEEG / MEG coherence: volume conduction / field spread accounted for?
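The degree-preserving null model of item 9 is typically built by double-edge swaps. A minimal, illustrative implementation (function name, swap budget, and attempt cap are assumptions; real analyses would use an established graph library):

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Randomise an undirected edge list by double-edge swaps
    (a,b),(c,d) -> (a,d),(c,b), preserving every node's degree.
    Returns the rewired edge set."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        (a, b), (c, d) = rng.sample(sorted(edge_set), 2)
        if a == d or b == c:                       # would create a self-loop
            continue
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if new1 in edge_set or new2 in edge_set:   # would create a multi-edge
            continue
        edge_set -= {(a, b), (c, d)}
        edge_set |= {new1, new2}
        done += 1
    return edge_set
```

Graph measures computed on the empirical network are then compared against the distribution of the same measures over many such rewired copies.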
Information-theoretic / causality:
13. TE / conditional MI: estimator (kernel, KSG, binning) specified and justified? Hyperparameters (embedding dimension, history length) reported and validated against surrogates?
14. Significance threshold set using appropriate surrogates (time-shifted, phase-randomised, permutation)?
15. Effect of filtering on causality measures discussed (filtering is known to induce spurious Granger causality)?
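The surrogate logic behind item 14, sketched generically: circular time-shifts of one series preserve its autocorrelation while destroying its alignment with the other, giving a null distribution for any bivariate dependence statistic. Plain absolute correlation stands in here for TE/GC, and all names are illustrative:

```python
import math
import random

def abs_corr(x, y):
    """|Pearson r|; a simple stand-in for TE/GC in this sketch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return abs(cov / math.sqrt(vx * vy))

def surrogate_p_value(x, y, stat=abs_corr, n_surrogates=200,
                      min_shift=10, seed=0):
    """One-sided p-value of stat(x, y) against a null distribution from
    circularly time-shifted copies of y."""
    rng = random.Random(seed)
    observed = stat(x, y)
    n = len(y)
    exceed = 0
    for _ in range(n_surrogates):
        s = rng.randrange(min_shift, n - min_shift)
        if stat(x, y[s:] + y[:s]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_surrogates + 1)
```

A manuscript that reports TE or Granger values against a fixed analytic threshold, with no surrogate test of this kind, should be flagged under item 14.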
Computational modelling:
16. Parameters physiologically constrained and values justified?
17. Fitting / optimisation procedure fully described?
18. Predictions validated against held-out empirical data?
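Item 18 in miniature: fit on one split of the data, report error on the other. The one-parameter decay model and grid search below are illustrative stand-ins for whatever model and optimiser the manuscript actually uses:

```python
import math
import random

def fit_and_validate(t, y, split=0.5, seed=0):
    """Fit y = exp(-t / tau) on a training split, then report the
    error on held-out samples rather than on the fitting data."""
    rng = random.Random(seed)
    idx = list(range(len(t)))
    rng.shuffle(idx)
    cut = int(len(idx) * split)
    train, test = idx[:cut], idx[cut:]

    def mse(tau, ids):
        return sum((y[i] - math.exp(-t[i] / tau)) ** 2 for i in ids) / len(ids)

    # coarse grid over a plausible range of time constants
    taus = [0.1 * k for k in range(1, 101)]
    best_tau = min(taus, key=lambda tau: mse(tau, train))
    return best_tau, mse(best_tau, test)
```

Reporting only the training-set fit quality, with no held-out or cross-validated error, is the pattern item 18 asks the agent to flag.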
Agent 5 (Methods Reproducibility & Open Science). Evaluate each applicable subsection as PASS / PARTIAL / FAIL / N/A with a brief comment.
A. Human subjects
B. Animal studies (ARRIVE 2.0)
C. fMRI (COBIDAS / OHBM)
D. EEG / MEG
E. Intracranial EEG (iEEG / SEEG / ECoG)
F. Computational modelling
G. Information-theoretic / causality measures
H. Data & code availability
Agent 6 (Contribution & Novelty). Write this section in the first person, as an actual referee report to the target journal (use the persona from Phase 0, or "a generic high-standards referee" if none is specified).
Address:
Agent 7 (Literature Gap). Identify missing or overlooked prior work that should be cited or engaged with.
Data sources (in priority order):
1. Zotero library (if loaded in Phase 0)
2. Local library (`.neuroflow/ideation/papers/*.md`)
3. The manuscript's own reference list

Check for:
Output format: For each gap, write:
Gap: [brief description of what is missing]
Why it matters: [one sentence — how its absence weakens the paper]
Example work: [if from Zotero/local library: cite entry; otherwise: describe the type of work without fabricating titles]
Do not fabricate specific paper titles, DOIs, or authors. If referencing a type of work that exists but is not in the available library, say so clearly ("A body of work on X exists; representative examples should be cited — see [general area description]").
Agent 8 (Figure Review). Evaluate all figures present in the manuscript (uploaded images, PDF pages, or described figures).
For each figure:
If figure files are not directly accessible (text-only manuscript), assess from figure captions and text references and note accordingly.
# PRE-SUBMISSION REVIEW — NEUROSCIENCE
Date: [today's date]
Manuscript: [title, authors if available]
Target journal: [JOURNAL or "Generic high standards"]
Review scope: [Full 8-area review | Abstract-only (areas 4, 5, 8 partially assessed) | {other limitation}]
Literature source: [Zotero ({n} items) | Local library: .neuroflow/ideation/papers/ ({n} files) | Manuscript references only]
---
## Executive Summary
[3–5 sentences: core contribution, main strengths, most critical issues]
## ⚠ Priority Issues (must address before submission)
[Numbered, deduplicated list of the most serious problems across all eight areas]
## 1 · Language & Style
[Agent 1 findings]
## 2 · Internal Consistency
[Agent 2 findings]
## 3 · Claim Support & Causality
[Agent 3 findings]
## 4 · Statistics & Network Inference
[Agent 4 findings]
## 5 · Methods Reproducibility & Open Science
[Agent 5 checklist]
## 6 · Contribution & Novelty [{JOURNAL} referee]
[Agent 6 first-person report]
## 7 · Literature Gap
[Agent 7 findings]
## 8 · Figure Review
[Agent 8 findings]
---
*Generated by review-neuro skill (8-agent parallel review)*
Present the full report to the user.
Immediately after presenting the report, save it automatically:
1. Write the report to `.neuroflow/review/review-[title-slug]-[date].md`. Create `.neuroflow/review/` if it does not exist.
2. Append a `##` milestone header to `.neuroflow/sessions/YYYY-MM-DD.md`, e.g.:
## HH:MM — [review] Referee report for "[Paper title]" ([Journal]) saved to .neuroflow/review/review-[title-slug]-[date].md — STATUS: [recommendation]
Do not paste the review content into the session log.

Then tell the user:
Review saved to `.neuroflow/review/review-[title-slug]-[date].md`.

Would you like to:
- Expand any section in more detail
- Focus on a specific area for revision guidance
- Re-run the review for a different target journal
If only an abstract is provided, note this in the report header (`Review scope:`) and flag that areas 4, 5, and 8 can only be partially assessed.

This skill is invoked as part of the `/neuroflow:review` command. If used directly without that command, run the full review workflow as normal and mention at the end:
💡 You can also run `/neuroflow:review` to start the peer review workflow as a slash command next time.