From awesome-cognitive-and-neuroscience-skills
Extracts research paradigms, experimental designs, and analysis pipelines from cognitive science papers into structured Claude Code skills. Activates on paper uploads with extraction requests.
```shell
npx claudepluginhub neuroaihub/awesome_cognitive_and_neuroscience_skills --plugin awesome-cognitive-and-neuroscience-skills
```

This skill uses the workspace's default tool permissions.
An interactive skill for extracting **research paradigms and methodological techniques** from cognitive science and neuroscience papers. The output is a well-structured skill conforming to this project's SKILL.md format.
Focus: Strict extraction of reproducible methods — experimental designs, data acquisition parameters, processing pipelines, analysis procedures, and stimulus specifications. This is NOT about summarizing a paper's novelty or theoretical contributions.
Activate this skill when the user:
Before extracting skills from a paper, you MUST:
For detailed methodology guidance, see the research-literacy skill.
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
PDF Reading Guidance — Claude Code's Read tool natively supports PDF files. Use the following strategy:
1. Read the PDF in chunks using the `pages` parameter (maximum 20 pages per request). Example sequence: `pages: "1-10"`, then `pages: "11-20"`, and so on.
2. Read pages 1-2 first (abstract + introduction) to identify the paper type and decide whether full extraction is warranted.
3. Then read the Methods section in detail (locate the relevant page range from the table of contents or section headers).
4. Read Results and Discussion selectively for reported parameter values not stated in Methods.
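The chunked reading strategy above can be sketched as a small helper that generates `pages` arguments. The function name and interface are illustrative, not part of the Read tool's API; only the 20-page cap comes from the guidance above.

```python
# Sketch: generate `pages` range strings for chunked PDF reading.
# The 20-page default mirrors the per-request limit described above.
def page_chunks(total_pages: int, chunk_size: int = 20) -> list[str]:
    """Return page-range strings like "1-20", "21-40", ... for a PDF."""
    chunks = []
    start = 1
    while start <= total_pages:
        end = min(start + chunk_size - 1, total_pages)
        chunks.append(f"{start}-{end}")
        start = end + 1
    return chunks
```

For a 45-page paper this yields `["1-20", "21-40", "41-45"]`, which can be fed to successive Read calls.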
Identify the paper type — this determines the extraction strategy:
See `references/extraction-guide.md` for detailed extraction strategies per paper type.
Scan the paper and identify all extractable methodological content organized into these categories:
| Category | What to Look For |
|---|---|
| Experimental Design | Paradigm name, trial structure, timing parameters, condition setup, counterbalancing scheme, block design |
| Data Acquisition | Sampling rate, electrode montage, imaging parameters, eye-tracking settings, physiological recording setup |
| Data Processing | Preprocessing steps with parameters, artifact handling methods, data cleaning criteria, epoching parameters |
| Analysis Methods | Statistical models, multiple comparison corrections, effect size calculations, visualization methods, decoding approaches |
| Stimulus Materials | Construction rules, control variables, norming standards, presentation parameters, response mappings |
Present candidates to the user in the following format:
I identified the following extractable methods from this paper:
## Experimental Design
- [1] Paradigm: <name> — <brief description>
- [2] Trial structure: <summary of trial flow and timing>
## Data Acquisition
- [3] <Modality> recording setup: <key parameters>
## Data Processing
- [4] Preprocessing pipeline: <step summary>
- [5] Artifact rejection: <method and criteria>
## Analysis Methods
- [6] <Analysis name>: <brief description>
- [7] <Analysis name>: <brief description>
## Stimulus Materials
- [8] <Material type>: <construction approach>
Which items would you like me to extract into skills?
(Enter numbers, ranges like 1-4, or "all")
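The selection reply format above ("1, 3, 4-6", or "all") can be parsed with a short helper. This is a hypothetical sketch; candidate numbering follows the list presented to the user.

```python
# Sketch: parse the user's candidate selection ("1, 3, 4-6" or "all").
def parse_selection(reply: str, n_candidates: int) -> list[int]:
    reply = reply.strip().lower()
    if reply == "all":
        return list(range(1, n_candidates + 1))
    selected: set[int] = set()
    for token in reply.replace(",", " ").split():
        if "-" in token:
            lo, hi = token.split("-")
            selected.update(range(int(lo), int(hi) + 1))
        else:
            selected.add(int(token))
    # Silently drop out-of-range numbers rather than failing the turn.
    return sorted(i for i in selected if 1 <= i <= n_candidates)
```

For example, `parse_selection("1, 3, 4-6", 8)` returns `[1, 3, 4, 5, 6]`.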
Before presenting candidates, apply this strict suitability filter to each one:
SUITABLE — include if the candidate:
| Criterion | Examples |
|---|---|
| Describes an experimental paradigm or design with specifics | Trial structure, timing parameters, condition definitions, counterbalancing |
| Describes a data processing pipeline with parameters | Preprocessing steps, filter cutoffs, software settings |
| Describes an analysis method with concrete steps | Statistical model specification, time-frequency decomposition, classification pipeline |
| Contains specific numerical parameters or settings | Thresholds, epoch windows, stimulus dimensions, sample sizes |
| Describes stimulus construction norms | Norming procedures, controlled variables, material selection criteria |
| Describes a computational model with equations/parameters | Model fitting procedure, parameter priors, model comparison strategy |
| Provides actionable methodological recommendations with specific values | "Use minimum 30 trials per condition", "Set high-pass filter no lower than 0.1 Hz" |
NOT SUITABLE — filter out if the candidate:
| Criterion | Examples |
|---|---|
| Is narrative or historical overview | "The study of attention began with William James..." |
| Is a definition without actionable parameters | "Working memory is defined as..." |
| Is theoretical debate without methods | "The modularity hypothesis predicts..." |
| Is motivation or background only | "Previous studies have shown that..." leading to no method |
| Contains only results without methodological detail | "The ANOVA revealed a significant main effect..." |
Decision rule: "Does this candidate contain enough specific, actionable detail that a researcher could REPRODUCE a method, pipeline, or paradigm from it?" If YES → [SUITABLE]. If NO or UNCERTAIN → [FILTERED — reason].
Mark each candidate when presenting to the user. Filtered candidates are shown but de-prioritized — the user can override any filter decision.
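As a very crude mechanical proxy for the decision rule above: candidates that mention concrete numbers with units (Hz, ms, trials, ...) are more likely to be reproducible targets than pure narrative. The sketch below is a heuristic pre-screen only, not a replacement for actually reading the candidate text, and the unit list is an illustrative assumption.

```python
import re

# Heuristic sketch: flag candidate text that contains at least one
# number-with-unit mention as a likely [SUITABLE] candidate.
UNIT_PATTERN = re.compile(
    r"\b\d+(?:\.\d+)?\s*(?:hz|khz|ms|s|mm|cm|uv|db|trials?|participants?)\b",
    re.IGNORECASE,
)

def looks_suitable(candidate_text: str) -> bool:
    """True if the text contains a specific numerical parameter with a unit."""
    return bool(UNIT_PATTERN.search(candidate_text))
```

Narrative sentences with no parameters ("The study of attention began with William James...") fail this check, while "Set high-pass filter no lower than 0.1 Hz" passes.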
When generating the skill:
- Follow the skill template (see `references/skill-template.md`).
- Fill in the `name`, `description`, and `papers` fields.
- Move overflow content into a `references/` subdirectory.

After generating the skill but before saving, perform a systematic verification of every numerical parameter and specific factual claim against the source paper.
Verification procedure — for each numerical value or specific claim in the generated skill:
| Issue Type | Description | Severity |
|---|---|---|
| `not_found` | Claim appears in the skill but cannot be found in the source — likely hallucinated | High |
| `value_mismatch` | Value exists in source but differs (e.g., skill says "250 ms", source says "200 ms") | High |
| `unit_error` | Numerical value matches but units are wrong or missing | High |
| `context_distortion` | Value is technically present but used in misleading context | Medium |
| `location_wrong` | Value is correct but the claimed source location is wrong | Low |
| `incomplete` | Skill presents a partial version of a parameter that has important qualifiers | Low |
Reporting — Present the verification results to the user:
Self-Verification Results:
- Claims checked: N
- Verified: M
- Issues found: K
- [HIGH] <claim> — <issue type>: <details>
- [LOW] <claim> — <issue type>: <details>
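The report format above can be rendered mechanically from a list of issues. This is an illustrative sketch; the `Issue` fields mirror the issue-type table, and all names are assumptions rather than an existing API.

```python
from dataclasses import dataclass

# Sketch: build the Self-Verification Results block shown above.
@dataclass
class Issue:
    severity: str    # "HIGH", "MEDIUM", or "LOW"
    claim: str
    issue_type: str  # e.g. "value_mismatch"
    details: str

def render_report(n_checked: int, issues: list[Issue]) -> str:
    lines = [
        "Self-Verification Results:",
        f"- Claims checked: {n_checked}",
        f"- Verified: {n_checked - len(issues)}",
        f"- Issues found: {len(issues)}",
    ]
    # List high-severity issues first, matching the example above.
    order = ["HIGH", "MEDIUM", "LOW"]
    for i in sorted(issues, key=lambda x: order.index(x.severity)):
        lines.append(f"- [{i.severity}] {i.claim} — {i.issue_type}: {i.details}")
    return "\n".join(lines)
```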
Rules:
For every extracted item, the following cross-cutting rules apply to ALL categories:
These rules apply to every category below. The parameter tables in generated skills must include a Source Location column (see references/skill-template.md).
When extracting from review papers, meta-analyses, or textbook chapters, capture:
Before presenting the final skill, verify both structural compliance and content quality.
Every generated skill must pass these checks before saving:
- The file is named `SKILL.md` (uppercase) — not `skill.md`, `Skill.md`, or any other variant.
- The directory name is kebab-case: `mmn-oddball-paradigm/`, not `MMN_Oddball_Paradigm/`.
- The frontmatter includes `name` (human-readable) and `description` (one-sentence summary) fields.
- The frontmatter includes a `papers` field listing the source paper(s) in "Author, Year" format.
- The frontmatter includes `dependencies.required: [research-literacy]` (all domain skills require this; see the `research-literacy` skill for the template).
- Overflow content lives in the `references/` subdirectory.
- All files in `references/` are explicitly referenced from SKILL.md.

Every generated skill must include these sections (may be empty if no items apply, but must be explicitly checked):
- `## Missing Information` — List standard parameters for this method type that the paper does not report. Format: "- [Parameter name]: Not reported. Standard value from [field/reference] is [value]." This section helps users know what they must determine independently.
- `## Deviations from Convention` — List any methodological choices that deviate from field conventions, with the authors' stated rationale. Format: "- [Choice]: Authors used [X] instead of conventional [Y] because [reason]." This section alerts users to non-standard decisions.

When the paper is unclear or omits details:
When a paper contains multiple independent methods worth extracting:
When the user provides multiple PDFs or a directory of papers, apply the following workflow:
Batch mode activates when the user:
## Paper 1: <Title / filename>
- [1] Paradigm: ...
- [2] Analysis: ...
## Paper 2: <Title / filename>
- [3] Paradigm: ...
- [4] Data Acquisition: ...
Which items would you like to extract? (Enter numbers, ranges, "all", or "all from paper 1")
Allow cross-paper skill merging — If two or more papers describe the same or highly overlapping methods (e.g., both use the same EEG preprocessing pipeline with the same parameters), flag the overlap and offer to merge them into a single skill that cites all source papers. Only merge when the core parameters and decision logic are genuinely shared; keep skills separate when parameter choices differ.
Generate skills independently — Each generated skill must be fully self-contained. No skill may depend on another skill generated from a different paper in the same batch. Cross-reference between skills using relative paths only for closely related methods from the same paper (as in Multi-Skill Extraction above).
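When two papers in a batch yield the same method name but are kept as separate skills, the names must be disambiguated. A sketch of one way to do this, suffixing an author-year tag derived from the "Author, Year" citation (helper names are illustrative):

```python
import re

# Sketch: make batch skill names unique by appending an authorYEAR tag,
# e.g. mmn-oddball-paradigm-smith2019 vs. mmn-oddball-paradigm-jones2021.
def disambiguate(names: list[str], citations: list[str]) -> list[str]:
    """citations are "Author, Year" strings from the papers field."""
    def tag(citation: str) -> str:
        author, year = [p.strip() for p in citation.split(",")]
        return re.sub(r"[^a-z0-9]", "", author.lower()) + year

    counts: dict[str, int] = {}
    for n in names:
        counts[n] = counts.get(n, 0) + 1
    # Only suffix names that actually collide; unique names stay as-is.
    return [
        f"{n}-{tag(c)}" if counts[n] > 1 else n
        for n, c in zip(names, citations)
    ]
```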
Before finalizing batch output, verify:
- Each skill cites its source paper(s) in the `papers` frontmatter field.
- Skill names are unique across the batch (e.g., `mmn-oddball-paradigm-smith2019` vs. `mmn-oddball-paradigm-jones2021`).

Generated skills follow the directory convention:
```
skills/<generated-skill-name>/
  SKILL.md                  # Core skill content (<500 lines)
  references/
    parameter-table.yaml    # Full parameter values (if needed)
    pipeline-steps.md       # Extended pipeline details (if needed)
```
The <generated-skill-name> uses kebab-case and is descriptive of the specific method, not the paper. For example:
- `skills/mmn-oddball-paradigm/`
- `skills/fmri-preprocessing-pipeline/`
- `skills/drift-diffusion-fitting/`

User: Extract skills from this paper. [provides PDF path]
Agent: [Reads paper]
This is an experimental paper (EEG study on language processing).
Does that match your understanding?
User: Yes.
Agent: I identified the following extractable methods:
[Lists candidates by category with numbers]
Which items would you like me to extract?
User: 1, 3, 4-6
Agent: Here is the detailed extraction for your selected items:
[Shows extracted parameters for review]
Does this look correct? Any corrections needed?
User: Looks good, but the sampling rate was 512 Hz not 500 Hz.
Agent: Corrected. Here is the generated skill:
[Shows SKILL.md content]
Shall I save this to skills/<name>/SKILL.md?
User: Yes.
Agent: [Saves file]
Skill saved to skills/<name>/SKILL.md.