Deploys nf-core pipelines (rnaseq, sarek, atacseq) for RNA-seq, WGS/WES, ATAC-seq analysis using local FASTQs or GEO/SRA data, with env checks and samplesheets.
Files:
- LICENSE.txt
- references/geo-sra-acquisition.md
- references/installation.md
- references/pipelines/atacseq.md
- references/pipelines/rnaseq.md
- references/pipelines/sarek.md
- references/troubleshooting.md
- scripts/check_environment.py
- scripts/config/genomes.yaml
- scripts/config/pipelines/atacseq.yaml
- scripts/config/pipelines/rnaseq.yaml
- scripts/config/pipelines/sarek.yaml
- scripts/detect_data_type.py
- scripts/generate_samplesheet.py
- scripts/manage_genomes.py
- scripts/sra_geo_fetch.py
- scripts/utils/__init__.py
- scripts/utils/file_discovery.py
- scripts/utils/ncbi_utils.py
- scripts/utils/sample_inference.py
Run nf-core bioinformatics pipelines on local or public sequencing data.
Target users: Bench scientists and researchers without specialized bioinformatics training who need to run large-scale omics analyses—differential expression, variant calling, or chromatin accessibility analysis.
- [ ] Step 0: Acquire data (if from GEO/SRA)
- [ ] Step 1: Environment check (MUST pass)
- [ ] Step 2: Select pipeline (confirm with user)
- [ ] Step 3: Run test profile (MUST pass)
- [ ] Step 4: Create samplesheet
- [ ] Step 5: Configure & run (confirm genome with user)
- [ ] Step 6: Verify outputs
Skip this step if user has local FASTQ files.
For public datasets, fetch from GEO/SRA first. See references/geo-sra-acquisition.md for the full workflow.
Quick start:

```bash
# 1. Get study info
python scripts/sra_geo_fetch.py info GSE110004

# 2. Download (interactive mode)
python scripts/sra_geo_fetch.py download GSE110004 -o ./fastq -i

# 3. Generate samplesheet
python scripts/sra_geo_fetch.py samplesheet GSE110004 --fastq-dir ./fastq -o samplesheet.csv
```
DECISION POINT: After fetching study info, confirm with the user that the study and sample selection are correct before downloading.
Then continue to Step 1.
Run this first. Pipelines will fail if the environment checks do not pass.

```bash
python scripts/check_environment.py
```
All critical checks must pass. If any fail, provide fix instructions:

Docker:

| Problem | Fix |
|---|---|
| Not installed | Install from https://docs.docker.com/get-docker/ |
| Permission denied | `sudo usermod -aG docker $USER`, then re-login |
| Daemon not running | `sudo systemctl start docker` |
Nextflow:

| Problem | Fix |
|---|---|
| Not installed | `curl -s https://get.nextflow.io \| bash && mv nextflow ~/bin/` |
| Version < 23.04 | `nextflow self-update` |
Java:

| Problem | Fix |
|---|---|
| Not installed / < 11 | `sudo apt install openjdk-11-jdk` |
Do not proceed until all checks pass. For HPC/Singularity, see references/troubleshooting.md.
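As an illustration of what an environment check does (this is a sketch, not the bundled check_environment.py), a minimal Python probe for the critical tools:

```python
import shutil
import subprocess

def check_tool(name, version_args):
    """Return a one-line version string for a tool, or None if it is not on PATH."""
    if shutil.which(name) is None:
        return None
    out = subprocess.run([name, *version_args], capture_output=True, text=True)
    text = (out.stdout or out.stderr).strip()  # java prints its version to stderr
    return text.splitlines()[0] if text else "unknown version"

# Critical dependencies for running nf-core pipelines with Docker
CHECKS = {
    "docker": ["--version"],
    "nextflow": ["-version"],
    "java": ["-version"],
}

for tool, args in CHECKS.items():
    version = check_tool(tool, args)
    print(f"{tool:10s} {('OK' if version else 'MISSING'):8s} {version or ''}")
```

The real script performs deeper checks (daemon status, version minimums); this only shows the shape of the verification.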
DECISION POINT: Confirm with user before proceeding.
| Data Type | Pipeline | Version | Goal |
|---|---|---|---|
| RNA-seq | rnaseq | 3.22.2 | Gene expression |
| WGS/WES | sarek | 3.7.1 | Variant calling |
| ATAC-seq | atacseq | 2.1.2 | Chromatin accessibility |
Auto-detect from data:

```bash
python scripts/detect_data_type.py /path/to/data
```
For pipeline-specific details, see references/pipelines/<pipeline>.md.
This validates the environment against a small bundled dataset. It MUST pass before running real data.

```bash
nextflow run nf-core/<pipeline> -r <version> -profile test,docker --outdir test_output
```
| Pipeline | Command |
|---|---|
| rnaseq | nextflow run nf-core/rnaseq -r 3.22.2 -profile test,docker --outdir test_rnaseq |
| sarek | nextflow run nf-core/sarek -r 3.7.1 -profile test,docker --outdir test_sarek |
| atacseq | nextflow run nf-core/atacseq -r 2.1.2 -profile test,docker --outdir test_atacseq |
Verify:

```bash
ls test_output/multiqc/multiqc_report.html
grep "Pipeline completed successfully" .nextflow.log
```
If test fails, see references/troubleshooting.md.
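The two verification commands above can be rolled into a single check; a Python sketch (paths are assumptions, not part of the skill's scripts):

```python
from pathlib import Path

def run_succeeded(outdir, log_path=".nextflow.log"):
    """True when the MultiQC report exists and the Nextflow log records a clean finish."""
    report = Path(outdir) / "multiqc" / "multiqc_report.html"
    log = Path(log_path)
    finished = (
        log.exists()
        and "Pipeline completed successfully" in log.read_text(errors="replace")
    )
    return report.is_file() and finished
```

The same check works for Step 6 by pointing `outdir` at `results/`.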
```bash
python scripts/generate_samplesheet.py /path/to/data <pipeline> -o samplesheet.csv
```
The script discovers FASTQ files, infers sample names and read pairing, and writes a samplesheet in the selected pipeline's format.
For sarek: the script prompts for tumor/normal status if it cannot be auto-detected.
```bash
python scripts/generate_samplesheet.py --validate samplesheet.csv <pipeline>
```
rnaseq:

```csv
sample,fastq_1,fastq_2,strandedness
SAMPLE1,/abs/path/R1.fq.gz,/abs/path/R2.fq.gz,auto
```

sarek:

```csv
patient,sample,lane,fastq_1,fastq_2,status
patient1,tumor,L001,/abs/path/tumor_R1.fq.gz,/abs/path/tumor_R2.fq.gz,1
patient1,normal,L001,/abs/path/normal_R1.fq.gz,/abs/path/normal_R2.fq.gz,0
```

atacseq:

```csv
sample,fastq_1,fastq_2,replicate
CONTROL,/abs/path/ctrl_R1.fq.gz,/abs/path/ctrl_R2.fq.gz,1
```
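As a sketch of the kind of checks `--validate` performs for an rnaseq samplesheet (this is an illustration, not the actual script; the accepted strandedness values are assumed to be auto/forward/reverse/unstranded):

```python
import csv
import os

REQUIRED = ["sample", "fastq_1", "fastq_2", "strandedness"]  # rnaseq schema

def validate_rnaseq_samplesheet(path):
    """Return a list of problems found; an empty list means the sheet looks valid."""
    problems = []
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh)
        missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing columns: {missing}"]
        for i, row in enumerate(reader, start=2):  # line 1 is the header
            for col in ("fastq_1", "fastq_2"):
                p = row.get(col, "")
                if p and not os.path.isabs(p):
                    problems.append(f"line {i}: {col} must be an absolute path: {p}")
            if row["strandedness"] not in ("auto", "forward", "reverse", "unstranded"):
                problems.append(f"line {i}: bad strandedness: {row['strandedness']}")
    return problems
```

The real validator enforces the pipeline's full schema; absolute FASTQ paths and a valid strandedness value are the failures seen most often.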
```bash
python scripts/manage_genomes.py check <genome>
# If not installed:
python scripts/manage_genomes.py download <genome>
```
Common genomes: GRCh38 (human), GRCh37 (legacy), GRCm39 (mouse), R64-1-1 (yeast), BDGP6 (fly)
DECISION POINT: Confirm the reference genome with the user before launching the run.
```bash
nextflow run nf-core/<pipeline> \
  -r <version> \
  -profile docker \
  --input samplesheet.csv \
  --outdir results \
  --genome <genome> \
  -resume
```

Key flags:
- `-r`: Pin the pipeline version
- `-profile docker`: Use Docker (or `singularity` for HPC)
- `--genome`: iGenomes key
- `-resume`: Continue from the last checkpoint

Resource limits (if needed):

```bash
--max_cpus 8 --max_memory '32.GB' --max_time '24.h'
```
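Note that recent nf-core releases (template 3.x, which covers the pinned versions above) deprecate the `--max_*` parameters in favor of a `resourceLimits` block supplied via a custom config; a minimal sketch, with illustrative values:

```groovy
// custom.config - caps requested resources; pass with: nextflow run ... -c custom.config
process {
    resourceLimits = [ cpus: 8, memory: 32.GB, time: 24.h ]
}
```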
```bash
ls results/multiqc/multiqc_report.html
grep "Pipeline completed successfully" .nextflow.log
```
rnaseq:
- results/star_salmon/salmon.merged.gene_counts.tsv - Gene counts
- results/star_salmon/salmon.merged.gene_tpm.tsv - TPM values

sarek:
- results/variant_calling/*/ - VCF files
- results/preprocessing/recalibrated/ - BAM files

atacseq:
- results/macs2/narrowPeak/ - Peak calls
- results/bwa/mergedLibrary/bigwig/ - Coverage tracks

For common exit codes and fixes, see references/troubleshooting.md.
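For downstream work in Python, the gene counts matrix can be read with the standard library alone; a sketch assuming the usual salmon.merged.gene_counts.tsv layout (gene_id, gene_name, then one column per sample):

```python
import csv
import io

def load_gene_counts(tsv_text):
    """Parse a gene counts matrix into {gene_id: {sample: count}}."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    header = next(reader)
    samples = header[2:]  # first two columns are gene_id and gene_name
    return {row[0]: dict(zip(samples, map(float, row[2:]))) for row in reader}
```

In practice you would pass `open(path).read()` (or switch to pandas) rather than an in-memory string; the string form just keeps the sketch self-contained.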
To resume after fixing an error:

```bash
nextflow run nf-core/<pipeline> -resume
```
This skill is provided as a prototype example demonstrating how to integrate nf-core bioinformatics pipelines into Claude Code for automated analysis workflows. The current implementation supports three pipelines (rnaseq, sarek, and atacseq), serving as a foundation that enables the community to expand support to the full set of nf-core pipelines.
It is intended for educational and research purposes and should not be considered production-ready without appropriate validation for your specific use case. Users are responsible for ensuring their computing environment meets pipeline requirements and for verifying analysis results.
Anthropic does not guarantee the accuracy of bioinformatics outputs, and users should follow standard practices for validating computational analyses. This integration is not officially endorsed by or affiliated with the nf-core community.
When publishing results, cite the appropriate pipeline. Citations are available in each nf-core repository's CITATIONS.md file (e.g., https://github.com/nf-core/rnaseq/blob/3.22.2/CITATIONS.md).