This skill should be used when the user asks to "draft a JOSS paper", "write paper.md", "create JOSS paper", "write JOSS submission", "draft paper.md for JOSS", "convert paper to JOSS format", or mentions writing a JOSS software paper. It discovers existing manuscripts and package content, then drafts a complete paper.md and paper.bib following JOSS format requirements.
From pub-pipeline. Install: `npx claudepluginhub queelius/claude-anvil --plugin pub-pipeline`. This skill uses the workspace's default tool permissions.
Analyze an R/Python package and draft a complete paper.md and paper.bib for JOSS submission. The paper follows current JOSS format requirements (2025+) with all required sections.
Research packages exist in messy, real-world contexts — companion papers, existing LaTeX manuscripts, deep vignettes, .papermill.md state files, ecosystem documentation. Use your intelligence to discover and adapt what already exists rather than generating from scratch. Good prose the author has already written is almost always better than template-generated text.
The template structure below is guidance, not a straitjacket. Use it when starting from nothing; reshape existing content when you find it.
Before generating anything, look for existing paper-like content that can inform or seed the JOSS paper. This is the most important step — skipping it means ignoring potentially excellent prose the author has already written.
Search for existing manuscripts (Glob/Read tools):
- `paper/` directory — LaTeX (.tex), Markdown (.md), Rmd (.Rmd) manuscripts
- `inst/paper/`, `manuscript/`, `doc/` — alternative locations
- `.papermill.md` — papermill state file with thesis, venue, outline, and strategic context
- `ECOSYSTEM.md`, `RESEARCH-NOTES.md` — ecosystem context and strategic notes

Evaluate what you find:
Key principle: If the author has a LaTeX paper that already describes the package's purpose, design, and API — reshape it into JOSS format rather than starting from a blank template. Preserve the author's voice, terminology, and framing. Only add what's missing (e.g., State of the Field, AI Disclosure) and trim what's too long.
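As a sketch, the discovery pass over those conventional locations might look like the following. The patterns and the helper name are assumptions for illustration; adjust them to the actual repo layout:

```python
import glob

# Conventional locations for existing paper-like content (assumed patterns;
# adjust for the actual repository layout).
PATTERNS = [
    "paper/**/*.tex", "paper/**/*.md", "paper/**/*.Rmd",
    "inst/paper/**/*", "manuscript/**/*", "doc/**/*.Rmd",
    ".papermill.md", "ECOSYSTEM.md", "RESEARCH-NOTES.md",
]

def find_paper_sources(root="."):
    """Return a sorted, de-duplicated list of candidate manuscript files."""
    hits = []
    for pattern in PATTERNS:
        hits.extend(glob.glob(f"{root}/{pattern}", recursive=True))
    return sorted(set(hits))
```

Anything this surfaces should be read before drafting, per the key principle above.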
Thoroughly understand what the package does. Skip files you already read in Step 1.
Read core files (Read tool):
- `DESCRIPTION` / `pyproject.toml` — title, description, author info, dependencies
- `README.md` or `README.Rmd` — overview, examples, installation
- `CLAUDE.md` — architecture and design decisions (if present)
- `NEWS.md` / `CHANGELOG.md` — development history and features
- `CITATION.cff` or `inst/CITATION` — existing citation metadata

Understand the API (Glob/Grep/Read tools):
- `NAMESPACE` (R) or public API (Python)

Check vignettes/tutorials (Glob/Read tools):
Identify the research context:
- `related_work` in the user config for declared companion papers, preprints, and sibling packages. If a companion paper exists at a local path, read it for context.

Gather author name, ORCID, affiliation, and email. Check these sources before prompting the user:
- `CITATION.cff` or `inst/CITATION`
- `DESCRIPTION` (R) or `pyproject.toml` (Python)
- `.papermill.md` frontmatter
- `.claude/pub-pipeline.local.md` (user config for this plugin)
- `deets` CLI (if available — a personal metadata store)

The information is usually already in the repo somewhere. Only ask the user for what you can't find.
Search for competing/related packages. Name specific packages and explain how this one differs:
How to find competitors (WebSearch/Read/Grep tools):
- `r.competitors` field for pre-identified competitors

This is critical — JOSS reviewers specifically check the "State of the Field" section for this.
If existing paper content was found in Step 1, use it as the primary source:
- `.bib` files

If no existing paper content was found, draft from the package analysis in Steps 2-4.
Either way, populate the YAML frontmatter from the user config (.claude/pub-pipeline.local.md, if it exists). Use r.domain and r.audience to inform the Statement of Need. Use r.competitors to seed State of the Field. Use ai_usage fields for the AI Usage Disclosure section. Use related_work entries to identify companion papers (cite them), preprints (reference the DOI), and sibling packages (mention in State of the Field or Software Design).
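The frontmatter population described above can be sketched as a small helper that serializes gathered metadata into the JOSS header. Field names follow the JOSS spec; the function name and argument shapes are assumptions for illustration:

```python
def joss_frontmatter(title, tags, authors, affiliations, date,
                     bibliography="paper.bib"):
    """Render JOSS YAML frontmatter from already-gathered metadata.

    authors: list of dicts with 'name', 'orcid', 'affiliation' keys.
    affiliations: list of affiliation strings, indexed from 1.
    """
    lines = ["---", f"title: '{title}'", "tags:"]
    lines += [f"  - {t}" for t in tags]
    lines.append("authors:")
    for a in authors:
        lines.append(f"  - name: {a['name']}")
        lines.append(f"    orcid: {a['orcid']}")
        lines.append(f"    affiliation: {a['affiliation']}")
    lines.append("affiliations:")
    for i, aff in enumerate(affiliations, start=1):
        lines.append(f"  - name: {aff}")
        lines.append(f"    index: {i}")
    lines += [f'date: "{date}"', f"bibliography: {bibliography}", "---"]
    return "\n".join(lines)
```

This is only a serialization sketch; the actual values come from the discovery steps and the user config, never from placeholders.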
The paper should follow this structure:
```markdown
---
title: 'packagename: Short description of what it does'
tags:
  - R
  - [domain tag 1]
  - [domain tag 2]
  - [method tag]
authors:
  - name: First M. Last
    orcid: 0000-0000-0000-0000
    affiliation: 1
affiliations:
  - name: Department, University
    index: 1
date: "DD Month YYYY"
bibliography: paper.bib
---

# Summary

[2-3 paragraphs: What the package does, at a high level, for a
non-specialist audience. Include the core abstraction and key
capabilities. Mention the language and any notable design choices.]

# Statement of Need

[1-2 paragraphs: What research problem this solves. Who needs it
and why. What is the gap in existing tools. Be specific about the
target audience — researchers in X, practitioners doing Y.]

# State of the Field

[1-2 paragraphs: Name specific competing packages. For each,
state what it does and what it lacks. Explain why building this
package was justified. Use citations for each package mentioned.]

# Software Design

[1-2 paragraphs: Key architectural decisions. The core abstraction
(e.g., closure-returning pattern, S3 class hierarchy). Trade-offs
made (e.g., flexibility vs performance, generality vs specificity).
Integration with other packages.]

# Research Impact Statement

[1 paragraph: Evidence of use. Published analyses, ongoing projects,
adoption by other packages. If the package is new, describe the
near-term credible impact based on the research community it serves.]

# AI Usage Disclosure

[1 paragraph: Transparent disclosure. If AI tools were used in
development, state which tools, where they were applied, and confirm
human oversight of all design decisions. If no AI was used, state
that explicitly.]

# Acknowledgements

[Funding sources, contributors, institutional support.]

# References
```
Writing guidelines:
- `[@key]` for parenthetical citations, `@key` for in-text

Create paper.bib with BibTeX entries for:
- the language itself — R (`@r_core`) or Python
- entries from `CITATION`, vignettes, or existing bibliography

If an existing .bib file was found in Step 1, import relevant entries rather than rewriting them.
Use standard BibTeX format:
```bibtex
@Manual{r_core,
  title = {R: A Language and Environment for Statistical Computing},
  author = {{R Core Team}},
  organization = {R Foundation for Statistical Computing},
  address = {Vienna, Austria},
  year = {YYYY}, % use the current year
  url = {https://www.R-project.org/},
}

@Article{package_key,
  title = {Package Title},
  author = {Author Name},
  journal = {Journal Name},
  year = {2023},
  doi = {10.xxxx/xxxxx},
}
```
After writing, check:
- all `@key` citations have matching BibTeX entries
- the date follows the `%e %B %Y` format (e.g., 5 March 2025)

Place `paper.md` and `paper.bib` at the package root or in a `paper/` subdirectory. If there's an existing `paper/` directory with other content (e.g., a LaTeX manuscript), place them alongside it and note the relationship.
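The citation cross-check in the list above can be sketched with two deliberately naive regexes. They will over-match (e.g., email addresses look like citation keys), so treat the result as a list of candidates to inspect, not a hard failure:

```python
import re

def check_citations(paper_md: str, paper_bib: str) -> set:
    """Return keys cited in paper.md that have no entry in paper.bib.

    Naive sketch: the cited-key regex also matches email-like text,
    so review the output rather than failing on it blindly.
    """
    cited = set(re.findall(r"@([A-Za-z][\w:-]*)", paper_md))
    defined = set(re.findall(r"@\w+\{([^,\s]+),", paper_bib))
    return cited - defined
```

An empty set means every `@key` in the draft resolves to a BibTeX entry.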
For JOSS format specifications and exemplar papers, consult:
- `${CLAUDE_PLUGIN_ROOT}/docs/joss-reference.md` — complete JOSS paper format spec, YAML fields, citation syntax, math/figure syntax
- `${CLAUDE_PLUGIN_ROOT}/docs/joss-exemplars.md` — real JOSS R package papers with full content, structural analysis, patterns and anti-patterns

For autonomous multi-agent drafting, launch the `pub-pipeline:joss-writer` agent via the Task tool. It will:
- spawn the `pub-pipeline:field-scout` agent to map competing packages
- draft `paper.md` and `paper.bib` with all required JOSS sections

For critical review of the draft, launch the `pub-pipeline:joss-reviewer` agent. It spawns `pub-pipeline:software-auditor`, `pub-pipeline:community-auditor`, and `pub-pipeline:field-scout` in parallel, then synthesizes a unified JOSS checklist report.
For lighter-weight review, self-review against the checklist in joss-audit.