Write publication-ready ML/AI papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM. Use when drafting papers from research repos, structuring arguments, verifying citations, or preparing camera-ready submissions. Includes LaTeX templates, reviewer guidelines, and citation verification workflows.
Generates publication-ready machine learning papers for top AI conferences using LaTeX templates and verified citations.
npx claudepluginhub luqmannurhakimbazman/kapitan-marketplace

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Expert-level guidance for writing publication-ready papers targeting NeurIPS, ICML, ICLR, ACL, AAAI, and COLM. This skill combines writing philosophy from top researchers (Nanda, Farquhar, Karpathy, Lipton, Steinhardt) with practical tools: LaTeX templates, citation verification APIs, and conference checklists. Bundled files live under references/ (writing-guide.md, citation-workflow.md, checklists.md, reviewer-guidelines.md, sources.md) and templates/ (one directory per conference).
Paper writing is collaborative, but Claude should be proactive in delivering drafts.
The typical workflow starts with a research repository containing code, results, and experimental artifacts. Claude's role is to understand the project, draft the paper, and iterate with the scientist (the workflow below walks through each step).
Key Principle: Be proactive. If the repo and results are clear, deliver a full draft. Don't block waiting for feedback on every section—scientists are busy. Produce something concrete they can react to, then iterate based on their response.
This is the most important rule in academic writing with AI assistance.
AI-generated citations have a ~40% error rate. Hallucinated references—papers that don't exist, wrong authors, incorrect years, fabricated DOIs—are a serious form of academic misconduct that can result in desk rejection or retraction.
NEVER generate BibTeX entries from memory. ALWAYS fetch programmatically.
| Action | ✅ Correct | ❌ Wrong |
|---|---|---|
| Adding a citation | Search API → verify → fetch BibTeX | Write BibTeX from memory |
| Uncertain about a paper | Mark as [CITATION NEEDED] | Guess the reference |
| Can't find exact paper | Note: "placeholder - verify" | Invent similar-sounding paper |
If you cannot programmatically verify a citation, you MUST:
% EXPLICIT PLACEHOLDER - requires human verification
\cite{PLACEHOLDER_author2024_verify_this} % TODO: Verify this citation exists
Always tell the scientist: "I've marked [X] citations as placeholders that need verification. I could not confirm these papers exist."
For the best paper search experience, install Exa MCP which provides real-time academic search:
Claude Code:
claude mcp add exa -- npx -y mcp-remote "https://mcp.exa.ai/mcp"
Cursor / VS Code (add to MCP settings):
{
"mcpServers": {
"exa": {
"type": "http",
"url": "https://mcp.exa.ai/mcp"
}
}
}
Exa MCP enables searches like:
Then verify results with Semantic Scholar API and fetch BibTeX via DOI.
When beginning paper writing, start by understanding the project:
Project Understanding:
- [ ] Step 1: Explore the repository structure
- [ ] Step 2: Read README, existing docs, and key results
- [ ] Step 3: Identify the main contribution with the scientist
- [ ] Step 4: Find papers already cited in the codebase
- [ ] Step 5: Search for additional relevant literature
- [ ] Step 6: Outline the paper structure together
- [ ] Step 7: Draft sections iteratively with feedback
Step 1: Explore the Repository
# Understand project structure
ls -la
find . -name "*.py" | head -20
find . -name "*.md" -o -name "*.txt" | xargs grep -l -i "result\|conclusion\|finding"
Look for:
- README.md - Project overview and claims
- results/, outputs/, experiments/ - Key findings
- configs/ - Experimental settings
- .bib files or citation references

Step 2: Identify Existing Citations
Check for papers already referenced in the codebase:
# Find existing citations
grep -r "arxiv\|doi\|cite" --include="*.md" --include="*.bib" --include="*.py"
find . -name "*.bib"
These are high-signal starting points for Related Work—the scientist has already deemed them relevant.
Step 3: Clarify the Contribution
Before writing, explicitly confirm with the scientist:
"Based on my understanding of the repo, the main contribution appears to be [X]. The key results show [Y]. Is this the framing you want for the paper, or should we emphasize different aspects?"
Never assume the narrative—always verify with the human.
Step 4: Search for Additional Literature
Use web search to find relevant papers:
Search queries to try:
- "[main technique] + [application domain]"
- "[baseline method] comparison"
- "[problem name] state-of-the-art"
- Author names from existing citations
Then verify and retrieve BibTeX using the citation workflow below.
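If no MCP search is available, one minimal programmatic fallback is the public arXiv API; the sketch below is illustrative, and the query string is just an example, not a prescribed search:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def arxiv_search(query: str, max_results: int = 5) -> list[dict]:
    """Search the arXiv API and return title + arXiv entry URL for each hit."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
        {"search_query": f"all:{query}", "max_results": max_results}
    )
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    return [
        {
            "title": entry.find("atom:title", ns).text.strip(),
            "id": entry.find("atom:id", ns).text,  # e.g. http://arxiv.org/abs/1706.03762v7
        }
        for entry in feed.findall("atom:entry", ns)
    ]

# Example query; still verify each hit and fetch BibTeX via DOI as described below.
for hit in arxiv_search("sparse autoencoders interpretability"):
    print(hit["title"], "-", hit["id"])
```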
Step 5: Deliver a First Draft
Be proactive—deliver a complete draft rather than asking permission for each section.
If the repo provides clear results and the contribution is apparent:
If genuinely uncertain about framing or major claims:
Questions to include with the draft (not before):
Use this skill when:
Always remember: First drafts are starting points for discussion, not final outputs.
Default: Be proactive. Deliver drafts, then iterate.
| Confidence Level | Action |
|---|---|
| High (clear repo, obvious contribution) | Write full draft, deliver, iterate on feedback |
| Medium (some ambiguity) | Write draft with flagged uncertainties, continue |
| Low (major unknowns) | Ask 1-2 targeted questions, then draft |
Draft first, ask with the draft (not before):
| Section | Draft Autonomously | Flag With Draft |
|---|---|---|
| Abstract | Yes | "Framed contribution as X—adjust if needed" |
| Introduction | Yes | "Emphasized problem Y—correct if wrong" |
| Methods | Yes | "Included details A, B, C—add missing pieces" |
| Experiments | Yes | "Highlighted results 1, 2, 3—reorder if needed" |
| Related Work | Yes | "Cited papers X, Y, Z—add any I missed" |
Only block for input when:
Don't block for:
The single most critical insight: Your paper is not a collection of experiments—it's a story with one clear contribution supported by evidence.
Every successful ML paper centers on what Neel Nanda calls "the narrative": a short, rigorous, evidence-based technical story with a takeaway readers care about.
Three Pillars (must be crystal clear by end of introduction):
| Pillar | Description | Example |
|---|---|---|
| The What | 1-3 specific novel claims within cohesive theme | "We prove that X achieves Y under condition Z" |
| The Why | Rigorous empirical evidence supporting claims | Strong baselines, experiments distinguishing hypotheses |
| The So What | Why readers should care | Connection to recognized community problems |
If you cannot state your contribution in one sentence, you don't yet have a paper.
Copy this checklist and track progress. Each step involves drafting → feedback → revision:
Paper Writing Progress:
- [ ] Step 1: Define the one-sentence contribution (with scientist)
- [ ] Step 2: Draft Figure 1 → get feedback → revise
- [ ] Step 3: Draft abstract → get feedback → revise
- [ ] Step 4: Draft introduction → get feedback → revise
- [ ] Step 5: Draft methods → get feedback → revise
- [ ] Step 6: Draft experiments → get feedback → revise
- [ ] Step 7: Draft related work → get feedback → revise
- [ ] Step 8: Draft limitations → get feedback → revise
- [ ] Step 9: Complete paper checklist (required)
- [ ] Step 10: Final review cycle and submission
Step 1: Define the One-Sentence Contribution
This step requires explicit confirmation from the scientist.
Before writing anything, articulate and verify:
"I propose framing the contribution as: '[one sentence]'. Does this capture what you see as the main takeaway? Should we adjust the emphasis?"
Step 2: Draft Figure 1
Figure 1 deserves special attention—many readers skip directly to it.
Step 3: Write Abstract (5-Sentence Formula)
From Sebastian Farquhar (DeepMind):
1. What you achieved: "We introduce...", "We prove...", "We demonstrate..."
2. Why this is hard and important
3. How you do it (with specialist keywords for discoverability)
4. What evidence you have
5. Your most remarkable number/result
Delete generic openings like "Large language models have achieved remarkable success..."
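As a rough sketch of how the five sentences map onto an abstract, the LaTeX skeleton below uses bracketed placeholders only; it is not real content:

```latex
% Minimal sketch of the five-sentence abstract; all bracketed text is placeholder.
\begin{abstract}
We introduce [method], which [one-sentence achievement].           % 1. What you achieved
[Problem] is hard because [reason] and matters because [impact].   % 2. Why hard and important
Our approach [key technique, with specialist keywords].            % 3. How you do it
Across [benchmarks/settings], [method] [evidence summary].         % 4. What evidence you have
In particular, it improves [metric] by [X]\% over [baseline].      % 5. Most remarkable number
\end{abstract}
```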
Step 4: Write Introduction (1-1.5 pages max)
Must include:
Step 5: Methods Section
Enable reimplementation:
Step 6: Experiments Section
For each experiment, explicitly state:
Requirements:
Step 7: Related Work
Organize methodologically, not paper-by-paper:
Good: "One line of work uses Floogledoodle's assumption [refs] whereas we use Doobersnoddle's assumption because..."
Bad: "Snap et al. introduced X while Crackle et al. introduced Y."
Cite generously—reviewers likely authored relevant papers.
Step 8: Limitations Section (REQUIRED)
All major conferences require this. Counter-intuitively, honesty helps:
Step 9: Paper Checklist
NeurIPS, ICML, and ICLR all require paper checklists. See references/checklists.md.
This section distills the most important writing principles from leading ML researchers. These aren't optional style suggestions—they're what separates accepted papers from rejected ones.
"A paper is a short, rigorous, evidence-based technical story with a takeaway readers care about." — Neel Nanda
This skill synthesizes writing philosophy from researchers who have published extensively at top venues:
| Source | Key Contribution | Link |
|---|---|---|
| Neel Nanda (Google DeepMind) | The Narrative Principle, What/Why/So What framework | How to Write ML Papers |
| Sebastian Farquhar (DeepMind) | 5-sentence abstract formula | How to Write ML Papers |
| Gopen & Swan | 7 principles of reader expectations | Science of Scientific Writing |
| Zachary Lipton | Word choice, eliminating hedging | Heuristics for Scientific Writing |
| Jacob Steinhardt (UC Berkeley) | Precision, consistent terminology | Writing Tips |
| Ethan Perez (Anthropic) | Micro-level clarity tips | Easy Paper Writing Tips |
| Andrej Karpathy | Single contribution focus | Various lectures |
For deeper dives into any of these, see:
Spend approximately equal time on each of: the abstract, the introduction, the figures, and the rest of the paper.
Why? Most reviewers form judgments before reaching your methods. Readers encounter your paper as: title → abstract → introduction → figures → maybe the rest.
These principles are based on how readers actually process prose. Violating them forces readers to spend cognitive effort on structure rather than content.
| Principle | Rule | Example |
|---|---|---|
| Subject-verb proximity | Keep subject and verb close | ❌ "The model, which was trained on..., achieves" → ✅ "The model achieves... after training on..." |
| Stress position | Place emphasis at sentence ends | ❌ "Accuracy improves by 15% when using attention" → ✅ "When using attention, accuracy improves by 15%" |
| Topic position | Put context first, new info after | ✅ "Given these constraints, we propose..." |
| Old before new | Familiar info → unfamiliar info | Link backward, then introduce new |
| One unit, one function | Each paragraph makes one point | Split multi-point paragraphs |
| Action in verb | Use verbs, not nominalizations | ❌ "We performed an analysis" → ✅ "We analyzed" |
| Context before new | Set stage before presenting | Explain before showing equation |
Full 7 principles with detailed examples: See references/writing-guide.md
These small changes accumulate into significantly clearer prose:
Full micro-tips with examples: See references/writing-guide.md
Understanding reviewer behavior helps prioritize your effort:
| Paper Section | % Reviewers Who Read | Implication |
|---|---|---|
| Abstract | 100% | Must be perfect |
| Introduction | 90%+ (skimmed) | Front-load contribution |
| Figures | Examined before methods | Figure 1 is critical |
| Methods | Only if interested | Don't bury the lede |
| Appendix | Rarely | Put only supplementary details |
Bottom line: If your abstract and intro don't hook reviewers, they may never read your brilliant methods section.
| Conference | Page Limit | Extra for Camera-Ready | Key Requirement |
|---|---|---|---|
| NeurIPS 2025 | 9 pages | +0 | Mandatory checklist, lay summary for accepted |
| ICML 2026 | 8 pages | +1 | Broader Impact Statement required |
| ICLR 2026 | 9 pages | +1 | LLM disclosure required, reciprocal reviewing |
| ACL 2025 | 8 pages (long) | varies | Limitations section mandatory |
| AAAI 2026 | 7 pages | +1 | Strict style file adherence |
| COLM 2025 | 9 pages | +1 | Focus on language models |
Universal Requirements:
LaTeX Templates: See templates/ directory for all conference templates.
Always copy the entire template directory first, then write within it.
Template Setup Checklist:
- [ ] Step 1: Copy entire template directory to new project
- [ ] Step 2: Verify template compiles as-is (before any changes)
- [ ] Step 3: Read the template's example content to understand structure
- [ ] Step 4: Replace example content section by section
- [ ] Step 5: Keep template comments/examples as reference until done
- [ ] Step 6: Clean up template artifacts only at the end
Step 1: Copy the Full Template
# Create your paper directory with the complete template
cp -r templates/neurips2025/ ~/papers/my-new-paper/
cd ~/papers/my-new-paper/
# Verify structure is complete
ls -la
# Should see: main.tex, neurips.sty, Makefile, etc.
⚠️ IMPORTANT: Copy the ENTIRE directory, not just main.tex. Templates include:
- Style files (.sty) - required for compilation
- Bibliography styles (.bst) - required for references

Step 2: Verify Template Compiles First
Before making ANY changes, compile the template as-is:
# Using latexmk (recommended)
latexmk -pdf main.tex
# Or manual compilation
pdflatex main.tex
bibtex main
pdflatex main.tex
pdflatex main.tex
If the unmodified template doesn't compile, fix that first. Common issues:
- Missing LaTeX packages: install with tlmgr install <package>

Step 3: Keep Template Content as Reference
Don't immediately delete all example content. Instead:
% KEEP template examples commented out as you write
% This shows you the expected format
% Template example (keep for reference):
% \begin{figure}[t]
% \centering
% \includegraphics[width=0.8\linewidth]{example-image}
% \caption{Template shows caption style}
% \end{figure}
% Your actual figure:
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{your-figure.pdf}
\caption{Your caption following the same style.}
\end{figure}
Step 4: Replace Content Section by Section
Work through the paper systematically:
Replacement Order:
1. Title and authors (anonymize for submission)
2. Abstract
3. Introduction
4. Methods
5. Experiments
6. Related Work
7. Conclusion
8. References (your .bib file)
9. Appendix
For each section:
Step 5: Use Template Macros
Templates often define useful macros. Check the preamble for:
% Common template macros to use:
\newcommand{\method}{YourMethodName} % Consistent method naming
\newcommand{\eg}{e.g.,\xspace} % Proper abbreviations
\newcommand{\ie}{i.e.,\xspace}
\newcommand{\etal}{\textit{et al.}\xspace}
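If your template (or preamble) defines macros like these, use them consistently in running text. A small usage sketch, assuming the definitions above:

```latex
% Example usage (assumes the macros above are defined in your preamble):
\method{} reduces latency on all three benchmarks (\eg on the largest model),
\ie the improvement is not limited to a single setting.
```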
Step 6: Clean Up Only at the End
Only remove template artifacts when paper is nearly complete:
% BEFORE SUBMISSION - remove these:
% - Commented-out template examples
% - Unused packages
% - Template's example figures/tables
% - Lorem ipsum or placeholder text
% KEEP these:
% - All style files (.sty)
% - Bibliography style (.bst)
% - Required packages from template
% - Any custom macros you're using
| Pitfall | Problem | Solution |
|---|---|---|
| Copying only main.tex | Missing .sty, won't compile | Copy entire directory |
| Modifying .sty files | Breaks conference formatting | Never edit style files |
| Adding random packages | Conflicts, breaks template | Only add if necessary |
| Deleting template content too early | Lose formatting reference | Keep as comments until done |
| Not compiling frequently | Errors accumulate | Compile after each section |
| Conference | Main File | Key Style File | Notes |
|---|---|---|---|
| NeurIPS 2025 | main.tex | neurips.sty | Has Makefile |
| ICML 2026 | example_paper.tex | icml2026.sty | Includes algorithm packages |
| ICLR 2026 | iclr2026_conference.tex | iclr2026_conference.sty | Has math_commands.tex |
| ACL | acl_latex.tex | acl.sty | Strict formatting |
| AAAI 2026 | aaai2026-unified-template.tex | aaai2026.sty | Very strict compliance |
| COLM 2025 | colm2025_conference.tex | colm2025_conference.sty | Similar to ICLR |
When a paper is rejected or withdrawn from one venue and resubmitted to another, format conversion is required. This is a common workflow in ML research.
Format Conversion Checklist:
- [ ] Step 1: Identify source and target template differences
- [ ] Step 2: Create new project with target template
- [ ] Step 3: Copy content sections (not preamble)
- [ ] Step 4: Adjust page limits and content
- [ ] Step 5: Update conference-specific requirements
- [ ] Step 6: Verify compilation and formatting
Step 1: Key Template Differences
| From → To | Page Change | Key Adjustments |
|---|---|---|
| NeurIPS → ICML | 9 → 8 pages | Cut 1 page, add Broader Impact if missing |
| ICML → ICLR | 8 → 9 pages | Can expand experiments, add LLM disclosure |
| NeurIPS → ACL | 9 → 8 pages | Restructure for NLP conventions, add Limitations |
| ICLR → AAAI | 9 → 7 pages | Significant cuts needed, strict style adherence |
| Any → COLM | varies → 9 | Reframe for language model focus |
Step 2: Content Migration (NOT Template Merge)
Never copy LaTeX preambles between templates. Instead:
# 1. Start fresh with target template
cp -r templates/icml2026/ new_submission/
# 2. Copy ONLY content sections from old paper
# - Abstract text
# - Section content (between \section{} commands)
# - Figures and tables
# - Bibliography entries
# 3. Paste into target template structure
Step 3: Adjusting for Page Limits
When cutting pages (e.g., NeurIPS 9 → AAAI 7):
When expanding (e.g., ICML 8 → ICLR 9):
Step 4: Conference-Specific Adjustments
| Target Venue | Required Additions |
|---|---|
| ICML | Broader Impact Statement (after conclusion) |
| ICLR | LLM usage disclosure, reciprocal reviewing agreement |
| ACL/EMNLP | Limitations section (mandatory), Ethics Statement |
| AAAI | Strict adherence to style file (no modifications) |
| NeurIPS | Paper checklist (appendix), lay summary if accepted |
Step 5: Update References
% Remove self-citations that reveal identity (for blind review)
% Update any "under review" citations to published versions
% Add new relevant work published since last submission
Step 6: Addressing Previous Reviews
When resubmitting after rejection:
Common Conversion Pitfalls:
- Copying \usepackage commands between templates (causes conflicts)
- Forgetting to update the \bibliography{} path

⚠️ CRITICAL: AI-generated citations have ~40% error rate. Never write BibTeX from memory.
IF you cannot programmatically fetch a citation:
→ Mark it as [CITATION NEEDED] or [PLACEHOLDER - VERIFY]
→ Tell the scientist explicitly
→ NEVER invent a plausible-sounding reference
Citation Verification (MANDATORY for every citation):
- [ ] Step 1: Search using Exa MCP or Semantic Scholar API
- [ ] Step 2: Verify paper exists in 2+ sources (Semantic Scholar + arXiv/CrossRef)
- [ ] Step 3: Retrieve BibTeX via DOI (programmatically, not from memory)
- [ ] Step 4: Verify the claim you're citing actually appears in the paper
- [ ] Step 5: Add verified BibTeX to bibliography
- [ ] Step 6: If ANY step fails → mark as placeholder, inform scientist
Step 0: Use Exa MCP for Initial Search (Recommended)
If Exa MCP is installed, use it to find relevant papers:
Search: "RLHF language model alignment 2023"
Search: "sparse autoencoders interpretability"
Search: "attention mechanism transformers Vaswani"
Then verify each result with Semantic Scholar and fetch BibTeX via DOI.
Step 1: Search Semantic Scholar
from semanticscholar import SemanticScholar
sch = SemanticScholar()
results = sch.search_paper("attention mechanism transformers", limit=5)
for paper in results:
print(f"{paper.title} - {paper.paperId}")
print(f" DOI: {paper.externalIds.get('DOI', 'N/A')}")
Step 2: Verify Existence
Confirm paper appears in at least two sources (Semantic Scholar + CrossRef/arXiv).
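A minimal sketch of that cross-check against CrossRef (the example title is illustrative, and how close a match must count as confirmation is left to your judgment):

```python
import requests

def crossref_lookup(title: str, rows: int = 3) -> list[dict]:
    """Search CrossRef for a title and return candidate matches (title + DOI)."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [
        {"title": (item.get("title") or [""])[0], "doi": item.get("DOI")}
        for item in items
    ]

# Cross-check a Semantic Scholar hit: if no CrossRef candidate matches the title
# closely, treat the citation as unverified and mark it as a placeholder.
for candidate in crossref_lookup("Attention Is All You Need"):
    print(candidate["title"], candidate["doi"])
```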
Step 3: Retrieve BibTeX via DOI
import requests
def doi_to_bibtex(doi: str) -> str:
"""Get verified BibTeX from DOI via CrossRef."""
response = requests.get(
f"https://doi.org/{doi}",
headers={"Accept": "application/x-bibtex"}
)
response.raise_for_status()
return response.text
# Example
bibtex = doi_to_bibtex("10.48550/arXiv.1706.03762")
print(bibtex)
Step 4: Verify Claims
Before citing for a specific claim, access the paper and confirm the attributed claim actually appears.
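One lightweight check is to pull the abstract and confirm it actually discusses the claim before attributing it; the sketch below uses the same semanticscholar client as Step 1, and the paper ID is just an example:

```python
from semanticscholar import SemanticScholar

sch = SemanticScholar()
paper = sch.get_paper("ARXIV:1706.03762")  # example ID: the Transformer paper
print(paper.title)
print(paper.abstract)
# If the abstract does not support the claim, read the full paper (or mark the
# citation as a placeholder) before citing it for that claim.
```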
Step 5: Handle Failures Explicitly
If you cannot verify a citation at ANY step:
% Option 1: Explicit placeholder
\cite{PLACEHOLDER_smith2023_verify} % TODO: Could not verify - scientist must confirm
% Option 2: Note in text
... as shown in prior work [CITATION NEEDED - could not verify Smith et al. 2023].
Always inform the scientist:
"I could not verify the following citations and have marked them as placeholders:
- Smith et al. 2023 on reward hacking - could not find in Semantic Scholar
- Jones 2022 on scaling laws - found similar paper but different authors

Please verify these before submission."
| Situation | Action |
|---|---|
| Found paper, got DOI, fetched BibTeX | ✅ Use the citation |
| Found paper, no DOI | ✅ Use arXiv BibTeX or manual entry from paper |
| Paper exists but can't fetch BibTeX | ⚠️ Mark placeholder, inform scientist |
| Uncertain if paper exists | ❌ Mark [CITATION NEEDED], inform scientist |
| "I think there's a paper about X" | ❌ NEVER cite - search first or mark placeholder |
🚨 NEVER generate BibTeX from memory—always fetch programmatically. 🚨
See references/citation-workflow.md for complete API documentation.
Issue: Abstract too generic
Delete first sentence if it could be prepended to any ML paper. Start with your specific contribution.
Issue: Introduction exceeds 1.5 pages
Split background into Related Work. Front-load contribution bullets. Methods should start by page 2-3.
Issue: Experiments lack explicit claims
Add sentence before each experiment: "This experiment tests whether [specific claim]..."
Issue: Reviewers find paper hard to follow
Apply the reader-expectation principles above: keep subjects close to their verbs, put familiar information before new information, and make each paragraph carry exactly one point.
Issue: Missing statistical significance
Always include: error bars or standard deviations computed over multiple seeds, the number of runs, and the statistical test used for comparisons (see the sketch below).
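A minimal sketch of what that reporting can look like; the seed results are placeholder numbers, and the paired t-test is just one reasonable choice of test:

```python
import numpy as np
from scipy import stats

# Accuracy over 5 seeds for the baseline and the proposed method (placeholder numbers).
baseline = np.array([84.9, 85.4, 85.1, 85.6, 85.0])
ours = np.array([91.8, 92.3, 92.0, 92.4, 91.9])

print(f"Baseline: {baseline.mean():.1f} ± {baseline.std(ddof=1):.1f} (n={len(baseline)})")
print(f"Ours:     {ours.mean():.1f} ± {ours.std(ddof=1):.1f} (n={len(ours)})")

# Paired t-test across seeds; report the test and p-value alongside the table.
t_stat, p_value = stats.ttest_rel(ours, baseline)
print(f"Paired t-test: t={t_stat:.2f}, p={p_value:.4f}")
```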
Reviewers assess papers on four dimensions:
| Criterion | What Reviewers Look For |
|---|---|
| Quality | Technical soundness, well-supported claims |
| Clarity | Clear writing, reproducible by experts |
| Significance | Community impact, advances understanding |
| Originality | New insights (doesn't require new method) |
Scoring (NeurIPS 6-point scale):
See references/reviewer-guidelines.md for detailed reviewer instructions.
Use the booktabs LaTeX package for professional tables:
\usepackage{booktabs}
\begin{tabular}{lcc}
\toprule
Method & Accuracy $\uparrow$ & Latency $\downarrow$ \\
\midrule
Baseline & 85.2 & 45ms \\
\textbf{Ours} & \textbf{92.1} & 38ms \\
\bottomrule
\end{tabular}
Rules: no vertical rules, use only \toprule, \midrule, and \bottomrule, bold the best result in each column, and state units in the column header.
| Document | Contents |
|---|---|
| writing-guide.md | Gopen & Swan 7 principles, Ethan Perez micro-tips, word choice |
| citation-workflow.md | Citation APIs, Python code, BibTeX management |
| checklists.md | NeurIPS 16-item, ICML, ICLR, ACL requirements |
| reviewer-guidelines.md | Evaluation criteria, scoring, rebuttals |
| sources.md | Complete bibliography of all sources |
Templates in templates/ directory: ICML 2026, ICLR 2026, NeurIPS 2025, ACL/EMNLP, AAAI 2026, COLM 2025.
Compiling to PDF:
Use latexmk -pdf main.tex or the pdflatex + bibtex workflow.

See templates/README.md for detailed setup instructions.
Writing Philosophy:
APIs: Semantic Scholar | CrossRef | arXiv