Parses conference paper reviews, classifies concerns by severity/type, builds per-reviewer response strategies, and drafts venue-compliant rebuttals with experiment placeholders.
npx claudepluginhub jeandiable/academic-research-plugin --plugin academic-research

This skill uses the workspace's default tool permissions.
Prepare a grounded, venue-compliant rebuttal for conference paper reviewer feedback. The skill follows a 3-phase pipeline:
1. Parse the paper and reviews, and classify every concern into an issue board with a situation assessment.
2. Build a per-reviewer response strategy with experiment gaps and a character budget.
3. Draft the rebuttal, then lint and stress-test it before producing the paste-ready version.
The skill writes all rebuttal prose except experiment results. For issues requiring new experiments, it writes the surrounding context and inserts [INSERT: ...] placeholders where results should go.
<paper-path> (required): Path to the submitted paper. Can be either a PDF file or a LaTeX source directory (the skill uses main.tex or the first .tex file found).
<reviews-path> (required): Path to a markdown file containing reviews copied from OpenReview/CMT/HotCRP. Reviewer IDs should be preserved (e.g., ## Reviewer 1, ## Reviewer #2, ## R3).
--venue (optional): Target conference. Options: NeurIPS, ICML, CVPR, ACL, AAAI, ICCV, ICLR.
--char-limit (optional, default: venue-specific or 5000): Character limit for the rebuttal. Overrides the venue default. This is the total limit across all reviewer responses.
--plan-only (optional): Stop after Phase 2. Outputs ISSUE_BOARD.md + STRATEGY_PLAN.md + EXPERIMENT_GAPS.md without drafting the rebuttal. Useful for reviewing the strategy before committing to a draft.
--followup (optional): Follow-up round mode. Expects an existing output directory from a prior run. Parses new reviewer comments and generates delta replies only.
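A minimal sketch of how these options could be wired up in a helper script. The argument names mirror the list above; the script itself, and the assumption that every listed venue defaults to 5000 characters, are illustrative rather than part of the skill:

```python
# rebuttal_args.py -- illustrative sketch, not the skill's actual entry point
import argparse

# Assumed venue defaults; only the 5000-character fallback is documented above.
VENUE_CHAR_LIMITS = {"NeurIPS": 5000, "ICML": 5000, "CVPR": 5000,
                     "ACL": 5000, "AAAI": 5000, "ICCV": 5000, "ICLR": 5000}

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="Draft a venue-compliant rebuttal")
    p.add_argument("paper_path", help="PDF file or LaTeX source directory")
    p.add_argument("reviews_path", help="Markdown file with reviews (reviewer IDs preserved)")
    p.add_argument("--venue", choices=sorted(VENUE_CHAR_LIMITS))
    p.add_argument("--char-limit", type=int, default=None,
                   help="Total character limit across all reviewer responses")
    p.add_argument("--plan-only", action="store_true",
                   help="Stop after Phase 2 (issue board + strategy + gaps)")
    p.add_argument("--followup", metavar="OUTPUT_DIR",
                   help="Existing output directory from a prior run")
    args = p.parse_args(argv)
    # --char-limit overrides the venue default; otherwise fall back to 5000.
    if args.char_limit is None:
        args.char_limit = VENUE_CHAR_LIMITS.get(args.venue, 5000)
    return args
```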
Install dependencies:
python -m pip install -r "BASE_DIR/scripts/requirements.txt"
If paper-path is a PDF: read directly with the Read tool (paginate for PDFs over 20 pages).
If paper-path is a directory:
- Start from main.tex or the first .tex file found
- Expand \input{} and \include{} by recursively reading referenced files
- Use the assembled .tex content as the paper text (equations remain as LaTeX source)

Read the entire paper. While reading, compile notes on content that can later be cited as evidence in responses (specific sections, tables, equations, theorems, and results).
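A rough sketch of the \input{}/\include{} expansion step described above, assuming a flat mapping from \input{name} to name.tex relative to the including file (real LaTeX projects can nest paths and name files differently):

```python
import re
from pathlib import Path

INPUT_RE = re.compile(r"\\(?:input|include)\{([^}]+)\}")

def expand_tex(path: Path, seen=None) -> str:
    """Recursively inline files referenced via \\input{} and \\include{}."""
    seen = seen or set()
    if path in seen:          # guard against circular includes
        return ""
    seen.add(path)
    text = path.read_text(errors="ignore")

    def _sub(match):
        name = match.group(1)
        target = (path.parent / name).with_suffix(".tex")
        return expand_tex(target, seen) if target.exists() else match.group(0)

    return INPUT_RE.sub(_sub, text)

# Usage: start from main.tex (or the first .tex file found) in the paper directory.
# paper_text = expand_tex(Path("paper/main.tex"))
```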
Detect reviewer boundaries using patterns like Reviewer 1, Reviewer #1, R1, Reviewer A, or markdown headings (## Reviewer 1). If the format is non-standard, ask the user to clarify reviewer boundaries. Save a verbatim copy of the reviews as REVIEWS_RAW.md in the output directory. For each reviewer, break down their feedback into discrete atomic concerns. Each concern gets the fields below (a minimal record sketch follows the table):
| Field | Description |
|---|---|
| issue_id | Unique ID: R{reviewer}-C{number} (e.g., R1-C1, R2-C3) |
| raw_quote | Verbatim excerpt from the review |
| issue_type | One of: novelty, empirical_support, baseline_comparison, theorem_rigor, assumptions, complexity, clarity, reproducibility, practical_significance, other |
| severity | critical (blocks acceptance), major (significant concern), minor (nice-to-fix) |
| reviewer_stance | Inferred from scores + tone using the lookup table in references/rebuttal_guidelines.md |
| needs_experiment | true if addressing this concern requires new experimental results |
| status | Initially open for all concerns |
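A minimal sketch of the concern record and a reviewer-boundary pattern covering the heading formats listed above. The field names follow the table; the regex and dataclass are illustrative assumptions, not the skill's internal representation:

```python
import re
from dataclasses import dataclass

# Matches "Reviewer 1", "Reviewer #2", "R3", "Reviewer A", "## Reviewer 1", etc.
REVIEWER_HEADING = re.compile(
    r"^\s*(?:#+\s*)?(?:Reviewer\s*#?\s*([0-9A-Za-z]+)|R([0-9]+))\s*$",
    re.MULTILINE,
)

@dataclass
class Concern:
    issue_id: str              # e.g. "R1-C1"
    reviewer: str              # e.g. "R1"
    raw_quote: str             # verbatim excerpt from the review
    issue_type: str            # novelty, empirical_support, ...
    severity: str              # critical | major | minor
    reviewer_stance: str       # inferred from scores + tone
    needs_experiment: bool = False
    status: str = "open"
```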
Scan across all reviewers for overlapping concerns:
- Group concerns by issue_type and semantic similarity

Compute and write a summary to ISSUE_BOARD.md (a small grouping sketch follows the example board):

# Situation Assessment
- Scores: R1 (6/10, lean_accept), R2 (4/10, lean_reject), R3 (5/10, neutral)
- Champions: R1 | Swing voters: R3 | Detractors: R2
- Shared themes: [scalability concerns (R2, R3), missing ablation (R1, R2)]
- Path to acceptance: Convert R3 by addressing scalability + ablation
# Issue Board
| ID | Reviewer | Type | Severity | Quote | Needs Experiment | Status |
|----|----------|------|----------|-------|-----------------|--------|
| R1-C1 | R1 | clarity | minor | "Section 3.2 is hard to follow" | false | open |
| R1-C2 | R1 | empirical_support | major | "Ablation missing for component X" | true | open |
| R2-C1 | R2 | empirical_support | critical | "No comparison with Method Y" | true | open |
| R2-C2 | R2 | novelty | major | "Similar to Z (2024)" | false | open |
| R3-C1 | R3 | complexity | major | "Scalability not demonstrated" | true | open |
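One way the cross-reviewer scan could look, grouping concerns raised by two or more reviewers under the same issue_type. The semantic-similarity half of the grouping is omitted here; it is the part the skill handles with judgment rather than a fixed rule:

```python
from collections import defaultdict

def shared_themes(concerns):
    """Return {issue_type: sorted reviewer IDs} for types raised by 2+ reviewers."""
    by_type = defaultdict(set)
    for c in concerns:
        by_type[c.issue_type].add(c.reviewer)
    return {t: sorted(rs) for t, rs in by_type.items() if len(rs) >= 2}

# e.g. {"empirical_support": ["R1", "R2"], "complexity": ["R2", "R3"]}
```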
For each issue in ISSUE_BOARD.md, assign a response mode using this decision tree (in priority order — prefer the first applicable mode):
Does the reviewer factually misread the paper or miss existing content?
→ direct_clarification — Point to specific section/table/equation they missed
Is there existing evidence in the paper that answers the concern?
→ grounded_evidence — Cite specific numbers, theorems, or results already present
Is this a novelty dispute?
→ nearest_work_delta — Name the closest prior work + exact technical difference
Does the concern require new experimental results to address?
→ additional_experiment — Placeholder in draft, added to EXPERIMENT_GAPS.md
Is the reviewer correct about a limitation?
→ narrow_concession — Acknowledge honestly, then scope the impact narrowly
Is the concern valid but out of scope for this paper?
→ future_work — Commit to future investigation, explain current scope boundary
If multiple modes apply, prefer the one higher in the list (stronger evidence first).
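A compact sketch of the decision tree as a priority-ordered rule list. The boolean fields on each issue (misreads_paper, evidence_in_paper, reviewer_is_correct) are hypothetical inputs the skill would infer while reading the review; the mode strings match the list above:

```python
def assign_response_mode(issue) -> str:
    """Return the first applicable mode, strongest evidence first."""
    if issue.misreads_paper:        # reviewer missed existing content
        return "direct_clarification"
    if issue.evidence_in_paper:     # answer already present in the paper
        return "grounded_evidence"
    if issue.issue_type == "novelty":
        return "nearest_work_delta"
    if issue.needs_experiment:      # requires new experimental results
        return "additional_experiment"
    if issue.reviewer_is_correct:   # genuine limitation
        return "narrow_concession"
    return "future_work"            # valid but out of scope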
For each issue, write 1-2 sentences describing:
Create EXPERIMENT_GAPS.md listing all needs_experiment: true issues:
# Experiment Gaps
| ID | Issue | Experiment Needed | Metric | Satisfies | Priority |
|----|-------|-------------------|--------|-----------|----------|
| R2-C1 | No comparison with Method Y | Run Method Y on datasets A, B | Accuracy, FLOPs | R2-C1, R3-C1 | P0 (blocks acceptance) |
| R1-C2 | Missing ablation for X | Ablate component X | Accuracy delta | R1-C2 | P1 (strengthens case) |
Calculate character allocation based on --char-limit:
Order reviewers by priority: detractors first (most to gain), then swing voters, then champions.
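A rough sense of how the split might be computed. The opener and closing shares match the percentages given in Phase 3 below; the per-reviewer weights are illustrative assumptions, and the example budget further down is what the plan actually records:

```python
def allocate_budget(char_limit, reviewers):
    """reviewers: list of (reviewer_id, weight) in priority order,
    e.g. detractors weighted highest, champions lowest."""
    opener = int(char_limit * 0.12)     # ~10-15% of budget
    closing = int(char_limit * 0.08)    # ~5-10% of budget
    body = char_limit - opener - closing
    total_weight = sum(w for _, w in reviewers)
    per_reviewer = {rid: int(body * w / total_weight) for rid, w in reviewers}
    return {"opener": opener, "closing": closing, **per_reviewer}

# allocate_budget(5000, [("R2", 3), ("R3", 2), ("R1", 1.5)])
# -> roughly {'opener': 600, 'closing': 400, 'R2': 1846, 'R3': 1230, 'R1': 923}
```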
Write STRATEGY_PLAN.md:
# Response Strategy
## Global Themes (for opener)
1. Scalability: addressed by [approach]
2. Missing ablation: [approach]
## Per-Reviewer Strategy
### R2 (lean_reject → target: neutral+)
| ID | Mode | Angle | Priority |
|----|------|-------|----------|
| R2-C1 | additional_experiment | Run comparison with Y on benchmarks A, B; placeholder until results ready | P0 |
| R2-C2 | nearest_work_delta | Clarify 3 key differences from Z (2024): [diff1], [diff2], [diff3] | P1 |
### R3 (neutral → target: lean_accept)
| ID | Mode | Angle | Priority |
|----|------|-------|----------|
| R3-C1 | additional_experiment | Scale-up experiment on dataset C; shares evidence with R2-C1 | P0 |
### R1 (lean_accept → target: champion)
| ID | Mode | Angle | Priority |
|----|------|-------|----------|
| R1-C1 | direct_clarification | Rewrite Section 3.2 intro paragraph for clarity | P2 |
| R1-C2 | additional_experiment | Ablation study for component X | P1 |
## Character Budget
- Opener: ~600 / 5000 chars
- R2: ~1800 chars (2 issues, 1 critical + 1 major)
- R3: ~1200 chars (1 issue, 1 major)
- R1: ~900 chars (2 issues, 0 critical)
- Closing: ~500 chars
- Total: ~5000 / 5000 limit
--plan-only exit point: If set, present ISSUE_BOARD.md + STRATEGY_PLAN.md + EXPERIMENT_GAPS.md to the user and stop.
Otherwise: Present the strategy plan to the user. Ask: "Does this strategy look right? Adjust any response modes, angles, or priorities before I draft the rebuttal." Wait for confirmation before proceeding to Phase 3.
Write REBUTTAL_DRAFT.md following the confirmed strategy plan.
Structure:
Global opener (10-15% of budget)
Per-reviewer responses (75-80% of budget, in priority order). For each issue, follow this pattern:
For additional_experiment issues: write the full surrounding prose but replace results with [INSERT: description of what goes here, e.g., "accuracy comparison between our method and Method Y on datasets A, B (Table format: Method | Dataset A | Dataset B)"].

Closing (5-10% of budget)
(commitments summarized, with pending experiments left as [INSERT: ...] marked placeholders)

Drafting heuristics:
Hard rules:
Never invent experimental numbers; anything not yet measured stays an [INSERT: ...] placeholder.

Run 5 checks on the draft and write results to LINT_REPORT.md:
Coverage check — For every issue in ISSUE_BOARD.md, verify there is a corresponding response in the draft. Flag any missing issues.
Provenance check — For every factual claim in the draft:
For [INSERT: ...] placeholders: verify correct formatting.

Tone check — Flag these problematic patterns:
Consistency check — Verify no contradictions across reviewer replies (e.g., telling R1 "we do X" and R2 "we don't do X")
Character count check — Count exact characters in the draft. If over the limit, compress using this priority:
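Two of these checks (coverage and character count) are mechanical enough to sketch directly, assuming the draft labels each response with its issue ID; the placeholder scan is included because the format is fixed. Everything else here is illustrative:

```python
import re

def lint_coverage(draft: str, issue_ids: list[str]) -> list[str]:
    """Return issue IDs (e.g. 'R2-C1') that never appear in the draft."""
    return [iid for iid in issue_ids if iid not in draft]

def lint_char_count(draft: str, char_limit: int) -> dict:
    """Exact character count versus the venue limit."""
    n = len(draft)
    return {"chars": n, "limit": char_limit, "over_by": max(0, n - char_limit)}

def lint_placeholders(draft: str) -> list[str]:
    """List [INSERT: ...] placeholders so none are silently dropped."""
    return re.findall(r"\[INSERT:[^\]]*\]", draft)
```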
Re-read the entire draft from the perspective of an adversarial meta-reviewer. Systematically check:
Is every result that does not yet exist still an explicit [INSERT: ...]?

Write findings to STRESS_TEST.md with a verdict: safe_to_submit | needs_revision.
If needs_revision: apply minimal grounded fixes (no invented evidence), re-run the lint checks, and produce the final version. Maximum 1 revision round. If still problematic after revision, flag remaining issues to user for manual intervention.
Produce two versions:
PASTE_READY.txt — Strict venue-compliant version
[INSERT: ...] placeholders preserved for the user to fill

REBUTTAL_DRAFT_rich.md — Extended version
Optional sections tagged [OPTIONAL — cut if over limit] for easy trimming

Present to user with:
The list of [INSERT: ...] placeholders that need filling

Follow-up rounds (--followup)

When re-invoked with --followup:
Load state: Read existing output directory (requires ISSUE_BOARD.md, STRATEGY_PLAN.md, REBUTTAL_DRAFT.md at minimum). If directory is missing or incomplete, ask user for the correct path.
Parse new comments: Read the updated reviews file. Identify new reviewer comments that weren't in the original REVIEWS_RAW.md.
Link or create issues: For each new comment:
Draft delta reply: Write responses to new comments only — not a full rewrite. Reference prior rebuttal responses where relevant.
Validate: Re-run lint checks and stress test on the delta reply.
Save: Append to FOLLOWUP_LOG.md with round number and timestamp.
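A simple sketch of the "new comments only" detection from step 2, assuming both review files keep one comment per blank-line-separated paragraph; real review threads usually need fuzzier matching than exact paragraph equality:

```python
def new_comments(original_raw: str, updated_reviews: str) -> list[str]:
    """Paragraphs in the updated reviews file that were not in REVIEWS_RAW.md."""
    seen = {p.strip() for p in original_raw.split("\n\n") if p.strip()}
    return [p.strip() for p in updated_reviews.split("\n\n")
            if p.strip() and p.strip() not in seen]
```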
Follow-up rules:
All outputs are saved to: ./output/rebuttal/YYYY-MM-DD-HHMMSS/
./output/rebuttal/YYYY-MM-DD-HHMMSS/
├── REVIEWS_RAW.md # Verbatim copy of input reviews
├── ISSUE_BOARD.md # Phase 1: classified concerns + situation assessment
├── STRATEGY_PLAN.md # Phase 2: per-reviewer response strategy + character budget
├── EXPERIMENT_GAPS.md # Phase 2: experiments needed with priorities
├── REBUTTAL_DRAFT.md # Phase 3: working draft
├── REBUTTAL_DRAFT_rich.md # Phase 3: extended version with optional sections
├── PASTE_READY.txt # Phase 3: venue-compliant plain text for submission
├── LINT_REPORT.md # Phase 3: automated lint check results
├── STRESS_TEST.md # Phase 3: adversarial self-review findings
└── FOLLOWUP_LOG.md # Follow-up round responses (if --followup)
Tips:
- Passing --venue ensures correct character limits and format
- Run --plan-only first: review the strategy before committing to a full draft, especially for contentious reviews
- Before submitting, replace all [INSERT: ...] markers with actual results
- When an issue is assigned narrow_concession, trust that framing — conceding gracefully is stronger than arguing weakly
- The quality of nearest_work_delta responses depends on knowledge of the field

Related skills:
- paper-reviewing — Generate conference-style reviews (useful for self-review before submission)
- paper-polishing — Get ICML meta-review style feedback on drafts
- citation-assistant — Find and insert missing citations
- literature-survey — Survey related work for novelty defense