Generate publication-ready scientific figures for research papers. Use this skill when:

- Creating any figures for manuscripts (Nature, Science, Cell quality)
- Generating data plots from CSV/Excel/JSON datasets
- Creating conceptual diagrams, flowcharts, or schematics
- Composing multi-panel figures with letter labels (A, B, C, D)
- Visualizing architectures, timelines, or workflows

This skill handles THREE types of figures:

1. DATA PLOTS - Line, scatter, bar, violin, heatmap, etc. from tabular data
2. SCHEMATICS - AI-generated conceptual diagrams and illustrations
3. COMPOSITIONS - Multi-panel figures combining plots and/or schematics

Prerequisites:

- For plots: pandas, matplotlib, seaborn
- For schematics: OPENROUTER_AI_API_KEY environment variable
- For composition: matplotlib or PIL
/plugin marketplace add chc273/claude-automation
/plugin install chc273-academic-research@chc273/claude-automation

This skill inherits all available tools. When active, it can use any tool Claude has access to.
assets/palettes.json
assets/paper.mplstyle
examples/compose_2x2_figure.json
examples/fig_bar_comparison.json
examples/fig_heatmap_corr.json
examples/fig_line_stress_strain.json
examples/fig_multipanel_2x2.json
examples/fig_violin_groups.json
references/plot_catalog.md
references/plot_spec.md
references/qa_checklist.md
references/schematic_guide.md
references/style_guide.md
scripts/figure_compose.py
scripts/generate_image.py
scripts/infer_dataset.py
scripts/plot_cli.py
scripts/validate_figure.py

Generate publication-ready figures for research papers: data plots, conceptual schematics, and multi-panel compositions.
This skill provides three complementary tools:
| Tool | Purpose | Input |
|---|---|---|
| plot_cli.py | Data visualization | CSV/Excel/JSON → plots |
| generate_image.py | Conceptual diagrams | Text prompt → schematic |
| figure_compose.py | Multi-panel figures | Images → composed figure |
skills/scientific-figures/scripts/
├── plot_cli.py # Data plots (seaborn/matplotlib)
├── generate_image.py # AI-generated schematics
├── figure_compose.py # Multi-panel composition
├── validate_figure.py # Quality validation
└── infer_dataset.py # Dataset analysis
# Create PlotSpec and render
python skills/scientific-figures/scripts/plot_cli.py --spec my_plot.json
python skills/scientific-figures/scripts/generate_image.py \
"Create a publication-quality schematic showing [concept]..." \
--output fig1_schematic.png
# Compose panels A, B, C, D into a 2x2 figure
python skills/scientific-figures/scripts/figure_compose.py \
--panels fig1a.png fig1b.png fig1c.png fig1d.png \
--layout 2x2 \
--output Figure1.pdf
### Data Plots (plot_cli.py)

Generate publication-ready plots from tabular data using seaborn/matplotlib.
| Kind | Use Case |
|---|---|
| line | Trends, time series, stress-strain |
| scatter | Correlations, x-y relationships |
| bar | Group comparisons, counts |
| point | Group means with CI/SEM |
| box | Distribution comparisons |
| violin | Distribution shapes |
| swarm | Individual points (small N) |
| strip | Jittered points (larger N) |
| hist | Single distribution |
| kde | Density estimation |
| heatmap | Matrices, correlations |
| regplot | Scatter with regression |
{
"data": [{"id": "main", "path": "data.csv"}],
"figure": {
"layout": {"type": "single", "figsize_in": [3.35, 2.4]},
"panels": [{
"id": "A",
"kind": "line",
"data_id": "main",
"mapping": {"x": "time", "y": "value", "hue": "group"},
"labels": {"xlabel": "Time (s)", "ylabel": "Value (units)"}
}]
},
"output": {"outdir": "outputs/fig1", "basename": "fig1", "formats": ["pdf", "svg", "png"]}
}
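For reference, a minimal sketch (assuming pandas and numpy are installed) of the long-format `data.csv` shape the spec above expects, with columns matching the mapping `x="time"`, `y="value"`, `hue="group"`; the values here are synthetic placeholders:

```python
import numpy as np
import pandas as pd

# Long-format table matching the mapping above: x="time", y="value", hue="group"
rng = np.random.default_rng(0)
rows = [
    {"time": t, "value": 0.5 * t + rng.normal(scale=0.1), "group": g}
    for g in ("control", "treated")
    for t in range(10)
]
pd.DataFrame(rows).to_csv("data.csv", index=False)
```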
# Render from PlotSpec
python skills/scientific-figures/scripts/plot_cli.py --spec figure.json
# Analyze dataset first
python skills/scientific-figures/scripts/infer_dataset.py data.csv --suggest-plots
See references/plot_spec.md for full schema and references/plot_catalog.md for all plot types.
### Schematics (generate_image.py)

Generate AI-powered conceptual diagrams for research papers.
python skills/scientific-figures/scripts/generate_image.py "prompt" --output figure.png
Every prompt MUST include: the concept and layout, a style reference (journal quality), explicit colors, background, label typeface, and an aspect ratio, as in the examples below.
Architecture Diagram:
Create a publication-quality neural network architecture diagram.
Vertical stack: Input layer (bottom), 3 hidden layers (middle), Output layer (top).
Arrows showing data flow between layers.
Style: Nature Machine Intelligence quality, clean boxes with thin borders.
Colors: Input (blue #2166AC), hidden (gray #878787), output (orange #D6604D).
White background, sans-serif labels. Aspect ratio: 4:3.
Timeline:
Create a publication-quality timeline showing evolution of deep learning 2012-2024.
Horizontal timeline with milestones: AlexNet 2012, ResNet 2015, Transformer 2017, GPT-3 2020.
Style: Nature journal, clean, white background.
Color coding by era. Sans-serif labels. Aspect ratio: 3:1.
See references/schematic_guide.md for detailed prompt templates.
### Composition (figure_compose.py)

Combine multiple images into publication-ready composite figures with panel labels.
| Layout | Use Case |
|---|---|
| 1x2 | Two panels side by side |
| 2x1 | Two panels stacked |
| 2x2 | Four-panel grid |
| 1x3 | Three panels in a row |
| 2x3 | Six-panel grid |
| 3x2 | Six panels (3 rows × 2 cols) |
# Basic: auto-labels A, B, C...
python skills/scientific-figures/scripts/figure_compose.py \
--panels fig1a.png fig1b.png fig1c.png fig1d.png \
--layout 2x2 \
--output Figure1.pdf
# Custom labels
python skills/scientific-figures/scripts/figure_compose.py \
--panels schematic.png plot.png \
--layout 1x2 \
--labels "a" "b" \
--output Figure2.pdf
# No labels
python skills/scientific-figures/scripts/figure_compose.py \
--panels img1.png img2.png img3.png \
--layout 1x3 \
--no-labels \
--output Figure3.png
# Custom size
python skills/scientific-figures/scripts/figure_compose.py \
--panels a.png b.png c.png d.png \
--layout 2x2 \
--figsize 7 5 \
--dpi 600 \
--output Figure4.pdf
{
"panels": [
{"path": "fig1a_schematic.png", "label": "A"},
{"path": "fig1b_data.png", "label": "B"},
{"path": "fig1c_results.png", "label": "C"},
{"path": "fig1d_comparison.png", "label": "D"}
],
"layout": {"rows": 2, "cols": 2},
"style": {
"figsize_in": [7, 5],
"dpi": 300,
"label_fontsize": 14,
"label_fontweight": "bold",
"label_position": "upper-left",
"spacing": {"wspace": 0.05, "hspace": 0.05}
},
"output": {"path": "Figure1.pdf"}
}
python skills/scientific-figures/scripts/figure_compose.py --spec compose_spec.json
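Before running the composer, a quick sanity check on the spec file can catch grid mismatches early. This is a sketch assuming the spec fields shown above; `check_compose_spec` is an illustrative helper, not one of the skill's scripts:

```python
import json

def check_compose_spec(path):
    """Quick sanity check before composing: panels must fit the grid."""
    with open(path) as f:
        spec = json.load(f)
    rows, cols = spec["layout"]["rows"], spec["layout"]["cols"]
    n_panels = len(spec["panels"])
    assert n_panels <= rows * cols, f"{n_panels} panels exceed {rows}x{cols} grid"
    return n_panels, rows * cols
```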
CRITICAL: Individual panels must be generated at the correct size/aspect ratio BEFORE composition to ensure proper fit and consistent quality.
When planning a multi-panel figure, calculate individual panel sizes based on:
Formula:
panel_width = (figure_width - (ncols-1) × spacing - 2 × margin) / ncols
panel_height = (figure_height - (nrows-1) × spacing - 2 × margin) / nrows
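The formula can be wrapped in a small helper (a sketch; the 0.3 in spacing and 0.1 in margin defaults follow the worked example later in this section):

```python
def panel_size(fig_w, fig_h, nrows, ncols, spacing=0.3, margin=0.1):
    """Panel width/height in inches for an nrows x ncols grid."""
    w = (fig_w - (ncols - 1) * spacing - 2 * margin) / ncols
    h = (fig_h - (nrows - 1) * spacing - 2 * margin) / nrows
    return round(w, 2), round(h, 2)

print(panel_size(7.0, 6.0, 2, 2))  # 7" x 6" double column, 2x2 -> (3.25, 2.75)
```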
| Final Figure | Layout | Panel Size (inches) | Panel Size (pixels @ 300 DPI) |
|---|---|---|---|
| 7" × 5" (double col) | 1×2 | 3.3 × 4.5 | 990 × 1350 |
| 7" × 5" (double col) | 2×1 | 6.5 × 2.2 | 1950 × 660 |
| 7" × 5" (double col) | 2×2 | 3.3 × 2.2 | 990 × 660 |
| 7" × 6" (double col) | 2×2 | 3.3 × 2.7 | 990 × 810 |
| 7" × 4" (double col) | 1×3 | 2.1 × 3.5 | 630 × 1050 |
| 7" × 6" (double col) | 2×3 | 2.1 × 2.7 | 630 × 810 |
| 3.5" × 5" (single col) | 2×1 | 3.2 × 2.2 | 960 × 660 |
| Panel Content | Recommended Aspect Ratio |
|---|---|
| Standard plot | 4:3 (1.33) |
| Wide timeline | 3:1 |
| Square heatmap | 1:1 |
| Workflow diagram | 16:9 (1.78) |
| Tall schematic | 3:4 (0.75) |
# Target: 7" × 6" double-column figure with 2×2 layout
figure_width = 7.0 # inches
figure_height = 6.0
nrows, ncols = 2, 2
spacing = 0.3 # inches between panels
margin = 0.1 # inches around edge
# Calculate panel dimensions
panel_width = (figure_width - (ncols-1)*spacing - 2*margin) / ncols
# = (7.0 - 0.3 - 0.2) / 2 = 3.25 inches
panel_height = (figure_height - (nrows-1)*spacing - 2*margin) / nrows
# = (6.0 - 0.3 - 0.2) / 2 = 2.75 inches
# Generate each panel at this size:
# - PlotSpec: "figsize_in": [3.25, 2.75]
# - Schematic: specify "Aspect ratio: 1.18:1" (3.25/2.75)
Before composing, ensure each panel:
[ ] Generated at correct dimensions for target layout
[ ] Has consistent DPI (300+ for print)
[ ] Uses matching font sizes across all panels
[ ] Has appropriate margins (not too tight to edges)
[ ] White/transparent background (for seamless composition)
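The first two checklist items can be verified programmatically with Pillow. A sketch under the assumption that panels were rendered at a known DPI; `check_panel` is illustrative, not one of the skill's scripts:

```python
from PIL import Image

def check_panel(path, target_w_in, target_h_in, dpi=300, tol=0.05):
    """Verify a panel image roughly matches the target print size at the given DPI."""
    with Image.open(path) as im:
        w_px, h_px = im.size
    w_in, h_in = w_px / dpi, h_px / dpi
    ok = (abs(w_in - target_w_in) <= tol * target_w_in
          and abs(h_in - target_h_in) <= tol * target_h_in)
    return ok, (round(w_in, 2), round(h_in, 2))
```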
IMPORTANT: After generating any figure, Claude MUST validate the output quality before presenting to the user.
Run the validation script on generated outputs:
python skills/scientific-figures/scripts/validate_figure.py --outdir outputs/fig1/
After generating a figure, Claude should use the Read tool to view the image and check:
[ ] Image is not pixelated or blurry
[ ] Lines are crisp and well-defined
[ ] Text is sharp and readable
[ ] No compression artifacts
[ ] All axis labels present with units
[ ] All text is readable (not too small)
[ ] No overlapping text
[ ] Labels are not cut off at edges
[ ] Font is consistent (sans-serif)
[ ] Elements are well-balanced
[ ] Adequate whitespace
[ ] No elements extending beyond bounds
[ ] Proper alignment of multi-panel figures
[ ] Panel labels (A, B, C) visible and correctly positioned
[ ] Colors match specification
[ ] Sufficient contrast for readability
[ ] Colorblind-safe palette used
[ ] Background is clean (white, no artifacts)
[ ] All requested elements present
[ ] Data appears correctly plotted
[ ] Legend matches data
[ ] Scale/axes appropriate
1. GENERATE figure using appropriate script
2. READ the output image file to visually inspect
3. RUN validate_figure.py for automated checks
4. EVALUATE against checklist above
5. IF issues found:
- Identify specific problems
- Adjust parameters/spec
- RE-GENERATE
- REPEAT validation
6. ONLY present to user when quality confirmed
| Issue | Likely Cause | Fix |
|---|---|---|
| Text too small | Font size not scaled for figure size | Increase label_fontsize or font_scale |
| Blurry output | Low DPI | Set dpi: 600 for line art |
| Cut-off labels | Figure too small or tight bbox | Increase figsize or pad_inches |
| Overlapping legend | Auto-placement failed | Specify legend.loc explicitly |
| Misaligned panels | Inconsistent panel sizes | Pre-calculate sizes (see above) |
| Colors wrong | Palette not applied | Explicitly set palette in spec |
| Pixelated schematic | AI generated low-res | Request "high resolution" in prompt |
Is the figure publication-ready?
│
├─ Text readable at print size?
│ └─ NO → Increase font sizes, regenerate
│
├─ All elements present?
│ └─ NO → Update spec/prompt, regenerate
│
├─ Colors correct & accessible?
│ └─ NO → Fix palette specification, regenerate
│
├─ Layout balanced?
│ └─ NO → Adjust figsize or spacing, regenerate
│
├─ Resolution sufficient (300+ DPI)?
│ └─ NO → Increase DPI setting, regenerate
│
└─ ALL YES → Present to user
Create Figure 1 with: (A) overview schematic, (B) data plot, (C) results plot, (D) comparison
python skills/scientific-figures/scripts/generate_image.py \
"Create a publication-quality overview schematic showing machine learning pipeline for materials discovery. Horizontal flow: Data (left) → Features → Model → Predictions (right). Style: Nature journal, clean, white background, blue-orange palette." \
--output panels/fig1a_overview.png
Create fig1bcd.json:
{
"data": [{"id": "main", "path": "results.csv"}],
"figure": {
"layout": {"type": "grid", "nrows": 1, "ncols": 3, "figsize_in": [9, 3]},
"panels": [
{"id": "B", "kind": "scatter", "data_id": "main", "mapping": {"x": "predicted", "y": "actual"}, "labels": {"xlabel": "Predicted", "ylabel": "Actual"}},
{"id": "C", "kind": "bar", "data_id": "main", "mapping": {"x": "method", "y": "accuracy"}, "labels": {"xlabel": "Method", "ylabel": "Accuracy"}},
{"id": "D", "kind": "violin", "data_id": "main", "mapping": {"x": "group", "y": "error"}, "labels": {"xlabel": "Group", "ylabel": "Error"}}
]
},
"output": {"outdir": "panels", "basename": "fig1bcd", "formats": ["png"]}
}
python skills/scientific-figures/scripts/plot_cli.py --spec fig1bcd.json
# Compose all four panels (filenames below assume plot_cli exported panels B, C, D
# as separate images; adjust paths to match the actual files in panels/)
python skills/scientific-figures/scripts/figure_compose.py \
--panels panels/fig1a_overview.png panels/fig1b.png panels/fig1c.png panels/fig1d.png \
--layout 2x2 \
--figsize 7 6 \
--output Figure1.pdf
python skills/scientific-figures/scripts/validate_figure.py --outdir panels/
| Target | Width | Use |
|---|---|---|
| Single column | 85mm (3.35in) | Most single panels |
| 1.5 column | 114mm (4.5in) | Wide panels |
| Double column | 170mm (6.7in) | Multi-panel figures |
Use colorblind-safe palettes (e.g., seaborn's colorblind palette).

| Format | Use |
|---|---|
| PDF | Primary vector (print) |
| SVG | Editable vector |
| PNG | Raster (≥300 DPI) |
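A minimal matplotlib sketch of this multi-format export convention (single-column size at 85 mm, 300 DPI, tight bounding box):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted export
import matplotlib.pyplot as plt

# Single-column figure (85 mm ≈ 3.35 in) exported as vector and raster
fig, ax = plt.subplots(figsize=(3.35, 2.4))
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_xlabel("Time (s)")
ax.set_ylabel("Value (units)")
for ext in ("pdf", "svg", "png"):
    fig.savefig(f"fig_export.{ext}", dpi=300, bbox_inches="tight")
```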
IMPORTANT: Every figure generated MUST have an accompanying description file saved alongside it. This is critical for drafting manuscript captions and for reproducibility.
For every figure, Claude MUST save:
| File | Content |
|---|---|
| {basename}.png/pdf | The figure image |
| {basename}_description.md | Full description (see template below) |
After generating any figure (plot, schematic, or composition), Claude MUST create a {basename}_description.md file with this structure:
# Figure: {Title}
## Caption (for manuscript)
{One paragraph suitable for a journal figure caption. Include: what is shown,
key visual encodings, sample sizes, statistical measures, and main finding.}
## Panel Descriptions
{For multi-panel figures, describe each panel A, B, C, D...}
### Panel A: {Title}
- **Type**: {plot type or "AI-generated schematic"}
- **Content**: {what is visualized}
- **X-axis**: {variable (units)}
- **Y-axis**: {variable (units)}
- **Color/Hue**: {what color encodes}
- **Key finding**: {main takeaway}
## Technical Details
- **Data source**: {filename or "AI-generated"}
- **Dimensions**: {width × height in inches}
- **Resolution**: {DPI}
- **Output formats**: {PDF, PNG, SVG}
## Prompt Used (for schematics only)
{The exact prompt used to generate AI schematics, for reproducibility}
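A small helper sketch for writing the description file, abridged to the title and caption sections; `write_description` is illustrative, not part of the skill's scripts, and should be extended with the panel descriptions and technical details from the template above:

```python
from pathlib import Path

def write_description(basename, title, caption, outdir="."):
    """Write the mandatory {basename}_description.md next to the figure."""
    text = (
        f"# Figure: {title}\n\n"
        f"## Caption (for manuscript)\n"
        f"{caption}\n"
    )
    path = Path(outdir) / f"{basename}_description.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)
    return path
```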
# Figure: Panel B - Method Accuracy Comparison
## Caption (for manuscript)
Classification accuracy comparison across deep learning methods (Baseline, CNN,
Transformer, Hybrid) evaluated on ImageNet and CIFAR-100 datasets. Error bars
indicate 95% confidence intervals (n=3 independent runs per condition). The
Hybrid CNN-Transformer model achieves significantly higher accuracy than other
methods on both datasets.
## Panel Descriptions
### Panel B: Accuracy Bar Chart
- **Type**: Grouped bar plot with error bars
- **Content**: Mean accuracy per method and dataset
- **X-axis**: Method (categorical)
- **Y-axis**: Accuracy (0-1 scale)
- **Color/Hue**: Dataset (blue=ImageNet, orange=CIFAR-100)
- **Error bars**: 95% CI
- **Key finding**: Hybrid model achieves ~94% accuracy on ImageNet
## Technical Details
- **Data source**: results.csv (24 observations)
- **Dimensions**: 3.3 × 2.7 inches
- **Resolution**: 300 DPI
- **Output formats**: PNG
# Figure: Panel A - Model Architecture
## Caption (for manuscript)
Architecture of the Hybrid CNN-Transformer model for image classification,
showing data flow from input image (224×224 RGB) through CNN backbone
(3 convolutional blocks), patch embedding, transformer encoder (6 blocks
with multi-head self-attention), to classification head. Dashed line
indicates skip connection from CNN features to transformer output.
## Panel Descriptions
### Panel A: Architecture Schematic
- **Type**: AI-generated conceptual diagram
- **Content**: Neural network architecture visualization
- **Layout**: Vertical stack, bottom-to-top data flow
- **Color coding**: Blue (input), Gray (CNN), Orange (Transformer), Green (output)
- **Key elements**: CNN blocks, patch embedding, transformer encoder, skip connection
## Technical Details
- **Data source**: AI-generated (Gemini 3 Pro Image)
- **Dimensions**: 3.3 × 2.7 inches
- **Resolution**: Native AI output
- **Output formats**: PNG
## Prompt Used
Create a publication-quality architecture diagram showing a Hybrid CNN-Transformer
model for image classification. Layout: Vertical stack, bottom-to-top data flow...
[full prompt here]
# Figure 1: Hybrid CNN-Transformer Performance Analysis
## Caption (for manuscript)
**Figure 1. Performance analysis of the Hybrid CNN-Transformer architecture.**
(A) Model architecture showing CNN backbone, patch embedding, and transformer
encoder with skip connections. (B) Classification accuracy across methods on
ImageNet and CIFAR-100 (error bars: 95% CI, n=3). (C) F1 score distributions
showing improved consistency with advanced architectures. (D) Accuracy vs.
training time trade-off demonstrating the Hybrid model's efficiency advantage.
## Panel Descriptions
### Panel A: Architecture Schematic
[description...]
### Panel B: Accuracy Comparison
[description...]
### Panel C: F1 Distribution
[description...]
### Panel D: Efficiency Trade-off
[description...]
## Technical Details
- **Layout**: 2×2 grid
- **Dimensions**: 7 × 6 inches (double column)
- **Resolution**: 300 DPI
- **Output formats**: PDF, PNG
1. GENERATE figure (plot/schematic/composition)
2. VALIDATE quality (visual inspection + validate_figure.py)
3. WRITE description file ← MANDATORY, do not skip!
4. Present both image AND description to user
| File | Purpose |
|---|---|
| references/plot_spec.md | PlotSpec JSON schema |
| references/plot_catalog.md | All plot types with examples |
| references/style_guide.md | Typography, colors, sizing |
| references/schematic_guide.md | Schematic prompt templates |
| references/qa_checklist.md | Quality assurance checklist |
| assets/paper.mplstyle | Matplotlib style file |
| assets/palettes.json | Color palette definitions |
# For AI-generated schematics
export OPENROUTER_AI_API_KEY="your_api_key"
# Core (for plots)
pip install pandas matplotlib seaborn
# For image composition
pip install pillow
# Optional (for Parquet files)
pip install pyarrow