From workflows
Use when working with jupytext — converting notebooks to/from text formats, syncing paired .ipynb/.py files, multi-kernel projects (Python/R/Stata/SAS), or executing notebooks via papermill.
```bash
npx claudepluginhub edwinhu/workflows --plugin workflows
```

This skill uses the workspace's default tool permissions.
- [Execution Enforcement](#execution-enforcement)
Jupytext converts Jupyter notebooks to/from text formats (.py, .R, .md), enabling version control and multi-kernel workflows.
Before claiming ANY jupytext script executed successfully, follow this sequence:
```bash
jupytext --to notebook --output - script.py | papermill - output.ipynb
```

This is non-negotiable. Skipping papermill execution is NOT HELPFUL — the user gets a notebook that fails on first run.
| Excuse | Reality | Do Instead |
|---|---|---|
| "I converted to ipynb, so it works" | Conversion ≠ execution | EXECUTE with papermill, not just convert |
| "The .py file looks correct" | Syntax correctness ≠ runtime correctness | RUN and CHECK outputs |
| "I'll let the user execute it" | You're passing broken code | VERIFY before claiming completion |
| "Just a conversion task, no execution needed" | User expects working notebook | EXECUTE to confirm it works |
| "I can use jupyter nbconvert --execute" | Papermill has better error handling | USE the recommended papermill pipeline |
| "I'll save the intermediate ipynb first" | Creates clutter | USE the recommended pipeline (no intermediate files) |
| "Exit code 0 means success" | Papermill can succeed with errors in cells | CHECK output.ipynb for tracebacks |
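The last row can be automated: scan the executed notebook for error outputs instead of trusting the exit code. A minimal stdlib-only sketch (`find_tracebacks` is a hypothetical helper, not part of papermill):

```python
import json

def find_tracebacks(nb):
    """nb: parsed notebook dict (json.load of an .ipynb file).
    Returns (cell_index, ename, evalue) for every error output."""
    return [
        (i, out.get("ename"), out.get("evalue"))
        for i, cell in enumerate(nb.get("cells", []))
        for out in cell.get("outputs", [])
        if out.get("output_type") == "error"
    ]

# Usage after the pipeline above:
#   with open("output.ipynb") as f:
#       errors = find_tracebacks(json.load(f))
#   assert not errors, f"cells raised: {errors}"
```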
Before EVERY "notebook works" claim:
Conversion:
Execution (MANDATORY):
```bash
jupytext --to notebook --output - script.py | papermill - output.ipynb
```

Output Verification:
Multi-Kernel Projects (if applicable):
Only after ALL checks pass:
Follow this sequence for EVERY jupytext task involving execution:
1. CONVERT → jupytext --to notebook --output -
2. EXECUTE → papermill - output.ipynb (with params if needed)
3. CHECK → Verify exit code and stderr
4. INSPECT → Use notebook-debug verification
5. VERIFY → Outputs match expectations
6. CLAIM → "Notebook works" only after all gates passed
NEVER skip the execution gate. Converting without executing proves nothing about correctness.
This is not just format conversion - verify that the notebook executes correctly. The user expects a working notebook, not just syntactically valid code.
Use percent format (py:percent) for all projects:
```python
# %% [markdown]
# # Analysis Title

# %%
import pandas as pd

df = pd.read_csv("data.csv")

# %% tags=["parameters"]
input_file = "data.csv"
```
Cell markers: `# %%` for code, `# %% [markdown]` for markdown.
Markdown dollar signs: always wrap `$` in backticks to prevent LaTeX rendering: write ``# Cost: `$`50``, not `# Cost: $50`.
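To see how the markers delimit cells, here is an illustrative splitter, not jupytext's real parser, that breaks a percent-format script into (cell_type, source) pairs:

```python
import re

def split_percent(text):
    """Illustrative percent-format splitter: each '# %%' line starts
    a new cell; '# %% [markdown]' marks a markdown cell."""
    cells = []
    for line in text.splitlines():
        m = re.match(r"#\s*%%\s*(\[markdown\])?", line)
        if m:
            cells.append(["markdown" if m.group(1) else "code", []])
        elif cells:
            cells[-1][1].append(line)
    return [(kind, "\n".join(body).strip()) for kind, body in cells]
```

Calling `split_percent` on the example above yields a markdown cell with the title, a code cell with the pandas lines, and a code cell with the parameters assignment.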
Create jupytext.toml in project root:
```toml
formats = "ipynb,py:percent"
notebook_metadata_filter = "-all"
cell_metadata_filter = "-all"
```
```bash
# Convert notebook to percent-format Python file
jupytext --to py:percent notebook.ipynb

# Convert Python script to Jupyter notebook format
jupytext --to notebook script.py

# Enable bidirectional pairing to keep formats synchronized
jupytext --set-formats ipynb,py:percent notebook.ipynb

# Synchronize paired notebook and text file
jupytext --sync notebook.ipynb
```
Always pipe to papermill for execution - no intermediate files:
```bash
# Convert script to notebook and execute in one atomic operation
jupytext --to notebook --output - script.py | papermill - output.ipynb

# Convert and execute with parameter injection
jupytext --to notebook --output - script.py | papermill - output.ipynb -p start_date "2024-01-01" -p n_samples 1000

# Convert and execute with detailed logging output
jupytext --to notebook --output - script.py | papermill - output.ipynb --log-output

# Convert and execute in memory without saving intermediate files
jupytext --to notebook --output - script.py | papermill - -
```
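For context on what `-p` does: papermill locates the cell tagged `parameters` and injects a cell of overriding assignments directly after it. A simplified sketch of that mechanism on a parsed notebook dict (assumption: real papermill also records parameter metadata and handles non-Python kernels):

```python
def inject_parameters(nb, params):
    """Insert an 'injected-parameters' code cell right after the cell
    tagged 'parameters', so its assignments override the defaults."""
    new_cell = {
        "cell_type": "code",
        "metadata": {"tags": ["injected-parameters"]},
        "source": "\n".join(f"{k} = {v!r}" for k, v in params.items()),
        "outputs": [],
        "execution_count": None,
    }
    for i, cell in enumerate(nb["cells"]):
        if "parameters" in cell.get("metadata", {}).get("tags", []):
            nb["cells"].insert(i + 1, new_cell)
            return nb
    nb["cells"].insert(0, new_cell)  # no tagged cell: prepend
    return nb
```

This is why the `# %% tags=["parameters"]` cell in the percent format matters: without it, papermill prepends the overrides to the top of the notebook.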
Key flags:

- `--output -` tells jupytext to write to stdout
- `papermill - output.ipynb` reads from stdin, writes to a file
- `papermill - -` reads from stdin, writes to stdout (for inspection)

Why this pattern: no intermediate .ipynb files cluttering the workspace.

After execution, use the notebook-debug skill to inspect tracebacks in the output ipynb.
Share data between Python/R/Stata/SAS via files:
| Route | Format | Write | Read |
|---|---|---|---|
| Python -> R | Parquet | `df.to_parquet()` | `arrow::read_parquet()` |
| Python -> Stata | DTA | `df.to_stata()` | `use "file.dta"` |
| Any -> Any | CSV | Native | Native |
| SQL queries | DuckDB | Query parquet directly | Query parquet directly |
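For the CSV route, the handoff is dependency-free on both sides; a minimal Python writer (the file name `results.csv` is just an example):

```python
import csv

def write_interchange(path, rows, fieldnames):
    """Write rows (list of dicts) as CSV for consumption by R/Stata/SAS.
    Plain CSV keeps the handoff dependency-free on both sides."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

write_interchange(
    "results.csv",
    [{"id": 1, "value": 3.14}, {"id": 2, "value": 2.72}],
    ["id", "value"],
)
# R side: read.csv("results.csv")
```

Prefer Parquet when dtypes matter: CSV round-trips everything as text, so the reading side must re-infer types.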
```
Python (prep) -> Parquet -> R (stats) -> Parquet -> Python (report)
                                |
                                v
                        Stata (.dta) -> Econometrics
```
Add the following to .pre-commit-config.yaml:
```yaml
repos:
  - repo: https://github.com/mwouts/jupytext
    rev: v1.16.0
    hooks:
      - id: jupytext
        args: [--sync] # Synchronize paired formats before commit
```
Choose one approach:

- Commit both paired formats (.ipynb and .py)
- Commit text only (add *.ipynb to .gitignore) for minimal repository size

Configure editors for automatic synchronization:
Standard multi-kernel project layout:
```
project/
├── jupytext.toml          # Project-wide settings
├── environment.yml        # Conda env with all kernels
├── notebooks/
│   ├── 01_python_prep.py  # Python percent format
│   ├── 02_r_analysis.R    # R percent format
│   └── 03_stata_models.do # Stata script
├── data/
│   ├── raw/
│   └── processed/         # Parquet/DTA interchange files
└── results/
```
Specify kernel in file header:
```python
# ---
# jupyter:
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %% [markdown]
# # Python Analysis
```
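The header is plain YAML behind `# ` comment prefixes, so a sync script can check the declared kernel without invoking jupytext. An illustrative stdlib-only extractor (`kernel_name` is a hypothetical helper, not a full YAML parser):

```python
def kernel_name(text):
    """Extract the kernelspec 'name:' value from a jupytext
    percent-format header (the '# ---' ... '# ---' block)."""
    in_header = False
    in_kernelspec = False
    for line in text.splitlines():
        stripped = line.lstrip("# ").rstrip()
        if stripped == "---":
            if in_header:
                break  # end of header, no kernelspec found
            in_header = True
        elif in_header:
            if stripped.startswith("kernelspec:"):
                in_kernelspec = True
            elif in_kernelspec and stripped.startswith("name:"):
                return stripped.split(":", 1)[1].strip()
    return None
```

On the header above, `kernel_name` returns `"python3"`; a file with no header returns `None`.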
| Issue | Solution |
|---|---|
| Sync conflict | Delete .ipynb, regenerate from .py |
| Wrong kernel | Add kernelspec header to .py file |
| Metadata noise | Set notebook_metadata_filter = "-all" |
| Cell order lost | Use percent format (preserves structure) |
Detailed patterns and configurations:
- `references/formats.md` - All format specifications (percent, light, sphinx, myst, rmd, quarto), cell metadata, configuration options
- `references/kernels.md` - Kernel setup (IRkernel, xeus-r, stata_kernel, pystata, saspy), environment configuration, troubleshooting
- `references/data-sharing.md` - Cross-kernel data sharing patterns (parquet, dta, csv, duckdb), full pipeline examples, validation patterns

Working code in `examples/`:

- `examples/python_analysis.py` - Python percent-format template with common patterns
- `examples/r_analysis.R` - R percent-format template for statistical analysis
- `examples/cross_kernel_pipeline.py` - Multi-kernel data sharing example

Utility scripts in `scripts/`:

- `scripts/init_project.sh` - Initialize jupytext project with standard structure
- `scripts/sync_all.sh` - Sync all paired notebooks in project