Prepare organized packages of project files for sharing at different levels, from summary PDFs to fully reproducible archives. Creates copies with cleaned notebooks, documentation, and appropriate file selection. After creating a sharing package, all work continues in the main project directory.
Expert guidance for preparing project outputs for sharing with collaborators, reviewers, or repositories. Creates organized packages at different sharing levels while preserving your working directory.
Supporting files in this directory: notebook-streamlining.md, quality-assurance.md, common-scenarios.md, and cleanup-and-deprecation.md (each referenced below).
**Level 1: Summary Only**

Purpose: Quick sharing for presentations, reports, or high-level review

What to include:
- Final notebook exported as PDF
- Final figures and tables
- A brief README.md

Use when: the audience needs results, not code or data (presentations, reports, high-level review).

Structure:

```
shared-summary/
├── README.md                  # Brief overview
├── analysis-YYYY-MM-DD.pdf    # Notebook as PDF
└── results/
    ├── figures/
    │   ├── fig1-main-result.png
    │   └── fig2-comparison.png
    └── tables/
        └── summary-statistics.csv
```
**Level 2: Reproducible**

Purpose: Enable others to reproduce your analysis from processed data

What to include:
- Processed data (`data/processed/`)
- Notebooks (`notebooks/`) with outputs cleared
- Scripts (`scripts/`)
- Environment file (`environment.yml` or `requirements.txt`)
- Documentation (`README.md`, `MANIFEST.md`)

Use when: collaborators or reviewers need to rerun the analysis themselves.

Structure:

For standard project structures, see the folder-organization skill.

```
shared-reproducible/
├── README.md          # Setup and reproduction instructions
├── MANIFEST.md        # File descriptions
├── environment.yml    # Conda environment OR requirements.txt
├── notebooks/         # Cleaned notebooks
├── scripts/           # Standalone scripts
└── data/
    └── processed/     # Analysis-ready data
```
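The copy step for a reproducible package can be scripted. Below is a minimal standard-library sketch assuming the layout above; the function name and directory names are illustrative, not part of this skill's tooling:

```python
import shutil
from datetime import date
from pathlib import Path

def build_reproducible_package(project_root=".", out_root=None):
    """Copy reproducible-level files into a dated share directory."""
    root = Path(project_root)
    out = Path(out_root or f"shared-{date.today():%Y%m%d}-reproducible")
    out.mkdir(parents=True, exist_ok=True)

    # Top-level files; silently skip any this project doesn't have
    for name in ("README.md", "MANIFEST.md", "environment.yml", "requirements.txt"):
        if (root / name).exists():
            shutil.copy2(root / name, out / name)

    # Directories included at the reproducible level
    for sub in ("notebooks", "scripts", "data/processed"):
        src = root / sub
        if src.is_dir():
            shutil.copytree(src, out / sub, dirs_exist_ok=True)
    return out
```

Copy the notebooks through a cleaning step (see below) rather than verbatim if they still contain outputs.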
**Level 3: Full Traceability**

Purpose: Complete transparency from raw data through all processing steps

What to include: the complete data hierarchy (raw, intermediate, processed), all scripts and notebooks, all results, and full documentation.

Use when: archival, compliance review, or publication requires full provenance.

Structure:

For standard project structures, see the folder-organization skill.

```
shared-complete/
├── README.md          # Complete project guide
├── MANIFEST.md        # Comprehensive file listing
├── environment.yml
├── data/
│   ├── raw/           # Original, unmodified data
│   ├── intermediate/  # Processing steps
│   └── processed/     # Final analysis-ready data
├── scripts/           # All processing scripts
├── notebooks/         # All notebooks (exploratory + final)
├── results/           # All outputs
│   ├── figures/
│   ├── tables/
│   └── supplementary/
└── documentation/     # Complete documentation
    ├── methods.md
    ├── changelog.md
    └── data-dictionary.md
```
Questions to determine level:
Which sharing level do you need?
1. Summary Only - PDF + final results (quick sharing)
2. Reproducible - Notebooks + scripts + data (standard sharing)
3. Full Traceability - Everything from raw data (archival/compliance)
Additional questions:
- Who is the audience? (colleagues, reviewers, public)
- Are there size constraints?
- Any sensitive data to handle?
- Timeline for sharing?
Level 1 - Summary: export the final notebook to PDF and copy final figures and tables.
Level 2 - Reproducible: add cleaned notebooks, standalone scripts, the environment file, and processed data.
Level 3 - Full: copy the complete data hierarchy, all notebooks and scripts, results, and documentation.
```bash
# Create dated directory
SHARE_DIR="shared-$(date +%Y%m%d)-[level]"
mkdir -p "$SHARE_DIR"
```
For notebooks (`.ipynb`):

```python
import nbformat
from nbconvert.preprocessors import ClearOutputPreprocessor

def clean_notebook(input_path, output_path):
    """Clean notebook: clear outputs, remove debug cells."""
    with open(input_path, 'r') as f:
        nb = nbformat.read(f, as_version=4)

    # Clear all cell outputs
    clear_output = ClearOutputPreprocessor()
    nb, _ = clear_output.preprocess(nb, {})

    # Drop cells tagged 'debug' or 'remove'
    nb.cells = [cell for cell in nb.cells
                if 'debug' not in cell.metadata.get('tags', [])
                and 'remove' not in cell.metadata.get('tags', [])]

    with open(output_path, 'w') as f:
        nbformat.write(nb, f)
```
For data files: Copy as-is for small files; compress large files; check for sensitive information.
For scripts: Remove debugging code; add docstrings if missing; ensure paths are relative.
For notebook streamlining and the abridge option, see notebook-streamlining.md.
Problem: Notebooks and scripts with broken file paths will fail when shared.
For complete path verification procedures, automated checking scripts, and correction patterns, see the folder-organization skill.
| Breaks when shared | Works when shared |
|---|---|
| `/Users/yourname/project/data.csv` | `data/data.csv` |
| `C:\Users\yourname\project\fig.png` | `figures/fig.png` |
| `/absolute/path/to/results/` | `results/` |
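The correction pattern is usually mechanical: strip the machine-specific prefix and keep the project-relative part. A small sketch (the paths are hypothetical):

```python
from pathlib import PurePosixPath

# A machine-specific path captured in a notebook
abs_path = PurePosixPath("/Users/yourname/project/data/data.csv")
project_root = PurePosixPath("/Users/yourname/project")

# Keep only the project-relative portion
rel = abs_path.relative_to(project_root)
print(rel)  # data/data.csv
```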
Quick check commands:

```bash
# Check for absolute paths in notebooks
grep -l "/Users/" *.ipynb
grep -l "C:\\\\" *.ipynb
```
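The same check can be scripted in Python to cover both scripts and notebooks in one pass. The prefixes below are illustrative, not an exhaustive list:

```python
import re
from pathlib import Path

# Prefixes that suggest machine-specific absolute paths (illustrative)
ABS_PATH = re.compile(r'/Users/|/home/|[A-Za-z]:\\')

def find_absolute_paths(root="."):
    """Return (file, line_number, text) for lines containing suspect paths."""
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in {".py", ".ipynb"}:
            continue
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if ABS_PATH.search(line):
                hits.append((path, i, line.strip()))
    return hits
```

An empty result means no known prefixes matched, not that every path is relative, so spot-check manually as well.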
README.md template:

````markdown
# Project: [Project Name]

**Date:** YYYY-MM-DD
**Author:** [Your Name]
**Sharing Level:** [Summary/Reproducible/Full]

## Overview

Brief description of the project and analysis.

## Contents

See MANIFEST.md for detailed file descriptions.

## Requirements

[For Reproducible/Full levels]

- Python 3.X
- See environment.yml for dependencies

## Setup

```bash
conda env create -f environment.yml
conda activate project-name
```

## Reproduction Steps

[For Reproducible/Full levels]

1. [Description of first step]

## Data Sources

[For Full level]

- Dataset A: [Source, download date, version]

## Contact

[Your email or preferred contact]

## License

[If applicable - e.g., CC BY 4.0, MIT]
````
MANIFEST.md template:

````markdown
# File Manifest

Generated: YYYY-MM-DD

## File Descriptions

### Notebooks

- `notebooks/01-data-processing.ipynb` - Initial data loading and cleaning
- `notebooks/02-analysis.ipynb` - Main statistical analysis

### Data

- `data/processed/cleaned_data.csv` - Quality-controlled dataset (N=XXX samples)

### Scripts

- `scripts/generate_figures.py` - Automated figure generation

### Results

- `results/figures/fig1-main.png` - Main result showing [description]
````
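A first-draft manifest can be generated automatically and then annotated by hand with per-file descriptions. A standard-library sketch (the function name is illustrative):

```python
from datetime import date
from pathlib import Path

def write_manifest(package_dir, out_name="MANIFEST.md"):
    """Write a MANIFEST.md skeleton listing every file with its size."""
    pkg = Path(package_dir)
    lines = ["# File Manifest", "", f"Generated: {date.today():%Y-%m-%d}", ""]
    for f in sorted(pkg.rglob("*")):
        if f.is_file() and f.name != out_name:
            # One bullet per file; replace sizes with descriptions by hand
            lines.append(f"- `{f.relative_to(pkg)}` - {f.stat().st_size} bytes")
    (pkg / out_name).write_text("\n".join(lines) + "\n")
```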
Check for: PII, access credentials, proprietary data, institutional restrictions, patient/subject identifiers.
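Automated scanning can flag the obvious cases before a manual review. The patterns below are illustrative only, and a clean scan does not mean the package is safe to share:

```python
import re
from pathlib import Path

# Illustrative patterns; a real review still needs human judgment
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_sensitive(package_dir):
    """Return (file, pattern_label) pairs for files matching any pattern."""
    findings = []
    for f in Path(package_dir).rglob("*"):
        if not f.is_file():
            continue
        try:
            text = f.read_text(errors="ignore")
        except OSError:
            continue
        for label, pat in PATTERNS.items():
            if pat.search(text):
                findings.append((f, label))
    return findings
```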
Strategies:
```bash
# For smaller packages (<100MB)
zip -r shared-YYYYMMDD.zip shared-YYYYMMDD/

# For larger packages
tar -czf shared-YYYYMMDD.tar.gz shared-YYYYMMDD/
```
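If you prefer staying in Python, `shutil.make_archive` covers both formats. A small sketch (the function name is illustrative):

```python
import shutil
from pathlib import Path

def archive_package(package_dir, fmt="zip"):
    """Archive a sharing package; use fmt='gztar' for larger packages."""
    pkg = Path(package_dir)
    # The archive lands next to the package, named after it
    return shutil.make_archive(str(pkg), fmt, root_dir=pkg.parent, base_dir=pkg.name)
```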
IMPORTANT: After creating the sharing package, always work in the main project directory.
The sharing folder is a snapshot for distribution only. Any future development, analysis, or modifications should happen in your original working directory.
```bash
cd /path/to/main/project   # Return to working directory
pwd                        # Verify location
# Continue work here, NOT in shared-YYYYMMDD/
```
Works well with the folder-organization and notebook-streamlining skills referenced above. Before using this skill, make sure project files are organized and paths are relative; after using it, return to the main project directory and continue work there.
For quality assurance procedures and checklists, see quality-assurance.md. For common sharing scenarios and example scripts, see common-scenarios.md. For handling cleanup mistakes and notebook deprecation, see cleanup-and-deprecation.md.
Key principles for project sharing:
- Match the sharing level to the audience
- Clear notebook outputs and use relative paths
- Document contents with README.md and MANIFEST.md
- Check for sensitive data before packaging
- Keep all future work in the main project directory
Remember: Good sharing practices benefit both collaborators and your future self!
After creating any sharing package:
- Treat `shared-*/` directories as read-only snapshots

If the user asks to modify files, always check the current directory and ensure you're working in the main project location, not in a sharing package.