From compound-science
Execute research implementation plans efficiently while maintaining estimation quality and finishing features
```bash
npx claudepluginhub james-traina/science-plugins --plugin compound-science
```
**Pipeline mode:** This command operates fully autonomously. All decisions are made automatically.
Execute a research implementation plan systematically. The focus is on shipping complete, reproducible research code by understanding requirements quickly, following existing patterns, and maintaining estimation quality throughout.
<input_document> #$ARGUMENTS </input_document>
If no input document is provided: Look for the most recent plan in docs/plans/ and use it. If no plans exist, state "No plan found. Run /workflows:plan first." and stop.
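The fallback lookup can be sketched in Python. This is a minimal sketch, assuming plans are `*.md` files under `docs/plans/`; `find_latest_plan` is a hypothetical helper name, not part of the plugin.

```python
from pathlib import Path
from typing import Optional

def find_latest_plan(plans_dir: str = "docs/plans") -> Optional[Path]:
    """Return the most recently modified plan file, or None if none exist."""
    # glob() on a missing directory simply yields nothing, so no existence check needed
    plans = sorted(Path(plans_dir).glob("*.md"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return plans[0] if plans else None

plan = find_latest_plan()
if plan is None:
    print("No plan found. Run /workflows:plan first.")
```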
Read Plan
Setup Environment
First, detect the project environment:
```bash
# Detect estimation language
if [ -f "requirements.txt" ] || [ -f "setup.py" ] || [ -f "pyproject.toml" ]; then
  echo "LANG=python"
elif [ -f "DESCRIPTION" ] || [ -f "renv.lock" ] || [ -f ".Rprofile" ]; then
  echo "LANG=R"
elif [ -f "Project.toml" ]; then
  echo "LANG=julia"
elif ls *.do >/dev/null 2>&1; then
  echo "LANG=stata"
fi

# Detect pipeline tools
ls Makefile Snakefile dvc.yaml 2>/dev/null
```
Then check the current branch:
```bash
current_branch=$(git branch --show-current)
default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
if [ -z "$default_branch" ]; then
  default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
fi
```
If already on a feature branch (not the default branch): continue working on it; no new branch is needed.
If on the default branch:
Option A: Create a new branch (default)
```bash
git pull origin "$default_branch"
git checkout -b <branch-name-from-plan>
```
Use a meaningful name derived from the plan (e.g., feat/callaway-santanna-did, fix/blp-convergence).
Option B: Use a worktree (for parallel estimation runs)
See references/worktree-patterns.md if the plan involves parallel workstreams or the user has multiple active branches.
Automatically choose Option A unless the plan explicitly mentions parallel workstreams.
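Deriving the branch name from the plan title can be sketched as a small slug helper. `branch_name_from_plan` is illustrative, not part of the workflow; the prefix and slug rules are assumptions.

```python
import re

def branch_name_from_plan(title: str, prefix: str = "feat") -> str:
    """Slugify a plan title into a conventional branch name."""
    # Collapse anything that is not a lowercase letter or digit into a hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{prefix}/{slug}"

print(branch_name_from_plan("Callaway-Sant'Anna DiD"))  # feat/callaway-sant-anna-did
```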
Activate Research Environment
Read compound-science.local.md for environment configuration. Then activate:
Python:
```bash
# Activate virtual environment
if [ -d ".venv" ]; then source .venv/bin/activate
elif [ -d "venv" ]; then source venv/bin/activate
elif command -v conda &>/dev/null; then conda activate "$(basename "$PWD")"
fi

# Verify key packages
python -c "import numpy, scipy, pandas; print('Core packages OK')"
```
R:
```bash
# Check renv status
Rscript -e "if (file.exists('renv.lock')) renv::status()"
```
Verify data paths:
```bash
# Check that referenced data files exist
ls data/ 2>/dev/null | head -5
```
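A stricter check than listing the directory is to confirm that every `data/` path mentioned in the plan actually exists. A sketch, where the `data/...` path pattern and the `missing_data_paths` helper are assumptions for illustration:

```python
import re
from pathlib import Path
from typing import List

def missing_data_paths(plan_text: str, root: str = ".") -> List[str]:
    """Return data/... paths referenced in the plan that do not exist on disk."""
    # Match paths like data/raw/x.csv; require a word character at the end
    # so trailing sentence punctuation is not captured
    referenced = set(re.findall(r"\bdata/[\w./-]*\w", plan_text))
    return sorted(p for p in referenced if not (Path(root) / p).exists())
```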
Create Task List
Task Execution Loop
For each task in priority order:
while (tasks remain):
- Mark task as in_progress in TodoWrite
- Read any referenced files from the plan
- Look for similar patterns in codebase
- Implement following existing conventions
- Write tests for new functionality
- Run Estimation Quality Check (see below)
- Run tests after changes
- Mark task as completed in TodoWrite
- Mark off the corresponding checkbox in the plan file ([ ] → [x])
- Evaluate for incremental commit (see below)
Estimation Quality Check — Before marking an estimation task done:
| Check | What to verify |
|---|---|
| Convergence | Did the optimizer converge? Check exit flag, gradient norm, iteration count. Multiple starting values yield consistent results? |
| Sensible estimates | Are parameter signs correct? Magnitudes economically reasonable? No values at boundary constraints? |
| Standard errors | Computed with appropriate method (robust, clustered, bootstrap)? Positive definite Hessian? No suspiciously small or large SEs? |
| Diagnostics | First-stage F > 10 (if IV)? Overidentification test (if overidentified)? Hausman or specification tests where relevant? |
| Numerical stability | Log-likelihood (not likelihood) used? Condition number of key matrices acceptable? No NaN/Inf in outputs? |
| Reproducibility | Random seeds set? Results identical across runs? Dependencies pinned? |
When to skip: Pure data cleaning, documentation updates, or pipeline configuration changes that don't involve estimation. If the task is purely additive (new utility function, data loading), the check takes 10 seconds and the answer is "no estimation, skip."
When this matters most: Any change that touches estimation routines, moment conditions, likelihood functions, or simulation code.
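Several rows of this checklist can be automated as a quick post-estimation gate. A minimal sketch using scipy.optimize, where the `check_estimation` helper and its pass/fail criteria are illustrative assumptions, not an established API:

```python
import numpy as np
from scipy.optimize import minimize

def check_estimation(neg_loglik, theta0):
    """Minimize a negative log-likelihood and run basic sanity checks."""
    res = minimize(neg_loglik, theta0, method="BFGS")
    hess_inv = np.asarray(res.hess_inv)
    # Asymptotic SEs from the inverse Hessian; only meaningful if it is PD
    ses = np.sqrt(np.diag(hess_inv))
    checks = {
        "converged": bool(res.success),
        "finite_estimates": bool(np.all(np.isfinite(res.x))),
        "hessian_pd": bool(np.all(np.linalg.eigvalsh(hess_inv) > 0)),
        "ses_finite": bool(np.all(np.isfinite(ses))),
    }
    return res, checks

# Illustrative quadratic "negative log-likelihood" with minimum at theta = 2
res, checks = check_estimation(lambda t: float(np.sum((t - 2.0) ** 2)), np.zeros(3))
```

A real gate would also compare estimates across multiple starting values and flag parameters sitting at boundary constraints.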
IMPORTANT: Always update the original plan document by checking off completed items. Use the Edit tool to change `- [ ]` to `- [x]` for each task you finish.
Incremental Commits
After completing each task, evaluate whether to create an incremental commit:
| Commit when... | Don't commit when... |
|---|---|
| Estimation step complete with verified convergence | Partial estimation code that won't run |
| Data pipeline stage verified | Incomplete data transformation |
| Tests pass + meaningful progress | Tests failing |
| About to switch contexts (data work → estimation) | Purely scaffolding with no behavior |
| Robustness check complete | Would need a "WIP" commit message |
Heuristic: "Can I write a commit message that describes a complete, verifiable change? If yes, commit."
Commit workflow:
```bash
# 1. Verify tests pass (use project's test command)
#    Examples: pytest, Rscript tests/run_tests.R, etc.

# 2. Stage only files related to this logical unit
git add <files related to this logical unit>

# 3. Commit with conventional message
git commit -m "feat(estimation): description of this unit"
```
Note: Incremental commits use clean conventional messages. The final Phase 4 commit/PR includes full attribution.
Follow Existing Patterns
Test Continuously
Track Progress
Run Core Quality Checks
Always run before submitting:
```bash
# Run full test suite
#   Python: pytest
#   R: Rscript tests/run_tests.R or testthat::test_dir("tests")

# Run linting (per CLAUDE.md)
#   Python: ruff check . or flake8
#   R: lintr::lint_dir()
```
Estimation-Specific Validation
For any work involving estimation, apply the Estimation Quality Check table above to the final pipeline output, not only to intermediate runs.
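One concrete validation is a seed-reproducibility check: run the estimation twice under the same seed and require bit-identical results. A sketch, where `reproducible` and `toy_estimator` are stand-ins for your actual estimation entry point:

```python
import numpy as np

def reproducible(estimate_fn, seed=12345):
    """Run estimate_fn twice with identically seeded RNGs; True if bit-identical."""
    first = estimate_fn(np.random.default_rng(seed))
    second = estimate_fn(np.random.default_rng(seed))
    return np.array_equal(first, second)

# Stand-in for a real simulation-based estimator
def toy_estimator(rng):
    draws = rng.normal(size=1000)
    return np.array([draws.mean(), draws.std()])

print(reproducible(toy_estimator))  # True: same seed, same draws, same estimates
```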
Consider Reviewer Agents (Optional)
Use for complex or risky changes. Read agents from compound-science.local.md frontmatter (review_agents). If no settings file, create one following the template in workflows-review/references/project-config.md.
Run configured agents in parallel with Task tool. Address critical issues before proceeding.
Default agents for estimation work:
- econometric-reviewer — identification and inference review
- numerical-auditor — numerical stability and convergence
- identification-critic — identification argument completeness

Final Validation
Create Commit
```bash
git add <relevant files>
git status          # Review what's being committed
git diff --staged   # Check the changes
git commit -m "$(cat <<'EOF'
feat(estimation): description of what and why

Brief explanation if needed.

Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
Create Pull Request
```bash
git push -u origin <branch-name>
gh pr create --title "feat(estimation): [Description]" --body "$(cat <<'EOF'
## Summary
- What was implemented
- Methodological approach and key decisions
- Estimation results summary (if applicable)

## Estimation Quality
- Convergence: [status]
- Diagnostics: [first-stage F, overid test, specification tests]
- Robustness: [alternative specifications checked]

## Testing
- Tests added/modified
- Estimation verified with [approach]

## Reproducibility
- Random seeds: [set/documented]
- Pipeline: [runs end-to-end / specific steps]
- Dependencies: [pinned in requirements.txt/renv.lock]

## Research Impact
- Identification: [any changes to assumptions]
- Estimation: [computational cost, convergence]
- Robustness: [new checks added/updated]
- Replication: [package changes]
EOF
)"
```
Update Plan Status
If the input document has YAML frontmatter with a `status` field, update it:

`status: active` → `status: completed`
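The frontmatter update can be sketched as an in-place substitution. `mark_plan_completed` is a hypothetical helper; it assumes the `status` field sits inside a leading `---` block:

```python
import re
from pathlib import Path

def mark_plan_completed(plan_path: str) -> bool:
    """Flip `status: active` to `status: completed` in YAML frontmatter."""
    path = Path(plan_path)
    text = path.read_text()
    # Capture the frontmatter body between the opening and closing --- fences
    m = re.match(r"\A---\n(.*?)\n---\n", text, flags=re.DOTALL)
    if not m:
        return False  # no frontmatter block
    new_front, n = re.subn(r"^status:\s*active\s*$", "status: completed",
                           m.group(1), flags=re.MULTILINE)
    if n == 0:
        return False  # no active status to flip
    path.write_text(text[:m.start(1)] + new_front + text[m.end(1):])
    return True
```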
Summary
Pipeline mode (when invoked from /lfg or /slfg): run /workflows:review on the files that were changed.

Standalone mode (when invoked directly by the user): run /workflows:review in this session.

For complex plans with multiple independent workstreams, enable swarm mode for parallel execution.
| Use Swarm Mode when... | Use Standard Mode when... |
|---|---|
| Plan has independent estimation specifications | Single estimation pipeline |
| Multiple robustness checks can run in parallel | Sequential estimation steps |
| Monte Carlo with independent DGP variants | Simple parameter change |
| Large replication package with separable components | Small feature or bug fix |
To trigger swarm execution, say:
"Make a Task list and launch an army of agent swarm subagents to build the plan"
See references/orchestration-patterns.md in the slfg skill for detailed swarm patterns and best practices.
Before creating PR, verify:
- /workflows:review — review the implementation
- /workflows:compound — document solutions discovered during implementation