Clone and configure the research codebase. Verify baseline reproduction.
Clones and configures research codebases, sets up environments, and verifies baseline reproduction.
/plugin marketplace add hdubey-debug/orion
/plugin install hdubey-debug-orion@hdubey-debug/orion
/orion-setup <github-url|local-path> [--branch <branch>] [--verify-baseline] [--local]
/orion-setup "https://github.com/author/paper-code"
/orion-setup "https://github.com/author/paper-code" --verify-baseline
/orion-setup ./existing-codebase --local
IMPORTANT: This command MUST use Plan Mode. Create a plan first, get user approval, then execute.
When user invokes /orion-setup, follow this process:
Use EnterPlanMode. Then, if a GitHub URL is provided:
# Check repo structure via GitHub API or clone
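One lightweight option is to list the repository's top-level files through the GitHub API before cloning (a sketch; assumes the GitHub CLI is available, and OWNER/REPO is a placeholder):

```bash
# List top-level files of the repo to spot environment files before cloning
gh api repos/OWNER/REPO/contents --jq '.[].name'
```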
If local path:
ls -la <path>
cat <path>/README.md
cat <path>/requirements.txt # or environment.yml
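To surface the usual environment files in one pass, a loop like the following can help (a sketch; `<path>` is the local checkout as above):

```bash
# Check which dependency/environment files exist at the repo root
for f in environment.yml requirements.txt pyproject.toml Dockerfile setup.py; do
  [ -f "<path>/$f" ] && echo "found: $f"
done
```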
Look for environment indicators:
- environment.yml → Conda
- requirements.txt → pip
- pyproject.toml → pip/poetry
- Dockerfile → Docker
- setup.py → pip install

## Codebase Setup Plan
### Repository
- URL: [GitHub URL]
- Branch: [branch]
- Will clone to: research/codebase/
### Environment Detection
- Type: [Conda/pip/Docker]
- File: [environment.yml / requirements.txt]
### Setup Steps
1. Clone repository
2. Create [Conda env / venv]
3. Install dependencies
4. Check for data/weights downloads
5. Verify baseline (if --verify-baseline)
### Potential Issues
- [Any detected issues from README]
- [GPU requirements]
- [Data download requirements]
### User Decisions
- Environment name: [suggestion]
- GPU required: [Yes/No]
Use ExitPlanMode to get approval.
git clone <url> research/codebase
cd research/codebase
git checkout <branch>
Record commit hash for reproducibility.
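For example (the output path is illustrative, not prescribed by the workflow):

```bash
# Save the exact commit hash alongside the checkout for the reproducibility record
git rev-parse HEAD | tee ../codebase_commit.txt
```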
For Conda:
conda env create -f environment.yml -n orion-research
# Or
conda create -n orion-research python=3.10
conda activate orion-research
pip install -r requirements.txt
For pip/venv:
python -m venv research/venv
source research/venv/bin/activate
pip install -r requirements.txt
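Whichever environment type was used, a quick sanity check catches broken installs early (a minimal sketch):

```bash
# Verify the interpreter version and that installed packages have no unmet dependencies
python -V
pip check
```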
Look for download requirements:
grep -r "download" README.md
ls scripts/download*.sh
If found, inform user:
Data/weights download may be required:
- [Script or instructions found]
Download now? [Y/N]
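If the user confirms, run the discovered script and keep a log of what it fetched (the script name below is illustrative):

```bash
# Run the project's own download script; keep the output for debugging failed fetches
bash scripts/download_data.sh 2>&1 | tee download.log
```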
Find and run evaluation:
# Find eval script
ls *.py | grep -E "(eval|test|infer)"
# Run baseline
python eval.py --config baseline.yaml --subset
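Logging the run and its wall-clock time makes the recorded score traceable to raw output (a sketch; script, config, and log names follow the placeholders above):

```bash
# Capture stdout/stderr and runtime of the baseline evaluation
{ time python eval.py --config baseline.yaml --subset ; } 2>&1 | tee baseline_eval.log
```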
Record baseline result:
{
"baseline": {
"method": "[method name]",
"score": [score],
"metric": "[metric]",
"commit": "[hash]",
"date": "[timestamp]"
}
}
git checkout -b orion/main # Our stable branch
Create research/skills/codebase.md:
# Codebase: [Repo Name]
## Repository
- URL: [URL]
- Commit: [hash]
- Branch: orion/main
## Structure
codebase/
├── [key directories]
└── [key files]
## Key Files
| File | Purpose |
|------|---------|
| [file] | [purpose] |
## How to Run
### Inference
```bash
python eval.py --config config.yaml
```

### Training

```bash
python train.py --config config.yaml
```

Key config options:
- [option]: [purpose]
#### Step 11: Update orion.json
```json
{
"codebase": {
"url": "[url]",
"path": "research/codebase",
"commit": "[hash]",
"branch": "orion/main"
},
"baseline": {
"score": [score],
"metric": "[metric]"
}
}
```
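The commit field can also be filled in programmatically rather than by hand (a sketch; assumes jq is installed and orion.json lives at the project root):

```bash
# Patch the recorded commit hash into orion.json without manual editing
COMMIT=$(git -C research/codebase rev-parse HEAD)
jq --arg c "$COMMIT" '.codebase.commit = $c' orion.json > orion.json.tmp && mv orion.json.tmp orion.json
```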
Codebase Setup Complete!
Repository: [URL]
Location: research/codebase/
Commit: [hash]
Environment: [Conda/venv] - [name]
Baseline: [score] ([metric])
Branch structure:
- orion/main (stable baseline)
Next steps:
1. Generate hypotheses: /hypothesis-generation
2. Or view knowledge: /knowledge
Run /orion-setup after:
- /literature-review (understand what code to get)
- /benchmark-setup (know what to evaluate on)

Run before:
- /hypothesis-generation (need codebase to implement)