From iowarp-dev-setup
Sets up IOWarp Core dev environment via Docker devcontainers (recommended), native Linux builds, VS Code config, CMake compilation, ctest runs, and runtime verification.
npx claudepluginhub iowarp/clio-core --plugin iowarp-dev-setup

This skill uses the workspace's default tool permissions.
You are helping a developer get IOWarp Core running locally. Follow the instructions below precisely. Adapt to the developer's OS and tooling, but always prefer the Docker devcontainer path unless they explicitly ask otherwise.
Ask the developer which setup path they need: Docker devcontainer (recommended), the conda-based installer, or a native Linux build.

For the devcontainer path, the developer needs:
- Docker (with the daemon running)
- VS Code or Cursor with the Dev Containers extension (ms-vscode-remote.remote-containers)

For GPU development, they additionally need:
- NVIDIA drivers and the NVIDIA Container Toolkit (nvidia-ctk)

Clone the repository with submodules:

git clone --recurse-submodules https://github.com/iowarp/clio-core.git
cd clio-core
If already cloned without submodules:
git submodule update --init --recursive
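Before building, it can help to confirm every submodule actually initialized. A minimal sketch, assuming the standard `git submodule status` output format (uninitialized entries are prefixed with `-`); the helper name is hypothetical, not part of the repo:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report whether any submodule is uninitialized.
# `git submodule status` prefixes uninitialized entries with "-".
check_submodules() {
  local status="$1"            # output of `git submodule status`
  local missing
  missing=$(printf '%s\n' "$status" | grep -c '^-' || true)
  if [ "$missing" -gt 0 ]; then
    echo "uninitialized"
  else
    echo "ok"
  fi
}

# Typical use inside the clone:
#   check_submodules "$(git submodule status)"
```

If this prints `uninitialized`, rerun `git submodule update --init --recursive`.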
VS Code / Cursor: open the clio-core folder and choose "Reopen in Container" when prompted.

Manual container start (headless / CLI-only):
cd .devcontainer/cpu
docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) \
-t iowarp/clio-core-devcontainer:latest -f Dockerfile ../..
docker run -it --privileged \
-v $(pwd)/../..:/workspace \
-v /var/run/docker.sock:/var/run/docker.sock \
iowarp/clio-core-devcontainer:latest
Inside the container:
cmake --preset=debug
cmake --build build -j$(nproc)
For GPU builds (nvidia-gpu container only):
cmake --preset=cuda-debug
cmake --build build -j$(nproc)
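If you script the build for both container types, you can select the preset by probing for a GPU. A sketch, assuming only the two presets shown above (`debug`, `cuda-debug`) and that `nvidia-smi` is present exactly when CUDA builds make sense:

```shell
# Pick the CMake preset based on GPU availability (assumption: the
# nvidia-gpu container exposes nvidia-smi, the cpu container does not).
pick_preset() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo "cuda-debug"
  else
    echo "debug"
  fi
}

# cmake --preset="$(pick_preset)" && cmake --build build -j"$(nproc)"
```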
cd build && ctest -VV
Component-specific tests:
ctest -R context_transport # Transport primitives
ctest -R chimaera # Runtime
ctest -R cte # Context Transfer Engine
ctest -R omni # Context Assimilation Engine
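The component filters above can be run in sequence from the build tree. A sketch of one way to do that, collecting failures instead of stopping at the first one (the helper name is made up, not an IOWarp tool):

```shell
# Run each ctest name filter in turn and report which suites failed.
run_suites() {
  local failed=""
  for suite in "$@"; do
    ctest -R "$suite" --output-on-failure || failed="$failed $suite"
  done
  echo "failed:${failed:-none}"
}

# From the build directory:
#   run_suites context_transport chimaera cte omni
```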
# Start runtime with default config
export CHI_SERVER_CONF=/workspace/docker/wrp_cte_bench/cte_config.yaml
chimaera runtime start &
# After a moment, run a quick benchmark
wrp_run_thrpt_benchmark --test-case latency --threads 4 --duration 5
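The start-then-benchmark steps above can be wrapped into one smoke test. A sketch using only the commands shown in this guide; the function name and the warm-up delay are assumptions you should tune:

```shell
# Hypothetical wrapper: start the runtime, wait briefly, benchmark,
# then stop the runtime by killing the background process.
smoke_test() {
  local warmup="${1:-2}"       # seconds to wait for the runtime (a guess)
  chimaera runtime start &
  local pid=$!
  sleep "$warmup"
  wrp_run_thrpt_benchmark --test-case latency --threads 4 --duration 5
  local rc=$?
  kill "$pid" 2>/dev/null || true
  return "$rc"
}

# CHI_SERVER_CONF must already be exported, as above.
```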
The devcontainer uses a layered Docker image:
iowarp/iowarp-base:latest ← Ubuntu 24.04 + base tools + iowarp user
└─ iowarp/deps-cpu:latest ← All build deps (apt + source-built)
└─ devcontainer ← Claude Code, UID remapping, SSH/Claude config forwarding
What's pre-installed in the container:
Key paths inside the container:
| Path | Purpose |
|---|---|
| /workspace | Bind-mounted source tree |
| /workspace/build | Build output directory (NEVER build elsewhere) |
| /home/iowarp/venv | Python virtual environment (auto-activated) |
| /usr/local | Source-built deps (yaml-cpp, zmq, cereal, etc.) |
| /usr | Apt-installed deps (boost, hdf5, compression libs) |
The devcontainer automatically forwards from the host:
- ~/.ssh → SSH keys for git operations
- ~/.claude → Claude Code configuration and auth
- ~/.claude.json → Claude Code credentials
- /var/run/docker.sock → Docker-in-Docker access

| Container | Location | Use Case | Build Time | Size |
|---|---|---|---|---|
| CPU-only | .devcontainer/cpu/ | General development | ~5-10 min | ~3 GB |
| NVIDIA GPU | .devcontainer/nvidia-gpu/ | CUDA kernel dev | ~15-20 min | ~8 GB |
git clone --recurse-submodules https://github.com/iowarp/clio-core.git
cd clio-core
bash install.sh release
This installs Miniconda, rattler-build, and IOWarp in one script. Variants are stored in installers/conda/variants/ — create a new one for custom machine configs.
Install dependencies (Ubuntu 24.04):
sudo apt-get install -y cmake ninja-build g++ pkg-config \
libboost-all-dev libhdf5-dev libyaml-cpp-dev libzmq3-dev \
libcereal-dev catch2 libaio-dev liburing-dev \
zlib1g-dev libbz2-dev liblzo2-dev libzstd-dev liblz4-dev \
liblzma-dev libbrotli-dev libsnappy-dev libblosc2-dev
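Before configuring, it can save a failed CMake run to check which of these packages are actually present. A sketch for Debian/Ubuntu only, relying on `dpkg -s` exiting non-zero for missing packages (the helper name is hypothetical):

```shell
# List which of the given apt packages are not installed.
missing_pkgs() {
  local missing=""
  for p in "$@"; do
    dpkg -s "$p" >/dev/null 2>&1 || missing="$missing $p"
  done
  echo "${missing# }"
}

# missing_pkgs cmake ninja-build libboost-all-dev libyaml-cpp-dev
```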
Build:
cmake --preset=debug
cmake --build build -j$(nproc)
Install (requires sudo):
sudo cmake --install build
"CMake can't find yaml-cpp/zeromq/cereal":
- Run sudo ldconfig and retry.
- Check that /usr/local/lib is in LD_LIBRARY_PATH.

"Build artifacts in source tree":
# CRITICAL: Never build in the source tree. Clean it:
find . -name "CMakeCache.txt" -delete
find . -name "CMakeFiles" -type d -exec rm -rf {} + 2>/dev/null || true
find . -name "Makefile" -delete
# Then rebuild properly:
cmake --preset=debug
cmake --build build -j$(nproc)
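The `find … -delete` commands above are destructive, so it can be worth previewing what they would remove first. A read-only sketch using the same file patterns, with the real `build/` output directory excluded (the helper name is made up):

```shell
# Preview stray in-tree build artifacts without deleting anything.
list_stray_artifacts() {
  local root="${1:-.}"
  find "$root" -path "$root/build" -prune -o \
       \( -name CMakeCache.txt -o -name CMakeFiles -o -name Makefile \) -print
}

# list_stray_artifacts .      # then run the cleanup above if anything prints
```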
Compilation errors after code changes:
# Always reinstall before testing (RPATHs, not LD_LIBRARY_PATH):
cd build && sudo cmake --install . && ctest -VV
Container fails to start:
- Check the Docker daemon is running: docker info
- Check the images exist: docker images | grep iowarp

Permission denied on files:
- Rebuild with your host UID/GID: docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) ...

Docker-in-Docker not working:
sudo chmod 666 /var/run/docker.sock

SSH keys not available:
- Ensure ~/.ssh exists on the host before opening the container
- initializeCommand creates the directory if missing, but the keys themselves must exist

"nvidia-smi: command not found" inside container:
- Run .devcontainer/install-nvidia-container-toolkit.sh on the host
- sudo systemctl restart docker
- Verify: docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi

CUDA not detected by CMake:
echo $CUDA_HOME # Should be /usr/local/cuda-12.6
nvcc --version # Should show CUDA 12.6
cmake --preset=cuda-debug
Shared memory errors / stale IPC segments:
# Chimaera auto-cleans on init, but for manual cleanup:
rm -rf /tmp/chimaera_$(whoami)/*
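To avoid wiping segments out from under a live runtime, the manual cleanup can be gated on whether the runtime process is running. A sketch; the `pgrep -x chimaera` process name is an assumption, and the helper is not part of IOWarp:

```shell
# Clear stale shared-memory segments only when no runtime is alive.
stale_cleanup() {
  local dir="/tmp/chimaera_$(whoami)"
  if pgrep -x chimaera >/dev/null 2>&1; then
    echo "runtime running; skipping"
  else
    rm -rf "${dir:?}"/*
    echo "cleaned $dir"
  fi
}
```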
Runtime won't start:
- Check the config path is set: echo $CHI_SERVER_CONF
- Check port 9413 isn't already in use: lsof -i :9413

| Preset | Use |
|---|---|
| debug | Standard CPU debug (all features) |
| cuda-debug | CUDA support (arch 86) |
| rocm-debug | AMD ROCm GPUs |
| debug-adios | ADIOS2 integration |
Key CMake flags:
-DWRP_CORE_ENABLE_RUNTIME=ON # Chimaera runtime
-DWRP_CORE_ENABLE_CTE=ON # Context Transfer Engine
-DWRP_CORE_ENABLE_CAE=ON # Context Assimilation Engine
-DWRP_CORE_ENABLE_CEE=ON # Context Exploration Engine
-DWRP_CORE_ENABLE_TESTS=ON # Enable test targets
-DWRP_CORE_ENABLE_COMPRESS=ON # Compression support
-DWRP_CORE_ENABLE_PYTHON=ON # Python bindings
-DWRP_CORE_ENABLE_ASAN=ON # AddressSanitizer
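Flags can be combined with a preset on the command line. One possible configure invocation, assuming the presets above accept extra `-D` overrides in the usual CMake way (pick only the flags you need; this exact combination is illustrative, not prescribed):

```shell
cmake --preset=debug \
  -DWRP_CORE_ENABLE_TESTS=ON \
  -DWRP_CORE_ENABLE_COMPRESS=ON \
  -DWRP_CORE_ENABLE_ASAN=ON
cmake --build build -j"$(nproc)"
```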
# Clone
git clone --recurse-submodules https://github.com/iowarp/clio-core.git
# Build (inside container)
cmake --preset=debug && cmake --build build -j$(nproc)
# Test
cd build && ctest -VV
# Start runtime
export CHI_SERVER_CONF=/workspace/docker/wrp_cte_bench/cte_config.yaml
chimaera runtime start
# IPC transport modes
export CHI_IPC_MODE=SHM # Shared memory (lowest latency, same machine)
export CHI_IPC_MODE=TCP # TCP via ZeroMQ (default, cross-machine)
export CHI_IPC_MODE=IPC # Unix domain socket (same machine, no TCP overhead)
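The mode choice above can be scripted for the SHM-vs-TCP case: shared memory when client and runtime share a machine, TCP otherwise (the IPC/Unix-socket mode is left out of this sketch, and the helper is hypothetical):

```shell
# Pick an IPC transport: SHM for the local host, TCP for a remote one.
pick_ipc_mode() {
  local target_host="${1:-}"
  if [ -z "$target_host" ] || [ "$target_host" = "$(hostname)" ]; then
    echo "SHM"
  else
    echo "TCP"
  fi
}

# export CHI_IPC_MODE="$(pick_ipc_mode some-remote-node)"
```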