End-to-end orchestration guide for the ataraxis-video-system recording and analysis pipeline. Covers canonical phase ordering with handoff conditions, multi-camera planning with system ID allocation and DataLogger topology, and decision trees for interface, encoding, and processing configuration. Use when planning a full recording workflow, setting up multi-camera rigs, or deciding between MCP and code.
`npx claudepluginhub sun-lab-nbb/ataraxis --plugin video`

This skill uses the workspace's default tool permissions.
End-to-end orchestration reference for camera recording and data analysis. Covers single-camera and multi-camera setups, phase ordering, handoff conditions, and decision guidance.
Covers: canonical phase ordering with handoff conditions, multi-camera planning (system ID allocation and DataLogger topology), and decision guidance for interface, encoding, and processing configuration.
Does not cover: detailed per-phase tool usage, parameter reference, or troubleshooting (see the phase-specific skills, starting with /mcp-environment-setup).
Handoff rules: This skill dispatches to phase-specific skills at each stage. Always invoke the relevant skill for detailed tool usage, parameter reference, and troubleshooting.
Environment Setup → Camera Discovery → Recording → Post-Recording → Log Processing → Results Analysis

| Phase | Skill | Key Actions → Handoff Condition |
|---|---|---|
| 1. Environment Setup | /mcp-environment-setup | Check axvs command availability and verify the Python version → check_runtime_requirements returns OK for all needed components |
| 2. Camera Discovery | /camera-setup | check_runtime_requirements, list_cameras, configure the CTI file if using Harvesters, inspect and configure GenICam nodes |
| 3. Recording | /camera-setup (MCP) or /camera-interface (code) | MCP: start_video_session → start_frame_saving → stop_frame_saving → stop_video_session. Code: start_frame_saving → stop_frame_saving → stop() → logger stop() → assemble_log_archives |
| 4. Post-Recording | /post-recording | Verify video outputs and log archives |
| 5. Log Processing | /log-processing | Extract timestamps |
| 6. Results Analysis | /log-processing-results | Frame statistics and quality analysis |

Does the camera support GenTL (GenICam Transport Layer)?
YES → Harvesters (preferred interface; provides GenICam node control)
NO → Is the camera a USB webcam or consumer device?
YES → OpenCV
NO → Is this a test or development scenario without hardware?
YES → Mock
NO → Check camera vendor documentation for GenTL support
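As a sketch, the decision tree above can be encoded in a small helper. The function name, the flags, and the local `Interface` enum are illustrative, not part of the ataraxis-video-system API (the real library exposes matching members on `CameraInterfaces`):

```python
from enum import Enum, auto


class Interface(Enum):
    HARVESTERS = auto()  # preferred: GenTL support plus GenICam node control
    OPENCV = auto()      # USB webcams and consumer devices
    MOCK = auto()        # test/development scenarios without hardware


def choose_interface(supports_gentl: bool, consumer_usb: bool, has_hardware: bool) -> Interface:
    """Mirrors the decision tree above, evaluated top to bottom."""
    if supports_gentl:
        return Interface.HARVESTERS
    if consumer_usb:
        return Interface.OPENCV
    if not has_hardware:
        return Interface.MOCK
    raise ValueError("No match: check camera vendor documentation for GenTL support")
```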
| Scenario | Recommendation |
|---|---|
| Single camera, interactive testing or exploration | MCP via /camera-setup |
| Single camera, production with custom encoding | Code via /camera-interface |
| Multi-camera simultaneous recording | Code via /camera-interface |
| Log processing (any scenario) | MCP via /log-processing |
| Results analysis (any scenario) | MCP via /log-processing-results |
MCP supports only one active video session at a time. Multi-camera recording requires Python code.
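The routing logic in the table reduces to one check. This is a hypothetical helper (the skill names come straight from the table; the function itself is not part of any toolset):

```python
def pick_recording_skill(multi_camera: bool, custom_encoding: bool) -> str:
    """Routes recording to code or MCP: MCP allows only one active video session."""
    if multi_camera or custom_encoding:
        return "/camera-interface"  # code path
    return "/camera-setup"          # MCP path
```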
| Use Case | Encoder | Preset | Pixel Format | QP | GPU |
|---|---|---|---|---|---|
| Interactive testing | H264 | FAST (3) | YUV420 | 15 | -1 |
| Scientific imaging (high-speed) | H265 | SLOWEST (7) | YUV444 | 0-5 | 0 |
| Behavioral video (color) | H265 | SLOW (5) | YUV420 | 15-20 | 0 |
| Archival (storage-sensitive) | H265 | SLOWER (6) | YUV420 | 20-25 | 0 |
| Multi-camera rig (bandwidth) | H265 | FAST (3) | YUV420 | 15 | 0 |
These are healthy starting points. Actual parameters must be fine-tuned by the end user for their specific camera, scene content, and throughput requirements.
See /camera-interface for detailed encoding guidance and FFMPEG error interpretation. See /camera-setup
for MCP encoding parameter reference.
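For scripting, the starting points above can live in a lookup table. This is a hypothetical structure (the keys and field names are illustrative, not the library's API); QP entries are stored as (low, high) ranges taken from the table:

```python
# Hypothetical starting-point table mirroring the encoding guidance above.
ENCODING_PRESETS = {
    "interactive_testing":   {"encoder": "H264", "preset": 3, "pixel_format": "YUV420", "qp": (15, 15), "gpu": -1},
    "scientific_high_speed": {"encoder": "H265", "preset": 7, "pixel_format": "YUV444", "qp": (0, 5),   "gpu": 0},
    "behavioral_color":      {"encoder": "H265", "preset": 5, "pixel_format": "YUV420", "qp": (15, 20), "gpu": 0},
    "archival":              {"encoder": "H265", "preset": 6, "pixel_format": "YUV420", "qp": (20, 25), "gpu": 0},
    "multi_camera_rig":      {"encoder": "H265", "preset": 3, "pixel_format": "YUV420", "qp": (15, 15), "gpu": 0},
}
```

As with the table itself, these values are starting points; tune them per camera, scene, and throughput budget.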
| Range | Assignment | Notes |
|---|---|---|
| 51-100 | Camera VideoSystem instances | One unique ID per camera; advised range for all camera code |
| 111 | CLI (axvs run) | Fixed; interactive testing only |
| 112 | MCP server sessions | Fixed; agent-driven testing only |
All other IDs are used by other production assets in the broader system. Camera code should stay within the 51-100 band. Allocate camera IDs sequentially starting at 51 (e.g., 51, 52, 53 for a 3-camera rig).
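Sequential allocation inside the camera band can be sketched as follows (hypothetical helper, not part of the library):

```python
def allocate_camera_ids(camera_count: int, start: int = 51, end: int = 100) -> list[int]:
    """Assigns sequential system IDs inside the 51-100 camera band."""
    if not 1 <= camera_count <= end - start + 1:
        raise ValueError(f"Camera count must fit inside the {start}-{end} band")
    return list(range(start, start + camera_count))
```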
A single shared DataLogger is the preferred topology for all use cases:
DataLogger(instance_name="session")
├── VideoSystem(system_id=51, name="face_camera") → 051_log.npz + camera_manifest.yaml
├── VideoSystem(system_id=52, name="body_camera") → 052_log.npz
└── VideoSystem(system_id=53, name="arena_camera") → 053_log.npz
All cameras share one log directory, all timestamps are correlated, one assemble_log_archives call
consolidates everything, and one processing batch covers all source IDs. Each VideoSystem writes an
entry to camera_manifest.yaml during initialization, enabling manifest-based discovery downstream.
Multiple DataLoggers should only be used if a single logger cannot handle the load, leading to excessive buffering. This is extremely rare in practice. When it does occur, each DataLogger creates a separate output directory that must be assembled and processed independently, and cross-camera timestamp comparison requires merging data from separate directories.
The ordering of initialization and shutdown is critical for multi-camera setups:
Startup (in order):
1. DataLogger(s) → __init__() → start()
2. VideoSystem(s) → __init__() → start()
3. All VideoSystems → start_frame_saving()
Shutdown (reverse order):
4. All VideoSystems → stop_frame_saving()
5. VideoSystem(s) → stop()
6. DataLogger(s) → stop()
7. assemble_log_archives() for each DataLogger output directory
Call assemble_log_archives only after DataLogger.stop() has completed:

```python
from pathlib import Path

import numpy as np
from ataraxis_data_structures import DataLogger, assemble_log_archives
from ataraxis_video_system import CameraInterfaces, VideoSystem

session_directory = Path("/path/to/session")

# Starts the shared DataLogger first.
logger = DataLogger(output_directory=session_directory, instance_name="session")
logger.start()

# Initializes and starts each camera with a unique system ID and descriptive name.
cameras: list[VideoSystem] = []
camera_configs = [(51, 0, "face_camera"), (52, 1, "body_camera"), (53, 2, "arena_camera")]
for camera_id, camera_index, camera_name in camera_configs:
    camera = VideoSystem(
        system_id=np.uint8(camera_id),
        data_logger=logger,
        name=camera_name,
        output_directory=session_directory,
        camera_interface=CameraInterfaces.HARVESTERS,
        camera_index=camera_index,
    )
    camera.start()
    cameras.append(camera)

# Starts frame saving on all cameras.
for camera in cameras:
    camera.start_frame_saving()

# ... recording ...

# Shuts down in reverse order.
for camera in cameras:
    camera.stop_frame_saving()
for camera in cameras:
    camera.stop()
logger.stop()

# Assembles archives after the DataLogger has fully stopped.
assemble_log_archives(log_directory=logger.output_directory, remove_sources=True)
```
All cameras sharing a DataLogger write to the same log directory and the same camera_manifest.yaml.
This simplifies batch processing:
- discover_camera_data_tool finds the manifest and identifies all confirmed sources (e.g., 51, 52, 53) with their camera names, log archives, video files, and feather outputs in one flat sources list.
- prepare_log_processing_batch_tool creates one job per source ID (pass the confirmed source_ids from discovery).
- Processed timestamp files share one camera_timestamps/ subdirectory (camera_timestamps/camera_51_timestamps.feather, camera_timestamps/camera_52_timestamps.feather, etc.).

For multi-DataLogger setups, process each DataLogger output directory as a separate batch.
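Assuming the camera_timestamps/ layout described above, per-source feather paths can be derived with a small helper. This is hypothetical (discover_camera_data_tool already returns these paths), shown only for manual scripting:

```python
from pathlib import Path


def timestamp_feather_paths(log_directory: Path, source_ids: list[int]) -> list[Path]:
    # Follows the camera_timestamps/camera_<id>_timestamps.feather naming shown above.
    return [
        log_directory / "camera_timestamps" / f"camera_{source_id}_timestamps.feather"
        for source_id in source_ids
    ]
```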
After processing, use analyze_camera_frame_statistics_tool with all camera feather files (pass the timestamps_file paths from discover_camera_data_tool as the feather_files list) and compare:

- drop_rate_percent across cameras to identify bandwidth bottlenecks. If all cameras drop simultaneously, the issue is system-wide (disk I/O, CPU, or GPU saturation).
- Dropped frame_index ranges across cameras. Correlated drops indicate system-level events; uncorrelated drops indicate per-camera issues.
- first_timestamp_us across cameras. The delta between the earliest and latest first timestamps measures acquisition start synchronization quality.

Quick interactive test (single camera, MCP):
1. /mcp-environment-setup — verify MCP connectivity (if first session)
2. /camera-setup — list_cameras → start_video_session → test → stop_video_session
3. /post-recording — verify video and archives

Production recording (single camera, code):
1. /camera-setup — configure GenICam nodes, test with MCP session
2. /camera-interface — write VideoSystem code with production encoding parameters
3. /post-recording — verify video and archives
4. /log-processing — extract timestamps
5. /log-processing-results — analyze frame quality

Multi-camera recording:
1. /camera-setup — discover all cameras, configure GenICam nodes individually
2. /pipeline — plan system IDs and DataLogger topology
3. /camera-interface — write multi-camera code following the coordinated lifecycle pattern
4. /post-recording — verify all videos and archives
5. /log-processing — batch process all source IDs together
6. /log-processing-results — cross-camera comparison

| Skill | Role |
|---|---|
| /mcp-environment-setup | Phase 1: environment verification |
| /camera-setup | Phase 2-3: MCP-based discovery, testing, and recording |
| /camera-interface | Phase 3: code-based VideoSystem integration |
| /post-recording | Phase 4: output verification and archive assembly |
| /log-input-format | Reference: archive format for troubleshooting |
| /log-processing | Phase 5: timestamp extraction |
| /log-processing-results | Phase 6: frame statistics and quality analysis |
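The cross-camera start-synchronization check, the delta between the earliest and latest first_timestamp_us values, reduces to a one-line comparison once the per-camera first timestamps are in hand. This helper is illustrative, not part of the toolset:

```python
def start_sync_delta_us(first_timestamps_us: list[int]) -> int:
    """Spread between earliest and latest acquisition starts; smaller is better synchronized."""
    if not first_timestamps_us:
        raise ValueError("Need at least one camera timestamp")
    return max(first_timestamps_us) - min(first_timestamps_us)
```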
Pipeline Orchestration:
- [ ] Environment verified (MCP server connected, FFMPEG/GPU/CTI checked)
- [ ] Camera(s) discovered and configuration validated
- [ ] Interface decision made (MCP vs code, single vs multi-camera)
- [ ] System IDs allocated (unique per camera, 51-100 range)
- [ ] DataLogger topology decided (single vs multiple)
- [ ] Encoding parameters selected for use case
- [ ] Recording session completed (all cameras started and stopped in order)
- [ ] Post-recording verification passed (video + archives)
- [ ] Log processing completed (all source IDs processed)
- [ ] Frame statistics analyzed for all cameras
- [ ] Cross-camera comparison performed (if multi-camera)