# Log Director Skill

Orchestrates batch operations across multiple logs with parallel execution. Use this skill when users request operations like "validate all test logs" or "archive logs older than 30 days"; it processes many logs simultaneously with progress tracking and aggregated results.
## Installation

```
/plugin marketplace add fractary/claude-plugins
/plugin install fractary-logs@fractary
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Supporting scripts:

- `scripts/init-worker-pool.sh`
- `scripts/parallel-execute.sh`
- `scripts/aggregate-results.sh`

You are a coordination skill that manages parallel execution, progress tracking, and aggregated reporting across many logs simultaneously.
<CRITICAL_RULES>
Difference from log-manager-skill: the log manager operates on individual logs one at a time; this skill coordinates batch operations across many logs in parallel.
</CRITICAL_RULES>
<INPUTS>
Batch operations:
**batch-validate** (validate many logs in parallel)
- `log_type_filter` - Filter by type (or "all")
- `status_filter` - Filter by status
- `fail_fast` - Stop on first failure (default: false)
- `parallel_workers` - Concurrency limit (default: 5)

**batch-archive** (archive logs based on retention policy)
- `log_type_filter` - Which types to consider
- `retention_check` - Only archive if retention period passed
- `dry_run` - Show what would be archived without doing it
- `force` - Archive even if retention not expired

**batch-reclassify** (reclassify _untyped logs)
- `source_type` - Usually "_untyped"
- `confidence_threshold` - Minimum confidence to reclassify (default: 70)
- `auto_apply` - Apply reclassification without confirmation (default: false)

**batch-cleanup** (delete archived logs past retention)
- `log_type_filter` - Which types to clean up
- `retention_buffer_days` - Extra days before deletion (default: 7)
- `dry_run` - Show what would be deleted (default: true for safety)

Example request:
```json
{
  "operation": "batch-validate",
  "log_type_filter": "test",
  "status_filter": "completed",
  "parallel_workers": 3,
  "fail_fast": false
}
```
</INPUTS>
<WORKFLOW>
## Batch Operation: batch-validate

1. Discover target logs matching the requested filters.
2. Execute `scripts/init-worker-pool.sh` to initialize the worker pool.
3. Queue one validation job per log path.
4. Execute `scripts/parallel-execute.sh` to run the jobs concurrently.
5. Execute `scripts/aggregate-results.sh` to produce the aggregated report:
```json
{
  "operation": "batch-validate",
  "status": "completed",
  "total_logs": 100,
  "results": {
    "passed": 85,
    "failed": 10,
    "warnings": 5
  },
  "failures": [
    {
      "log_path": ".fractary/logs/test/test-042.md",
      "errors": ["Missing required field: test_framework"]
    }
  ],
  "common_issues": {
    "missing_duration": 15,
    "missing_coverage": 8
  },
  "duration_seconds": 12.5,
  "parallel_workers": 5
}
```
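The discover → parallel-execute → aggregate flow can be sketched in shell. This is a self-contained illustration, not the real scripts: the `validate_log` check, the sample log files, and the use of `xargs -P` in place of the worker-pool scripts are all stand-in assumptions.

```shell
#!/usr/bin/env bash
# Minimal sketch of the batch-validate flow: discover logs, validate each
# in parallel with a bounded worker count, then aggregate pass/fail counts.
set -euo pipefail

LOGS=$(mktemp -d)      # stand-in for a .fractary/logs/test/ directory
RESULTS=$(mktemp -d)   # one result file per job, aggregated at the end

# Sample logs: two contain the required 'status:' field, one does not.
printf 'status: completed\n' > "$LOGS/test-001.md"
printf 'status: completed\n' > "$LOGS/test-002.md"
printf 'no status field\n'   > "$LOGS/test-003.md"

validate_log() {
  local log="$1"
  if grep -q '^status:' "$log"; then
    echo pass > "$RESULTS/$(basename "$log").result"
  else
    echo fail > "$RESULTS/$(basename "$log").result"
  fi
}
export -f validate_log
export RESULTS

# xargs -P plays the role of the worker pool: at most 2 jobs run at once.
find "$LOGS" -name '*.md' -print0 |
  xargs -0 -n1 -P2 bash -c 'validate_log "$0"'

# Aggregation step: count per-status results across all jobs.
passed=$(grep -lx pass "$RESULTS"/*.result | wc -l)
failed=$(grep -lx fail "$RESULTS"/*.result | wc -l)
echo "passed=$passed failed=$failed"
```

Note how `fail_fast=false` behavior falls out naturally here: a failing check records a result instead of aborting the pipeline.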
## Batch Operation: batch-archive

For each archival candidate, apply the retention check, archive or skip it, then report the aggregate:
```json
{
  "operation": "batch-archive",
  "total_candidates": 45,
  "archived": 42,
  "skipped": 3,
  "skipped_reasons": {
    "validation_failed": 2,
    "retention_not_expired": 1
  },
  "by_type": {
    "session": 15,
    "build": 18,
    "test": 9
  }
}
```
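The retention check can be illustrated with `find -mtime`. A minimal sketch only: the 30-day window, the file names, and the reliance on file mtime (rather than metadata inside the log) are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the retention check behind batch-archive: a log is an
# archival candidate only when older than the retention window.
set -euo pipefail

RETENTION_DAYS=30
LOGS=$(mktemp -d)

touch -d '45 days ago' "$LOGS/old-build.md"   # past retention
touch "$LOGS/fresh-build.md"                  # created just now

# -mtime +N selects files modified more than N*24h ago (GNU find).
candidates=$(find "$LOGS" -name '*.md' -mtime +"$RETENTION_DAYS" -printf '%f\n' | sort)
echo "$candidates"
```

A `dry_run` pass would print this candidate list instead of archiving it.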
## Batch Operation: batch-reclassify

For each _untyped log, infer its type and a confidence score, then show the user a preview:

```
Reclassification Preview:
- log-001.md: _untyped → test (95% confident)
- log-002.md: _untyped → build (87% confident)
- log-003.md: _untyped → operational (72% confident)
- log-004.md: _untyped → [uncertain, keeping as _untyped] (45%)
```

For each approved reclassification, apply the change and report the aggregate:
```json
{
  "operation": "batch-reclassify",
  "total_untyped": 50,
  "reclassified": 35,
  "uncertain": 10,
  "failed": 5,
  "reclassifications": {
    "test": 15,
    "build": 10,
    "session": 5,
    "operational": 5
  }
}
```
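The `confidence_threshold` gate can be sketched with `awk`, using the values from the preview above. The "path confidence type" triple format is an illustrative assumption, not the skill's actual data shape.

```shell
#!/usr/bin/env bash
# Sketch of the confidence gate: only proposals at or above the threshold
# are applied; anything below it stays _untyped.
set -euo pipefail

THRESHOLD=70

applied=$(printf '%s\n' \
  'log-001.md 95 test' \
  'log-002.md 87 build' \
  'log-003.md 72 operational' \
  'log-004.md 45 _untyped' |
  awk -v t="$THRESHOLD" '$2 >= t { print $1, "->", $3 }')
echo "$applied"
```

With the default threshold of 70, the first three proposals pass and the 45%-confidence log is left untouched.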
## Batch Operation: batch-cleanup

Delete archived logs whose retention (plus buffer) has expired, then report:

```json
{
  "operation": "batch-cleanup",
  "dry_run": false,
  "total_candidates": 120,
  "deleted": 115,
  "protected": 5,
  "protected_reasons": {
    "audit_never_delete": 3,
    "production_deployment": 2
  },
  "space_freed_mb": 45
}
```
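The `dry_run` safeguard follows a common pattern: report the destructive action without performing it. A minimal sketch, where the `cleanup` helper and flag handling are illustrative rather than the skill's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the dry_run guard: in dry-run mode, say what would be deleted
# and leave the file untouched.
set -euo pipefail

DRY_RUN=true

cleanup() {
  local log="$1"
  if [ "$DRY_RUN" = true ]; then
    echo "would delete: $(basename "$log")"
  else
    rm -- "$log"
  fi
}

tmp=$(mktemp -d)
touch "$tmp/archived-001.md"
cleanup "$tmp/archived-001.md"
[ -f "$tmp/archived-001.md" ] && echo "file still present"
```

Defaulting `dry_run` to true means a destructive cleanup requires an explicit opt-in, which is why the skill treats it as a safety default.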
</WORKFLOW>
<COMPLETION_CRITERIA>
✅ All target logs discovered
✅ Worker pool initialized with concurrency limit
✅ Jobs executed in parallel
✅ Results aggregated across all logs
✅ Summary report generated with statistics
✅ Failures reported with specific errors
</COMPLETION_CRITERIA>
<OUTPUTS>
Return to caller:

```
🎯 STARTING: Log Director Skill
Operation: batch-validate
Filters: log_type=test, status=completed
Workers: 5 parallel
───────────────────────────────────────
📁 Discovering target logs...
Found: 100 logs to validate

🔄 Executing validation (parallel)
Progress: [████████████████████] 100/100 (12.5s)

📊 Results:
✓ Passed: 85 logs
✗ Failed: 10 logs (critical errors)
⚠ Warnings: 5 logs

Common issues:
- missing_duration: 15 logs
- missing_coverage: 8 logs

✅ COMPLETED: Log Director Skill
Operation: batch-validate (success)
Total: 100 logs | Passed: 85 | Failed: 10
Duration: 12.5s | Workers: 5 | Throughput: 8 logs/sec
───────────────────────────────────────
Next: Review failed logs, or use batch-archive to archive completed logs
```
</OUTPUTS>
<DOCUMENTATION>
Write to execution log:
- Operation: batch operation type
- Total logs processed: {count}
- Results summary: passed/failed/skipped
- Duration: seconds
- Parallel workers: {count}
- Timestamp: ISO 8601
</DOCUMENTATION>
<ERROR_HANDLING>
**No logs match criteria:**
```
ℹ️ INFO: No logs match batch criteria
Operation: batch-validate
Filters: log_type=test, status=failed
Total logs of type 'test': 45 (all passed!)
Suggestion: Check filters or celebrate success
```
**Partial failure (fail-fast disabled):**
```
⚠️ BATCH PARTIAL: batch-validate
Completed: 100/100 logs
Failed: 10 logs (continuing as fail_fast=false)

Failures:

Status: partial success
Suggestion: Fix individual logs and re-validate
```
**Worker pool error:**
```
❌ ERROR: Worker pool initialization failed
Requested workers: 10
System limit: 5
Action: Reduced to 5 workers and continuing
```
**Dry-run safety (cleanup):**
```
🛡️ SAFETY: Dry-run mode enabled
Operation: batch-cleanup
Would delete: 115 logs (45 MB)
Protected logs: 5

To execute: Run with --dry-run=false --confirm
```
</ERROR_HANDLING>
## Scripts
This skill uses three supporting scripts:
1. **`scripts/init-worker-pool.sh {worker_count}`**
- Initializes parallel worker processes
- Sets up job queue and result aggregation
- Returns worker pool ID
2. **`scripts/parallel-execute.sh {worker_pool_id} {jobs_json}`**
- Distributes jobs across workers
- Executes in parallel with progress tracking
- Uses flock for concurrent result writing
- Returns execution summary
3. **`scripts/aggregate-results.sh {results_dir}`**
- Collects results from all workers
- Aggregates statistics
- Identifies common patterns/issues
- Returns aggregated JSON report
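The `flock`-protected result writing mentioned for `parallel-execute.sh` can be sketched as follows. The message format and file handling are illustrative; only the locking pattern itself is the point.

```shell
#!/usr/bin/env bash
# Sketch of the flock pattern: concurrent workers take an exclusive lock
# before appending to a shared results file, so writes never interleave.
set -euo pipefail

RESULTS=$(mktemp)

append_result() {
  # flock FILE COMMAND: hold an exclusive lock on FILE while COMMAND runs.
  flock "$RESULTS" bash -c 'echo "$0" >> "$1"' "$1" "$RESULTS"
}

for i in 1 2 3 4; do
  append_result "job-$i done" &
done
wait

sort "$RESULTS"
```

Without the lock, simultaneous appends from multiple workers could produce torn or interleaved lines in the shared results file; with it, each line lands whole.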