Run a multi-campaign optimization — decompose a broad goal into focused campaigns, dispatch them in parallel via subagents, monitor progress, and synthesize results. Use when optimizing multiple aspects of a module or exploring a hypothesis from multiple angles.
From interlab. Install with:

```
npx claudepluginhub mistakeknot/interagency-marketplace --plugin interlab
```

This skill uses the workspace's default tool permissions.
Decompose a broad optimization goal into focused campaigns, dispatch them in parallel via subagents, monitor progress, and synthesize results.
Announce at start: "I'm using the autoresearch-multi skill to orchestrate a multi-campaign optimization."
Use this skill when a single /autoresearch loop is insufficient because the goal spans multiple dimensions. Examples:
Do NOT use this skill for simple, single-metric optimization — use /autoresearch directly instead.
The interlab MCP tools must be available:
- `plan_campaigns`, `dispatch_campaigns`, `status_campaigns`, `synthesize_campaigns`
- `init_experiment`, `run_experiment`, `log_experiment`

Verify with a quick mental check: can you see these tools in your tool list? If not, the interlab plugin is not loaded — stop and tell the user.
This planning step runs no /autoresearch loops yet. Read the codebase and identify optimization targets.
Ask the user (or infer from context):
Read the codebase to identify independent optimization dimensions:
Create interlab-multi.md in the working directory:

```markdown
# interlab-multi: <broad goal>

## Objective
<what we're optimizing across multiple campaigns and why>

## Campaigns
| # | Name | Metric | Direction | Status | Best |
|---|------|--------|-----------|--------|------|
| 1 | <name> | <metric> | lower/higher | planned | — |
| 2 | <name> | <metric> | lower/higher | planned | — |

## File Ownership
- **Campaign 1**: <files>
- **Campaign 2**: <files>
- **Shared (conflict zone)**: <files needing coordination>

## Global Constraints
<hard rules that apply across all campaigns>

## Progress Log
<updated as campaigns dispatch, complete, or produce insights>
```
Prepare a decomposition JSON and call plan_campaigns:

```json
{
  "goal": "<broad optimization goal>",
  "campaigns": [
    {
      "name": "<campaign-name>",
      "metric_name": "<primary metric>",
      "metric_unit": "<unit>",
      "direction": "lower_is_better | higher_is_better",
      "benchmark_command": "<command>",
      "files_in_scope": ["<file1>", "<file2>"],
      "constraints": ["<constraint1>"]
    }
  ]
}
```
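For concreteness, a hypothetical two-campaign decomposition might look like this (the campaign names, metrics, commands, and file paths are illustrative, not prescribed by plan_campaigns):

```json
{
  "goal": "Reduce CI cost for the service",
  "campaigns": [
    {
      "name": "build-time",
      "metric_name": "build_time_ms",
      "metric_unit": "ms",
      "direction": "lower_is_better",
      "benchmark_command": "./interlab-build-time.sh",
      "files_in_scope": ["Makefile", "scripts/build.sh"],
      "constraints": ["must not change build outputs"]
    },
    {
      "name": "test-flakiness",
      "metric_name": "flaky_failures",
      "metric_unit": "count",
      "direction": "lower_is_better",
      "benchmark_command": "./interlab-test-flakiness.sh",
      "files_in_scope": ["tests/"],
      "constraints": ["keep test coverage unchanged"]
    }
  ]
}
```

Note the two campaigns own disjoint file scopes, so they can run in parallel without coordination.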
Design campaigns so that:
If two campaigns need to modify the same file:

- Mark one campaign as `depends_on` the other, so they run sequentially.
- Document the resolution in interlab-multi.md under "File Ownership".
For each campaign, write (or verify) a benchmark script that outputs `METRIC name=value` lines. Name them distinctly:

```
interlab-<campaign-name>.sh
```

Each script must be independent — no shared state between campaign benchmarks.
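A minimal sketch of such a script, assuming a hypothetical campaign named build-time; the `sleep` stands in for the real workload under measurement:

```shell
#!/usr/bin/env bash
# interlab-build-time.sh — hypothetical benchmark for a "build-time" campaign.
set -euo pipefail

start=$(date +%s%N)
sleep 0.05          # stand-in for the real workload (e.g. the build command)
end=$(date +%s%N)

# Emit the metric in the METRIC name=value format the orchestrator parses.
echo "METRIC build_time_ms=$(( (end - start) / 1000000 ))"
```

Because each campaign's script is self-contained like this, benchmarks can run concurrently without interfering with each other.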
Call dispatch_campaigns to register all planned campaigns and mark them as ready for execution.
For each campaign in ready status, spawn a subagent with instructions to:

- Run the /autoresearch skill for its assigned campaign
- Write its own per-campaign interlab.md and interlab.jsonl

Each subagent operates in isolation — it runs a full /autoresearch loop for its assigned campaign.
Update interlab-multi.md: mark each dispatched campaign's status as `running`.

Periodically call status_campaigns to check progress across all campaigns:
After each status check, update interlab-multi.md:
When a campaign completes:

- Collect its artifacts (interlab.jsonl, interlab.md)
- If another campaign lists it in `depends_on`, dispatch the dependent

If one campaign discovers something relevant to another:

- Record the insight in the other campaign's interlab.ideas.md

Once all campaigns have completed (or been stopped), call synthesize_campaigns to aggregate results.
Update interlab-multi.md with:

```markdown
## Final Summary

### Overall Results
- **Campaigns**: <total> (<completed>/<stopped>/<crashed>)
- **Total experiments**: <sum across campaigns>

### Per-Campaign Results
| # | Name | Baseline | Best | Improvement | Experiments |
|---|------|----------|------|-------------|-------------|
| 1 | <name> | <value> | <value> | <delta> (<pct>%) | <count> |

### Cross-Campaign Insights
- <insight that emerged from comparing campaign results>

### Key Wins
- <top changes across all campaigns>

### Recommendations
- <what to do next, what wasn't explored, what needs human review>
```
For each campaign, archive results to campaigns/<name>/:

- Copy interlab.jsonl to campaigns/<name>/results.jsonl
- Write campaigns/<name>/learnings.md with validated insights

Update the campaigns/README.md index table with all campaign summary rows.
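The archive step can be sketched in shell as follows. The campaign name is hypothetical, and the interlab files are seeded here only so the sketch is self-contained; in a real run they already exist from the campaign's loop:

```shell
#!/usr/bin/env bash
# Hypothetical archive step for a campaign named "build-time".
set -euo pipefail

name="build-time"
echo '{"experiment": 1}' > interlab.jsonl   # stand-in campaign log
echo '# learnings'       > interlab.md      # stand-in campaign notes

mkdir -p "campaigns/$name"
cp interlab.jsonl "campaigns/$name/results.jsonl"
# learnings.md should be distilled to validated insights; here we just seed it.
cp interlab.md "campaigns/$name/learnings.md"
```

After archiving, the per-campaign working files can be safely removed, since the durable record now lives under campaigns/<name>/.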
Clean up working directory: remove per-campaign interlab.jsonl, interlab.md, and benchmark scripts.
Keep interlab-multi.md as the permanent multi-campaign record.
After synthesis completes, broadcast the campaign results so future sessions benefit.

For each campaign that improved its metric, call broadcast_message with:

- topic: "mutation"
- subject: "[multi:<parent_bead>] <campaign_name> improved <metric> by <delta>%"
- body: JSON with the best approach for each campaign (task_type, hypothesis, quality_signal, campaign_id)

This is best-effort — failure does not block synthesis completion.
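As a sketch, one such broadcast payload might look like this (all field values are illustrative; only the keys listed above are assumed):

```json
{
  "topic": "mutation",
  "subject": "[multi:bead-042] build-time improved build_time_ms by 18%",
  "body": {
    "task_type": "build optimization",
    "hypothesis": "caching dependency downloads dominates wall time",
    "quality_signal": "build_time_ms down 18% over 12 experiments",
    "campaign_id": "build-time"
  }
}
```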
Stop orchestration when ANY of these are true:
If interlab-multi.md already exists when this skill is invoked:

- Read interlab-multi.md for full context
- Call status_campaigns to check current state
- Resume campaigns marked `running` but whose subagents are no longer active

Do not re-plan. Do not re-dispatch completed campaigns.
These are non-negotiable:

- Each campaign runs its own full /autoresearch loop.
- Keep interlab-multi.md updated. This is the coordination record.
- Route cross-campaign insights through interlab.ideas.md — don't modify another campaign's code directly.

Anti-patterns to avoid:

- Overlapping file scopes without coordination
- Running experiments directly instead of delegating — let /autoresearch do the actual work
- Ignoring cross-campaign interactions
- Re-dispatching completed campaigns
- Forgetting to update interlab-multi.md