Use when analyzing a large corpus of text, code, or data that exceeds a single agent's effective context - orchestrates parallel Worker subagents, Critic review subagents, and a final Summarizer subagent with task tracking and failure recovery
Orchestrates parallel analysis of large datasets by distributing work across specialized subagents with redundancy and synthesis.
Bundled files: compute_layout.py, diagram-templates.md

Divide a corpus across Worker subagents, review with Critic subagents, synthesize with a Summarizer. Every stage writes to files; every subagent gets its own task.
Corpus → [Workers] → [Critics] → Summarizer → Report
Workers each analyze a slice of the corpus. Critics each review all Worker reports for a subset of segments, checking for gaps and inconsistencies. A single Summarizer reads all Critic reports and produces the final output.
If the user's intent is not already clear, ask two questions using AskUserQuestion:
Question 1: What to analyze. Ask what corpus to analyze and what the analysis goal is. Skip if obvious from context.
Question 2: Effort level. Present these options in this order (do not reorder to put recommended first):
| Level | SEGMENTS_PER | REVIEWS_PER | When to use |
|---|---|---|---|
| Some effort | 3 | 2 | Default for most analyses |
| A lot of effort | 3 | 3 | When thoroughness matters more than speed |
| Herculean effort | 2 | 3 | When you cannot afford to miss anything |
If you have enough context, recommend one option by appending "(Recommended)" to its label — but keep the options in the order shown above regardless.
Definitions:
- SEGMENTS_PER — how many corpus segments each Worker processes
- REVIEWS_PER — how many independent Critic reviews each segment receives

You need to determine how many segments, workers, and critics the analysis requires. This depends on corpus size and agent context capacity.
If you have file paths, estimate tokens:
Use the Bash tool to count characters: wc -c file1 file2 ... or find /path -type f -exec cat {} + | wc -c.
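A quick character-to-token conversion can be sketched in Python. This assumes the common ~4 characters-per-token heuristic — an approximation, not an exact tokenizer count:

```python
import os

def estimate_corpus_tokens(paths, chars_per_token=4):
    """Rough token estimate from file sizes, assuming ~4 chars per token."""
    total_chars = sum(os.path.getsize(p) for p in paths)
    return total_chars // chars_per_token
```

For example, an 800,000-character corpus estimates to roughly 200,000 tokens.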
For more precise estimates, run the compute_layout.py script bundled with this skill:
python3 /path/to/compute_layout.py --corpus-chars 800000 --segments-per 3 --reviews-per 2
python3 /path/to/compute_layout.py --corpus-files file1.txt file2.txt --segments-per 3 --reviews-per 2
python3 /path/to/compute_layout.py --corpus-tokens 200000 --segments-per 3 --reviews-per 2 --json
If you cannot run the script, compute by hand. Use the Bash tool with python3 -c "..." for all arithmetic — do not compute in your head.
Agent capacity:
AGENT_CONTEXT = 200,000 tokens
RESERVED = 35% (for prompt, reasoning, output)
AVAILABLE = AGENT_CONTEXT * 0.65 = 130,000 tokens
SEGMENT_BUDGET = AVAILABLE / SEGMENTS_PER
Segment count:
OVERLAP = 10% of SEGMENT_BUDGET
STRIDE = SEGMENT_BUDGET - OVERLAP
SEGMENT_COUNT = ceil((CORPUS_TOKENS - SEGMENT_BUDGET) / STRIDE) + 1
If CORPUS_TOKENS <= SEGMENT_BUDGET, then SEGMENT_COUNT = 1 (no fan-out needed).
Agent counts:
WORKER_COUNT = ceil(SEGMENT_COUNT / SEGMENTS_PER)
TOTAL_CRITIC_ASSIGNMENTS = SEGMENT_COUNT * REVIEWS_PER
CRITIC_COUNT = ceil(TOTAL_CRITIC_ASSIGNMENTS / SEGMENTS_PER)
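The formulas above can be sketched as a single function — a minimal mirror of the arithmetic, with the bundled compute_layout.py remaining the authoritative implementation:

```python
import math

AGENT_CONTEXT = 200_000
RESERVED = 0.35  # fraction kept free for prompt, reasoning, output

def compute_layout(corpus_tokens, segments_per, reviews_per):
    available = AGENT_CONTEXT * (1 - RESERVED)   # 130,000 tokens
    segment_budget = available / segments_per
    overlap = 0.10 * segment_budget
    stride = segment_budget - overlap
    if corpus_tokens <= segment_budget:
        segment_count = 1                        # no fan-out needed
    else:
        segment_count = math.ceil((corpus_tokens - segment_budget) / stride) + 1
    worker_count = math.ceil(segment_count / segments_per)
    critic_count = math.ceil(segment_count * reviews_per / segments_per)
    return segment_count, worker_count, critic_count
```

With a 200,000-token corpus at SEGMENTS_PER=3, REVIEWS_PER=2, this yields 6 segments, 2 Workers, and 4 Critics.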
Each Worker analyzes SEGMENTS_PER consecutive segments of raw corpus and writes an analysis report. Each segment is reviewed by REVIEWS_PER different Critics (redundancy for thoroughness). The critic count tells you how many critics to create, but you also need to decide which segments each critic reviews. Use round-robin assignment to distribute REVIEWS_PER critic passes evenly across segments:
For each segment S (1 to SEGMENT_COUNT):
Assign REVIEWS_PER different critics to review S
Rotate through critics: critic index = (S + review_pass * offset) % CRITIC_COUNT
In practice, use python3 -c "..." to generate the assignment table. Example for 6 segments, 4 critics, REVIEWS_PER=2:
C01 reviews: S01, S03, S05
C02 reviews: S02, S04, S06
C03 reviews: S01, S04, S06
C04 reviews: S02, S03, S05
Each segment appears in exactly 2 critics' lists. Each critic reads the Worker reports that cover its assigned segments. Include this assignment table in the orchestration plan so the mapping is explicit and verifiable.
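One way to generate such a table is sketched below. The exact rotation may differ from the example table above; what matters is that every segment gets REVIEWS_PER distinct critics and the load stays balanced:

```python
def critic_assignments(segment_count, critic_count, reviews_per):
    """Round-robin: each segment reviewed by reviews_per distinct critics."""
    assert critic_count >= reviews_per, "need at least REVIEWS_PER critics"
    spread = critic_count // reviews_per  # offset between a segment's critics
    table = {c: [] for c in range(1, critic_count + 1)}
    for s in range(segment_count):
        for r in range(reviews_per):
            c = (s + r * spread) % critic_count + 1
            table[c].append(s + 1)
    return table

# 6 segments, 4 critics, REVIEWS_PER=2: every critic reviews 3 segments,
# every segment appears in exactly 2 critics' lists.
for c, segs in critic_assignments(6, 4, 2).items():
    print(f"C{c:02d} reviews: " + ", ".join(f"S{s:02d}" for s in segs))
```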
If the user specified a working directory, use it. Otherwise, create one:
WORK_DIR=$(mktemp -d -t fanout-XXXXXX)
mkdir -p "$WORK_DIR/segments" "$WORK_DIR/workers" "$WORK_DIR/critics"
All paths in prompts and file references are absolute paths. Subagents cannot resolve relative paths reliably.
Enter plan mode. Write a plan document that includes:
- A Mermaid diagram of the agent topology, collapsing large ranges (e.g. W01-W10) into summary nodes. If the user requests Graphviz instead, use the DOT template from the same file.
- The segment-to-Worker assignment table (e.g. W01: S01-S03).
- The critic assignment table, showing that each segment is reviewed REVIEWS_PER times across all critics.

Do not include time estimates in the plan. Agent execution time is unpredictable and estimates are misleading.
Exit plan mode. Do not proceed until the user approves the plan.
Worker nodes should show their segment assignments: W01<br/>S01-S03. Critic nodes show their review scope. Cap visible nodes at ~15; collapse ranges for larger layouts. See diagram-templates.md for full Mermaid and Graphviz templates with styling.
Before launching any subagents, create ALL tasks upfront using TaskCreate:
- One task per Worker (W01, W02, ...)
- One task per Critic (C01, C02, ...)
- One task for the Summarizer

Then set up dependencies with TaskUpdate addBlockedBy:
- Each Critic task is blocked by the Worker tasks whose reports it reads
- The Summarizer task is blocked by all Critic tasks
This creates the full dependency graph before any work starts.
Mark Worker tasks as in_progress, then launch all Workers in parallel (one Task tool call per worker, all in the same message).
Each Worker gets a prompt structured like this:
You are {WORKER_NAME}, a corpus analysis worker.
## Your Assignment
Analyze segments {FIRST_SEG} through {LAST_SEG} of the corpus.
## Input
Read these files:
- {ABSOLUTE_PATH_TO_SEGMENT_FILE_1}
- {ABSOLUTE_PATH_TO_SEGMENT_FILE_2}
- ...
## Analysis Goal
{WHAT_THE_USER_WANTS_ANALYZED}
## Output Format
Write your report to: {ABSOLUTE_PATH_TO_WORK_DIR}/workers/{WORKER_NAME}.md
Structure your report as:
### Summary
2-3 sentence overview of findings for your segments.
### Detailed Findings
For each significant finding:
- **Finding**: one-line description
- **Location**: file/section where found
- **Evidence**: relevant quote or reference
- **Significance**: why this matters
### Segment Coverage
List each segment you analyzed and confirm you read it completely.
If any segment was too large to process fully, state which parts you skipped.
Adapt the analysis goal and output format to match what the user asked for. The template above is a starting point — be specific about what constitutes a "finding" for this particular analysis.
Verify each worker wrote its output file:
ls -la "$WORK_DIR/workers/"
Mark completed Worker tasks as completed. If any Worker failed or did not produce output, see Failure Recovery below.
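The verification step can be made mechanical. A minimal sketch (the `missing_reports` helper and its arguments are illustrative, not part of the skill's bundled files) that flags Workers whose report files are missing or empty:

```python
from pathlib import Path

def missing_reports(work_dir, agent_names, subdir="workers"):
    """Return agents whose report file is absent or empty."""
    missing = []
    for name in agent_names:
        report = Path(work_dir) / subdir / f"{name}.md"
        if not report.is_file() or report.stat().st_size == 0:
            missing.append(name)
    return missing
```

The same check works for Critics by passing `subdir="critics"`.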
Mark Critic tasks as in_progress, then launch all Critics in parallel.
You are {CRITIC_NAME}, reviewing Worker analyses for segments {SEG_RANGE}.
## Input
Read these Worker reports:
- {ABSOLUTE_PATH_TO_WORK_DIR}/workers/{WORKER_1}.md
- {ABSOLUTE_PATH_TO_WORK_DIR}/workers/{WORKER_2}.md
- ...
## Your Task
1. Read all Worker reports listed above.
2. Evaluate completeness: did the Workers cover their segments thoroughly?
3. Identify cross-segment patterns the Workers may have missed individually.
4. Flag contradictions between Worker reports.
5. Note any gaps — segments or topics that were under-analyzed.
## Output Format
Write your review to: {ABSOLUTE_PATH_TO_WORK_DIR}/critics/{CRITIC_NAME}.md
Structure your review as:
### Cross-Segment Patterns
Themes or findings that span multiple Workers' segments.
### Quality Assessment
For each Worker report you reviewed:
- Coverage: complete / partial / insufficient
- Accuracy: any factual issues or misinterpretations
### Gaps and Contradictions
Anything missing or conflicting across reports.
### Consolidated Key Findings
The most important findings from the segments you reviewed, after accounting for quality.
Verify output files and mark tasks completed. Handle failures per Failure Recovery.
Mark the Summarizer task as in_progress. Launch a single Summarizer subagent.
You are the Summarizer, producing the final analysis report.
## Input
Read all Critic reviews:
- {ABSOLUTE_PATH_TO_WORK_DIR}/critics/{CRITIC_1}.md
- {ABSOLUTE_PATH_TO_WORK_DIR}/critics/{CRITIC_2}.md
- ...
You may also reference Worker reports for detail:
- {ABSOLUTE_PATH_TO_WORK_DIR}/workers/*.md
## Your Task
Synthesize all Critic reviews into a single cohesive report. Prioritize the Critics' consolidated findings and cross-segment patterns.
## Output
Write the final report to: {ABSOLUTE_PATH_TO_WORK_DIR}/final-report.md
Structure:
### Executive Summary
3-5 sentences: what was analyzed, what was found, what matters most.
### Key Findings
The most significant findings, ordered by importance. Each finding should include supporting evidence from the Critic and Worker reports.
### Detailed Analysis
Full narrative organized by theme or topic.
### Methodology Notes
- Corpus size, segment count, agent layout
- Any gaps, failures, or limitations encountered during analysis
### Appendix
- List of all Worker and Critic reports with paths
The Summarizer should return a brief (2-3 sentence) summary of findings to you, and defer the full explanation to the written file. Return the file path to the user.
If a subagent fails because it hit its context limit:
- Split the failed agent's assignment in half and relaunch as two subagents: a failed Worker W03 becomes W03a, W03b; a failed Critic C01 becomes C01a, C01b.
- Create tasks for the replacement agents and mark the original task completed (it was replaced, not failed).

If a subagent completes but its output file does not exist, relaunch it with the same prompt and verify the file again.
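The split can be sketched as halving the segment list (segment names here are illustrative):

```python
def split_assignment(segments):
    """Split a failed agent's segment list into halves for the
    replacement 'a' and 'b' agents (e.g. W03 -> W03a, W03b)."""
    mid = (len(segments) + 1) // 2
    return segments[:mid], segments[mid:]

# split_assignment(["S07", "S08", "S09"]) -> (["S07", "S08"], ["S09"])
```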
If you have retried the same agent 3 times with similar failures, stop. Report what you expected, what happened, and what assumption might be wrong. Do not keep retrying.
| Parameter | Some Effort | A Lot of Effort | Herculean |
|---|---|---|---|
| SEGMENTS_PER | 3 | 3 | 2 |
| REVIEWS_PER | 2 | 3 | 3 |
| Agent context reserved | 35% | 35% | 35% |
| Segment overlap | 10% | 10% | 10% |
| Default agent type | sonnet-general-purpose | sonnet-general-purpose | sonnet-general-purpose |
| Agent Naming | Convention |
|---|---|
| Workers | W01, W02, ... W99 |
| Split workers | W03a, W03b |
| Critics | C01, C02, ... C99 |
| Split critics | C01a, C01b |
| Summarizer | (just "Summarizer") |
Every file path in every subagent prompt is an absolute path. Subagents do not inherit your working directory. If you write /tmp/fanout-abc123/workers/W01.md, that exact string appears in the prompt — never ./workers/W01.md or workers/W01.md.
Create a task (TaskCreate) for every item below. Mark each in_progress before starting it, completed after finishing. Do not skip items or batch them.
- Estimate corpus size (wc -c for character counts)
- Compute the layout (python3 -c "..." — never mental math)
- Generate the critic assignment table (python3 -c "...")
- Create the working directory with segments/, workers/, critics/ subdirectories
- Verify Worker output files (ls)
- Verify Critic output files (ls)