Parallel sub-agent research with persistent reports — decompose questions, dispatch agents, synthesize findings
Decomposes research questions into parallel sub-agent investigations and synthesizes findings into persistent, citation-backed reports.
Supporting file: references/report-structure.md

Decompose a research question into sub-questions, dispatch parallel agents, and synthesize findings into a persistent report with file:line citations.
Take the research question from $ARGUMENTS.
If $ARGUMENTS is empty, ask via AskUserQuestion:
- question: "What would you like to research?"
options:
- "How does X work?"
- "Where is X implemented?"
- "What patterns does X use?"
- "Compare approaches for X"
Break the research question into 2-4 sub-questions. Each sub-question maps to an agent type based on its nature:
| Question Type | Agent | Model | Purpose |
|---|---|---|---|
| WHERE is X? | locator | haiku | Find file paths grouped by purpose |
| HOW does X work? | analyzer | sonnet | Trace data flow, describe patterns |
| WHY is X designed this way? | analyzer | sonnet | Architectural decisions, trade-offs |
| WHAT PATTERNS does X follow? | pattern-finder | sonnet | Find similar implementations, variations |
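The routing table above can be applied mechanically. As an illustration only, a keyword heuristic might look like the sketch below; the exact keywords are an assumption, not part of the skill:

```python
# Hypothetical sketch: route a sub-question to an (agent, model) pair.
# The keyword heuristics here are illustrative assumptions.
def route_sub_question(question: str) -> tuple[str, str]:
    """Return (agent, model) for a sub-question."""
    q = question.lower()
    if q.startswith("where"):
        return ("locator", "haiku")       # WHERE is X?
    if "pattern" in q:
        return ("pattern-finder", "sonnet")  # WHAT PATTERNS does X follow?
    # HOW / WHY questions fall through to the analyzer.
    return ("analyzer", "sonnet")

# route_sub_question("Where are the authentication files?") -> ("locator", "haiku")
```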
Example question: "How does authentication work in this project?"
| # | Sub-question | Agent |
|---|---|---|
| 1 | Where are the authentication files? | locator |
| 2 | How does the login flow process requests? | analyzer |
| 3 | What patterns does the auth middleware follow? | pattern-finder |
Create a URL-safe slug from the research question:
how-does-authentication-work

Dispatch agents using the Task tool. Maximum 5 parallel agents.
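The slug step above can be sketched in Python. The four-word cap is an illustrative assumption chosen to match the example slug; the skill does not specify a truncation rule:

```python
import re

def make_slug(question: str, max_words: int = 4) -> str:
    """URL-safe slug: lowercase alphanumeric words joined by hyphens.

    max_words is an assumed cap so long questions yield short slugs.
    """
    words = re.findall(r"[a-z0-9]+", question.lower())
    return "-".join(words[:max_words])

# make_slug("How does authentication work in this project?")
#   -> "how-does-authentication-work"
```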
For each sub-question, spawn a Task with:
Prompt the Task with:
You are a locator agent. Find WHERE the following exists in the codebase.
Sub-question: {sub-question}
Project root: {project path}
Search using Glob for file patterns and Grep for content.
Use synonym expansion — search multiple term variations:
- "config": also search "settings", "options", "preferences", "conf"
- "error": also search "exception", "failure", "fault"
- "handler": also search "processor", "listener", "callback"
- "auth": also search "login", "session", "token", "credential"
Output format — group files by purpose:
## Files Found
### Implementation
- path/to/file.py — {brief description}
### Tests
- tests/test_file.py — {brief description}
### Configuration
- config/settings.yaml — {brief description}
### Types/Interfaces
- types/models.py — {brief description}
### Documentation
- docs/feature.md — {brief description}
Prompt the Task with:
You are an analyzer agent. Understand HOW the following works.
Sub-question: {sub-question}
Project root: {project path}
Known files: {locator output if available, otherwise "discover via search"}
Read relevant files. Trace data flow. Document with file:line references.
Every technical claim MUST include a file:line citation.
If a claim is uncertain, mark it with △ (caveat indicator).
Output format:
## Analysis: {sub-question}
### Summary
{2-3 sentence answer}
### Data Flow
1. Entry point: path/file.py:42 — {description}
2. Processing: path/other.py:15 — {description}
3. Output: path/result.py:88 — {description}
### Patterns Observed
- {pattern name}: file.py:10-25 — {how it works}
### Architectural Notes
- {observation with file:line citation}
Prompt the Task with:
You are a pattern-finder agent. Find existing code patterns to model after.
Sub-question: {sub-question}
Project root: {project path}
Known files: {locator output if available, otherwise "discover via search"}
Find multiple instances of the same pattern. Show each variation with context.
Use synonym expansion for search terms.
Max 20 lines per code snippet.
Output format:
## Patterns: {what was searched}
### Variation 1: {location}
File: path/to/file.py:15-30
{code snippet}
Context: {why this instance is relevant}
### Variation 2: {location}
File: path/to/other.py:42-55
{code snippet}
Context: {how this differs from variation 1}
### Recommendation
{which variation to follow and why}
After every 2 search/read operations within any agent, save intermediate findings.
If docs/plans/{slug}-findings.md exists, append findings there.

Gather outputs from all completed agents. Check for conflicts:
| Conflict Type | Resolution |
|---|---|
| Different files cited for same function | Verify which is current via git log |
| Contradictory behavior descriptions | Re-read the disputed file, report both interpretations |
| Missing coverage | Note in Gaps section |
Flag any conflicts explicitly in the report. Do not silently resolve them.
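The first conflict check above (different files cited for the same function) reduces to finding functions with more than one cited location. A minimal sketch, assuming findings have been collected into a name-to-files mapping (the data shape is an assumption):

```python
def find_citation_conflicts(citations: dict[str, set[str]]) -> list[str]:
    """citations maps a function name to the set of files agents cited for it.

    Returns names cited in more than one file, to be flagged in the report
    rather than silently resolved.
    """
    return sorted(name for name, files in citations.items() if len(files) > 1)

# find_citation_conflicts({"login": {"auth/login.py", "legacy/auth.py"},
#                          "logout": {"auth/logout.py"}}) -> ["login"]
```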
Read the template at ${CLAUDE_PLUGIN_ROOT}/templates/research-report.md.
Write the report to docs/research/{slug}-report.md with this structure:
---
question: {original research question}
date: {YYYY-MM-DD}
agents: [locator, analyzer, pattern-finder]
status: complete | partial | inconclusive
---
Status meanings:
| Status | Meaning |
|---|---|
| complete | All sub-questions answered with citations |
| partial | Some sub-questions unanswered or missing citations |
| inconclusive | Conflicting findings or insufficient evidence |
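The status field follows directly from the table above. A minimal sketch of the decision, assuming the synthesis step has counted answered sub-questions and detected conflicts:

```python
def report_status(total: int, answered_with_citations: int,
                  has_conflicts: bool) -> str:
    """Map synthesis results to the report's frontmatter status field."""
    if has_conflicts:
        return "inconclusive"  # conflicting findings or insufficient evidence
    if answered_with_citations == total:
        return "complete"      # all sub-questions answered with citations
    return "partial"           # some sub-questions unanswered or uncited
```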
2-3 sentence direct answer to the original question. No hedging. If uncertain, state what IS known and what IS NOT.
One section per sub-question. Each section must include:
Link related findings across sections. Example:
The auth middleware (see Section 2) uses the token format defined in Section 3.
What could not be determined and why. Include:
If docs/plans/{slug}-findings.md exists, fold its intermediate findings into the report.
| Rule | Rationale |
|---|---|
| file:line references mandatory | Every technical claim must be verifiable |
| Synonym expansion for searches | Single terms miss aliased concepts |
| Progressive disclosure | Overview first, details on demand |
| Max 5 parallel agents | Resource and context limits |
| 2-Action Rule | Prevent progress loss on long research |
| No fabricated paths | If not found, report that clearly |
| Primary Term | Also Search |
|---|---|
| config | settings, options, preferences, conf, configuration |
| error | exception, failure, fault, err |
| handler | processor, listener, callback, hook |
| auth | login, session, token, credential, authentication |
| test | spec, check, verify, assertion |
| model | schema, entity, record, type |
| route | endpoint, path, url, api |
| store | repository, dao, persistence, database, db |
| validate | check, verify, sanitize, parse |
| transform | convert, map, serialize, deserialize |
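The synonym table can be applied mechanically when building Grep queries. A minimal sketch (only a subset of the table is shown; extend the dict with the remaining rows):

```python
# Subset of the synonym table above, keyed by primary term.
SYNONYMS = {
    "config": ["settings", "options", "preferences", "conf", "configuration"],
    "error": ["exception", "failure", "fault", "err"],
    "auth": ["login", "session", "token", "credential", "authentication"],
}

def expand_terms(term: str) -> list[str]:
    """Return the primary term plus its synonyms for search queries."""
    return [term] + SYNONYMS.get(term, [])

# expand_terms("auth")
#   -> ["auth", "login", "session", "token", "credential", "authentication"]
```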
Present the completed report path and a brief summary:
## Research Complete
Report: docs/research/{slug}-report.md
Status: {status}
Agents: {count} dispatched, {count} successful
### Key Findings
- {finding 1 with file:line}
- {finding 2 with file:line}
- {finding 3 with file:line}
### Gaps
- {gap 1}