Research complex topics with sources, synthesis, review, and a final report.
Install:

```
npx claudepluginhub pgoell/pgoell-claude-tools --plugin research
```

This skill uses the workspace's default tool permissions.
Orchestrator-driven deep research. The skill plans the work itself, spawns parallel deep-research subagents per topic cluster, synthesizes findings under unbounded review, and produces a polished report.
No authentication required. Uses the host agent's web-search and fetch/browse capabilities, and writes to the local filesystem.
Use the host platform's equivalent tools without changing the workflow:
| Capability | Claude Code | Codex |
|---|---|---|
| Subagent dispatch | Agent tool | spawn_agent only when available and permitted. Otherwise run the phase inline. |
| Progress list | TaskCreate, TaskUpdate, TaskList | update_plan |
| Web research | WebSearch, WebFetch | web.run search and open calls, or the host browser/search tools |
| File reads | Read | shell reads such as sed, rg, or equivalent file read tools |
| Shell | Bash | shell command tool |
When a platform cannot dispatch subagents for the current request, keep the same artifact boundaries and run each phase inline in the orchestrator. Tell the user when this changes runtime or context cost.
If the request is already concrete (specific topic + scope + audience clear), skip ahead. Otherwise, ask ONE consolidated question covering scope (timeframe, geography), audience, and purpose. No multi-question ladder.
After clarification, write a brief.md to the output directory capturing the consolidated brief.
Output path is the only knob. Default: reports/{topic-slug}-{YYYY-MM-DD}/. Allow user override.
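The default path convention can be sketched as follows; the slugify rule (lowercase, hyphenate non-alphanumeric runs) is an assumption for illustration, not something the skill specifies:

```python
import datetime
import re

def default_output_path(topic: str) -> str:
    # Assumed slug rule: lowercase, collapse non-alphanumeric runs to "-", trim edges.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    date = datetime.date.today().isoformat()  # YYYY-MM-DD
    return f"reports/{slug}-{date}/"
```

A user override simply replaces the whole returned path.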
Create directories:

```shell
mkdir -p {OUTPUT_PATH}/research
```
YOU plan, not an agent. Decompose the brief into sub-questions and cluster them into coherent topics. As many clusters as the brief demands; each cluster should be deep enough to warrant a dedicated researcher. Cluster slugs must be unique and avoid colliding with reserved names (synthesis, gap-N-*, *-review-*).
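The slug constraints above can be checked mechanically; this is a minimal sketch (the regex encoding of the reserved patterns is an assumption):

```python
import re

# Reserved: "synthesis" exactly, "gap-N-*" prefixes, and anything containing "-review-".
RESERVED = re.compile(r"^(synthesis$|gap-\d+-|.*-review-)")

def validate_slugs(slugs):
    # Uniqueness across clusters, and no collision with reserved artifact names.
    assert len(set(slugs)) == len(slugs), "cluster slugs must be unique"
    for slug in slugs:
        assert not RESERVED.match(slug), f"reserved slug: {slug}"
```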
Write {OUTPUT_PATH}/plan.md:

```markdown
# Research Plan

## Brief
Topic: <topic>
Scope: <scope>
Audience: <audience>
Purpose: <purpose>

## Clusters

### Cluster: <cluster-slug-1>
Title: <human-readable title>
Sub-questions:
- SQ1: <sub-question>
  - Search angles: <angle1>, <angle2>, <angle3>
  - Source types: <academic, industry, etc.>
- SQ2: ...

### Cluster: <cluster-slug-2>
...
```
Use the progress list to seed: "Spawn researchers", "Synthesize", "Review synthesis", "Write report", "Review report". Mark tasks completed as the pipeline progresses.
Each researcher does iterative deep search on its cluster (rounds of breadth, depth, and adversarial search, then iterative deepening) until two consecutive rounds add no new evidence (saturation). The researcher produces one self-contained markdown file with notes and inline sources (no separate sources or notes files). For each cluster in plan.md:
- Dispatch a researcher subagent with researcher-prompt.md from this skill directory. Set OUTPUT_FILE to {OUTPUT_PATH}/research/{cluster-slug}.md and leave TARGETED_GAP empty.
- If a researcher returns a near-empty file, treat it as a failed dispatch: re-dispatch once; if the result is still thin, escalate to the user as a likely cluster-boundary problem.
- Dispatch synthesis with synthesis-prompt.md. Verify {OUTPUT_PATH}/research/synthesis.md exists.
- Dispatch review with synthesis-reviewer-prompt.md, writing to {OUTPUT_PATH}/research/synthesis-review-{N}.md. Re-dispatch once if the response is unparseable.
- If the verdict is PASS: continue to Step 8.
If verdict is ISSUES with critical issues: classify and proceed to Step 7.
Classify each critical issue:
- evidence-gap, coverage, source-quality → spawn a gap-fill researcher per issue (or per coherent group of related issues).
- logic, structure → batch into a single re-synthesis with all such issues as feedback.

For each gap-fill issue, dispatch with researcher-prompt.md. Inject: BRIEF, OUTPUT_PATH, RECIPES_PATH, CLUSTER_SLUG=, OUTPUT_FILE={OUTPUT_PATH}/research/gap-{N}-{issue-slug}.md, TARGETED_GAP=.

For batched logic/structure issues, re-dispatch synthesis with all such issues as feedback.
After all gap-fills and re-syntheses complete, return to Step 6 with iteration N+1.
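The routing rule above amounts to partitioning critical issues by category; a sketch (the dict shape of an issue is an assumption based on the reviewer verdict format):

```python
GAP_FILL_CATEGORIES = {"evidence-gap", "coverage", "source-quality"}
RESYNTHESIS_CATEGORIES = {"logic", "structure"}

def route_critical_issues(issues):
    # Gap-fill issues each get a researcher; logic/structure issues form one batch.
    gap_fill = [i for i in issues if i["category"] in GAP_FILL_CATEGORIES]
    resynthesis = [i for i in issues if i["category"] in RESYNTHESIS_CATEGORIES]
    return gap_fill, resynthesis
```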
Stall detection. After writing synthesis-review-{N}.md, compare its issue id set to synthesis-review-{N-1}.md. If identical, surface to user immediately.
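The comparison is a set equality over stable issue ids; a minimal sketch, assuming ids appear on "- id: <stable-id>" lines as in the reviewer verdict format:

```python
import re

def issue_ids(review_text: str) -> frozenset:
    # Collect the stable id from each "- id: ..." line.
    return frozenset(re.findall(r"^- id:\s*(\S+)", review_text, flags=re.M))

def stalled(current_review: str, previous_review: str) -> bool:
    # Identical id sets across consecutive reviews means no progress was made.
    return issue_ids(current_review) == issue_ids(previous_review)
```

Order of issues does not matter, since only the id sets are compared.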
Check-in. After synthesis-review iterations 3, 6, 9, ..., pause and surface:
continue, ship as-is, or intervene.

Update the progress list at every step so the user can inspect live status through the host platform.
- Dispatch the writer with writer-prompt.md. Verify {OUTPUT_PATH}/report.md exists.
- Dispatch review with writer-reviewer-prompt.md.
- If the verdict is PASS: continue to Step 11.
If ISSUES with critical issues: classify and proceed to Step 10.
Classify each critical issue:
- prose, flow, accuracy, format → batch into a single re-write with all such issues as feedback.
- content-gap-suspected → re-run the synthesis reviewer with this hypothesis (cross-loop).

For batched prose/flow/accuracy/format issues, re-dispatch the writer with all such issues as feedback.

For any content-gap-suspected issues, re-enter the synthesis loop at synthesis-review-{prior-N+1}.md (the synthesis loop counter resumes monotonically) and route issues as before: evidence-gap / coverage / source-quality route to gap-fill research; logic / structure route to re-synthesis. When the synthesis loop closes again, re-dispatch the writer with the original prose feedback (if any) plus a fresh writer review. Return to Step 9.

The report loop terminates the same way as the synthesis loop: stall detection on consecutive identical issue-id sets in report-review-{M}.md and report-review-{M-1}.md, a check-in every 3 iterations of writer-review, and the same options.
Surface output path + brief summary: total iterations of each loop, final artifact paths, any minor (non-blocking) issues from the final reviews.
```
{output-path}/
├── brief.md
├── plan.md
├── research/
│   ├── {cluster-slug}.md         (one per cluster, notes + inline sources)
│   ├── gap-{n}-{slug}.md         (gap-fill research)
│   ├── synthesis.md              (overwritten each iteration)
│   └── synthesis-review-{n}.md
├── report.md                     (overwritten each iteration)
└── report-review-{n}.md
```
Both reviewers return verdicts in this format:
```
VERDICT: PASS | ISSUES
ISSUES:
- id: <stable-id>
  severity: critical | minor
  category: <category>
  description: <one line>
  location: <pointer>
SUMMARY: <text>
```
A verdict of PASS with critical issues is malformed; re-dispatch the reviewer once with a format reminder. Stable ids enable stall detection: same id set in consecutive reviews means surface to user.
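A sketch of how the orchestrator might parse a verdict and catch the malformed PASS-with-critical case; the return shape is illustrative, not prescribed by the skill:

```python
import re

def parse_verdict(text: str):
    # None signals "re-dispatch the reviewer once": unparseable or malformed.
    verdict_match = re.search(r"^VERDICT:\s*(PASS|ISSUES)", text, flags=re.M)
    if not verdict_match:
        return None
    verdict = verdict_match.group(1)
    severities = re.findall(r"^\s*severity:\s*(critical|minor)", text, flags=re.M)
    has_critical = "critical" in severities
    if verdict == "PASS" and has_critical:
        return None  # malformed: PASS must not carry critical issues
    return {"verdict": verdict, "critical": has_critical}
```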
See report-template.md for report structure (use the Deep Mode section) and research-recipes.md for search patterns.