Multi-pass exhaustive investigation of a topic with entity scoring, source triangulation, and explicit gap analysis. Use when web-research Deep tier isn't sufficient — typically for unfamiliar domains, contested topics, or high-stakes decisions requiring the strongest possible evidence base.
`npx claudepluginhub hpsgd/turtlestack --plugin analyst`
Conduct a multi-pass deep investigation into $ARGUMENTS.
This skill goes further than /analyst:web-research Deep tier. It runs multiple structured passes, scores entities by confidence, triangulates claims across source types, and produces an explicit gap analysis. Use it when you genuinely need the most thorough public-data investigation possible — not as a default.
Before searching for facts, map the domain itself.
Spend time here. Domain mapping produces a search strategy. Without it, you're searching blind.
Systematic sweep of authoritative and primary sources identified in Pass 1.
For each source:
Do not rely on summaries of primary sources. Fetch and read the source.
News coverage, analyst commentary, expert opinion, academic literature.
Search each source type independently so that no single type dominates the evidence base.
For each significant entity (person, organisation, claim, dataset) encountered across Passes 2 and 3, assign a confidence score:
| Score | Criteria |
|---|---|
| High | Confirmed by 2+ independent primary sources; no contradicting evidence |
| Medium | Confirmed by 1 primary source or 2+ secondary sources; minor inconsistencies |
| Low | Single secondary source only; or primary source with known methodology limitations |
| Contested | Multiple credible sources actively disagree |
| Unverified | Asserted but no independent confirmation found |
This scoring step is the key difference from a standard deep research pass. Surface contested and unverified findings explicitly — they're often more important than the high-confidence ones.
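The rubric above can be sketched as a small scoring function. This is a minimal illustration, assuming the investigator has tallied independent primary and secondary confirmations and set simple boolean flags; the name `score_entity` and its parameters are hypothetical, not part of the skill:

```python
def score_entity(primary: int, secondary: int, contradicted: bool = False,
                 contested: bool = False, methodology_limited: bool = False) -> str:
    """Map evidence counts to a confidence score per the rubric."""
    if contested:
        return "Contested"      # multiple credible sources actively disagree
    if primary >= 2 and not contradicted:
        return "High"           # 2+ independent primary sources, nothing contradicting
    if primary == 1 and methodology_limited:
        return "Low"            # primary source with known methodology limitations
    if primary == 1 or secondary >= 2:
        return "Medium"         # 1 primary source or 2+ secondary sources
    if secondary == 1:
        return "Low"            # single secondary source only
    return "Unverified"         # asserted but no independent confirmation found
```

Edge cases the rubric doesn't define (for example, two primary sources plus a contradiction) would need a judgment call; the sketch simply falls through to the next matching tier.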
For every source cited:
A cited source that can't be verified gets downgraded to Unverified regardless of how it was described.
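The downgrade rule is mechanical enough to sketch directly. Assuming two verification checks per source (the URL is live and the content actually supports the claim), a hypothetical helper might look like:

```python
def verified_score(described_score: str, url_live: bool,
                   content_matches_claim: bool) -> str:
    """Downgrade any citation that cannot be independently verified."""
    if not (url_live and content_matches_claim):
        return "Unverified"     # the described score is irrelevant if unverifiable
    return described_score
```

The point of the rule: a source's claimed authority never survives a failed verification check.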
What is genuinely unknown after exhausting public sources?
Gaps fall into four categories: not yet public, paywalled, primary research needed, or unknown.
Be specific about which category each gap falls into; "information not available" without categorisation is unhelpful.
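The categorisation requirement can be sketched as an enum, forcing every gap into exactly one bucket that matches a row of the gap-analysis table. The class and function names here are illustrative assumptions:

```python
from enum import Enum

class GapCategory(Enum):
    NOT_YET_PUBLIC = "Not yet public"        # data exists but has not been released
    PAYWALL = "Paywall"                      # public but behind access controls
    PRIMARY_RESEARCH = "Primary research needed"  # no one has collected it yet
    UNKNOWN = "Unknown"                      # cannot determine why it is missing

def describe_gap(what: str, category: GapCategory, access_route: str = "") -> str:
    """Format one row of the gap-analysis table."""
    return f"| {what} | {category.value} | {access_route or 'n/a'} |"
```

Using an enum rather than free text means a gap can never be reported as bare "information not available".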
## Deep research: [Topic]
**Date:** [today]
**Passes completed:** 6
**Sources reviewed:** [count]
### Domain map
[Authoritative sources, key entities, contested terrain, temporal scope]
### Findings
#### [Theme 1]
[Findings with entity scores and inline citations]
#### [Theme 2]
[Findings with entity scores and inline citations]
### Entity confidence summary
| Entity / Claim | Score | Evidence | Contradictions |
|---|---|---|---|
| [claim] | High / Medium / Low / Contested / Unverified | [sources] | [if any] |
### Contested findings
[Where credible sources actively disagree — present each position with its evidence]
### Source verification
| Source | URL live | Date confirmed | Venue/type |
|---|---|---|---|
| [source] | Yes / No | [date] | [peer-reviewed / preprint / news / etc.] |
### Gap analysis
| Gap | Category | Access route |
|---|---|---|
| [what's unknown] | Not yet public / Paywall / Primary research needed / Unknown | [if applicable] |
### Sources
1. [Title](URL) — [authority level] — [entity score contribution]