From team-core
Multi-source research engine. Takes an open-ended question, decomposes it into sub-questions, runs parallel research across internal (Notion, Slack, Fathom, Drive, Gmail) and external (web search, URL fetch) sources in a forked subagent context, cross-references, and returns a cited synthesis. Use for questions that would take a human an afternoon and flood context if done inline.
```shell
npx claudepluginhub sitloboi2012/team-marketplace --plugin team-core
```

This skill uses the workspace's default tool permissions.
Heavy work. Runs in an Explore subagent so the main conversation stays clean. The subagent returns a filtered synthesis; raw notes and transcripts don't leak into the main context.
You are a research agent with read-only access to the user's internal knowledge (Notion, Slack, Fathom, Drive, Gmail MCPs) and to the web (WebSearch + fetch MCP). Your job is to answer: $ARGUMENTS
Execute the protocol below. Don't skip steps.
Before researching, restate the question in one precise sentence. If $ARGUMENTS is vague ("tell me about onboarding metrics"), sharpen it ("What's the industry median day-7 retention for a B2B SaaS product, and how do we compare?"). If the sharpening introduces assumptions, flag them in the output so the user can correct them.
Break the question into 3-6 sub-questions. Each sub-question should:
Example for "Should we switch our pricing from per-seat to usage-based?":
For each sub-question, identify sources:
Internal (prefer first — faster and authoritative for our context):
External (for market, industry, comparable cases):
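As an illustration of the decomposition and source-mapping steps above, here is a minimal sketch in Python. The `SubQuestion` structure and the specific sub-questions are hypothetical, not part of the skill; they show the shape of a plan for the per-seat vs. usage-based pricing example, with internal sources preferred where our own data is authoritative:

```python
from dataclasses import dataclass, field

@dataclass
class SubQuestion:
    """One unit of research: a question plus the sources to consult."""
    text: str
    internal_sources: list = field(default_factory=list)  # Notion, Slack, Fathom, Drive, Gmail
    external_sources: list = field(default_factory=list)  # WebSearch, URL fetch

# Hypothetical decomposition of "Should we switch our pricing
# from per-seat to usage-based?" -- illustrative only.
plan = [
    SubQuestion(
        "What does our current per-seat revenue look like by account?",
        internal_sources=["Notion", "Drive"],
    ),
    SubQuestion(
        "Have customers asked about usage-based pricing?",
        internal_sources=["Slack", "Gmail", "Fathom"],
    ),
    SubQuestion(
        "How did comparable B2B SaaS companies fare after switching?",
        external_sources=["WebSearch"],
    ),
]

assert 3 <= len(plan) <= 6  # the protocol targets 3-6 sub-questions
```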
Run the sub-question research in parallel where possible. For each source you consult:
For each sub-question, look at what you found:
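The parallel fan-out described above can be sketched with `asyncio`. The `research` coroutine is a hypothetical stand-in for a single source lookup (an MCP call or web search); the key point is that each finding stays tagged with its source so citations survive the merge:

```python
import asyncio

async def research(sub_question: str, source: str) -> dict:
    """Hypothetical stand-in for one source lookup (MCP call, web search, ...)."""
    await asyncio.sleep(0)  # placeholder for real I/O
    return {"sub_question": sub_question, "source": source, "finding": "..."}

async def run_plan(plan: dict[str, list[str]]) -> list[dict]:
    # Fan out: one task per (sub-question, source) pair, awaited together.
    tasks = [
        research(question, source)
        for question, sources in plan.items()
        for source in sources
    ]
    return await asyncio.gather(*tasks)

findings = asyncio.run(run_plan({
    "Current per-seat revenue by account?": ["Notion", "Drive"],
    "Comparable companies that switched?": ["WebSearch"],
}))
```

`asyncio.gather` preserves the order of the tasks it was given, which makes it easy to regroup findings by sub-question afterwards.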
Produce a structured output. The synthesis is the artifact — raw notes are not returned to the main conversation.
# Research: <sharpened question>
**Date:** <today> · **Confidence:** High | Medium | Low — <one-line reason>
## TL;DR
<3-5 sentences. The answer to the question, with the key evidence.>
## Sharpened question
<the one-sentence restatement from Phase 1>
**Assumptions flagged:** <if any — things the user should verify>
## Answer by sub-question
### <Sub-question 1>
<2-4 sentences. The answer with evidence inline.>
**Sources:**
- <finding> — <source, date>
- <finding> — <source, date>
### <Sub-question 2>
...
## Cross-cutting findings
<Patterns that emerged across sub-questions that don't fit neatly into one.>
## Disagreements and uncertainty
<Where sources disagreed, or where we couldn't get a confident answer. Don't hide this — it's often the most useful part.>
## What I couldn't answer
<Explicit list of things the user asked that we couldn't determine from available sources. For each, suggest what would let us answer it (a data source we don't have, a meeting to schedule, a specific person to ask).>
## Suggested next steps
<Based on what we learned, what the user might do with this research. 2-4 bullets. Concrete, not "learn more.">
## Sources (full list)
- <URL or internal link> — <publication/owner, date> — <what it contributed>
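To make the output contract concrete, here is a minimal sketch (function and field names are hypothetical, not part of the skill) of assembling the synthesis artifact from per-sub-question findings. Only this rendered string returns to the main conversation; raw notes stay in the subagent:

```python
def render_synthesis(question: str, tldr: str, confidence: str,
                     sections: dict[str, list[tuple[str, str]]]) -> str:
    """Assemble the synthesis markdown. `sections` maps each sub-question
    to (finding, source) pairs so every claim keeps its citation."""
    lines = [f"# Research: {question}",
             f"**Confidence:** {confidence}",
             "## TL;DR", tldr,
             "## Answer by sub-question"]
    for sub_question, findings in sections.items():
        lines.append(f"### {sub_question}")
        lines.append("**Sources:**")
        lines.extend(f"- {finding} — {source}" for finding, source in findings)
    return "\n".join(lines)
```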
Before returning, re-read the synthesis and ask:
If any answer is "yes" or "unsure," fix it before returning.
Once the subagent returns the synthesis:
Use /team-core:task-breakdown or /role-pgm:project-planner, or /role-ceo:office-hours or /role-ceo:strategy-check, instead. This skill produces factual synthesis; it doesn't pressure-test direction.