From empire-research
Enumerates 3-5 candidate approaches via a shallow research scan for open-ended problems, then enables user-selected deep dives, parallel research, and a consolidated comparison with a recommendation.
npx claudepluginhub marcoskichel/empire --plugin empire-research

This skill uses the workspace's default tool permissions.
<section id="purpose-vs-compare">
Use this skill to explore when the solution space is open: the user knows the problem, not the options. Use /empire-research:compare instead when the user already has a known set of options to evaluate head-to-head; in that case, suggest /empire-research:compare and confirm via AskUserQuestion with concrete options before proceeding here.
After the problem is confirmed, dispatch ONE research agent for broad enumeration
Agent names vary by environment; do not assume a specific agent exists
Inspect available subagents via the Agent tool's subagent_type parameter
Pick the available agent whose name/description best matches general research synthesis or broad information retrieval; if multiple candidates fit, prefer the most specific; if none fit, use the most general research-oriented agent available
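The selection heuristic above can be sketched as a simple keyword score over the available subagents. Everything here is illustrative: the keyword list, agent names, and descriptions are assumptions, not part of the plugin.

```python
# Hypothetical sketch of "pick the best-matching research agent".
# Keywords and agent entries are made up for illustration.
RESEARCH_KEYWORDS = {"research", "search", "synthesis", "retrieval", "web"}

def pick_research_agent(subagents: dict) -> str:
    """Pick the subagent whose name/description best matches general research.

    subagents maps subagent_type -> description. With no keyword hits,
    max() keeps the first entry, mirroring 'use the most general agent'.
    """
    def score(item):
        name, desc = item
        text = f"{name} {desc}".lower()
        return sum(kw in text for kw in RESEARCH_KEYWORDS)

    best_name, _ = max(subagents.items(), key=score)
    return best_name

agents = {
    "code-reviewer": "Reviews pull requests for style issues",
    "web-researcher": "Broad web research and information synthesis",
}
print(pick_research_agent(agents))  # -> web-researcher
```

In practice the real tie-breaking ("prefer the most specific") needs human judgment; a score alone cannot capture it.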
Shallow agent instructions:
Enumerate 3–5 candidate approaches only
One short paragraph per approach — no deep evaluation
Required output format:
1. <Approach Name>
<One-paragraph description — what it is, how it addresses the problem>
2. <Approach Name>
...
Cap response under 300 words
Present shallow-scan output to user verbatim before proceeding
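Before presenting, the shallow-scan reply can be sanity-checked against the required format by counting numbered approach headings and words. This is a hypothetical helper, not part of the skill:

```python
import re

def valid_shallow_scan(text: str) -> bool:
    """Check a shallow-scan reply: 3-5 numbered approaches, under 300 words."""
    approaches = re.findall(r"^\d+\.\s+\S", text, flags=re.M)
    return 3 <= len(approaches) <= 5 and len(text.split()) < 300

sample = (
    "1. Cache layer\nServe hot reads from Redis.\n"
    "2. Batch writes\nGroup updates to cut load.\n"
    "3. Read replicas\nScale reads horizontally.\n"
)
print(valid_shallow_scan(sample))  # -> True
```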
For each user-selected approach, choose a deep-dive agent via the Agent tool's subagent_type parameter; present each chosen subagent_type value plus a one-line rationale BEFORE dispatch.
Send a single message with multiple Agent tool calls (one per approach)
Each agent receives its assigned approach and the following required output format:
Approach: <name>
Summary: <2-3 sentences>
Pros:
- <point>
Cons:
- <point>
Key Evidence / Citations:
- <source or concrete reference>
Fit Rating: <High / Medium / Low> — <one sentence rationale>
Cap each agent response under 500 words
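Consolidation is easier if each deep report is first parsed into fields. A minimal sketch, assuming agents follow the template above exactly (real replies may need looser parsing; the sample report is invented):

```python
import re

def parse_deep_report(report: str) -> dict:
    """Extract the templated fields from a deep agent reply."""
    approach = re.search(r"^Approach:\s*(.+)$", report, re.M)
    fit = re.search(r"^Fit Rating:\s*(High|Medium|Low)", report, re.M)
    # Bullets between the section headers, relying on the fixed ordering.
    pros_block = report.split("Pros:")[-1].split("Cons:")[0]
    cons_block = report.split("Cons:")[-1].split("Key Evidence")[0]
    return {
        "approach": approach.group(1) if approach else "",
        "fit": fit.group(1) if fit else "Unknown",
        "pros": re.findall(r"^- (.+)$", pros_block, re.M),
        "cons": re.findall(r"^- (.+)$", cons_block, re.M),
    }

report = """Approach: Cache layer
Summary: Put Redis in front of the database.
Pros:
- Fast reads
Cons:
- Stale data
Key Evidence / Citations:
- Redis docs
Fit Rating: High - low effort"""

parsed = parse_deep_report(report)
print(parsed["approach"], parsed["fit"])  # -> Cache layer High
```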
Consolidated comparison table:

| Approach | Pros | Cons | Fit |
|---|---|---|---|
Conflicts section: where agents cite contradicting evidence, state each side.
Recommended approach: prioritized pick with rationale; cite supporting evidence.
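The consolidation step could assemble the comparison table mechanically from parsed results. A sketch assuming each result dict carries `approach`, `pros`, `cons`, and `fit` keys (hypothetical names, matching nothing in the plugin itself):

```python
def comparison_table(results: list) -> str:
    """Render the '| Approach | Pros | Cons | Fit |' summary table."""
    rows = ["| Approach | Pros | Cons | Fit |", "|---|---|---|---|"]
    for r in results:
        rows.append(
            "| {} | {} | {} | {} |".format(
                r["approach"], "; ".join(r["pros"]), "; ".join(r["cons"]), r["fit"]
            )
        )
    return "\n".join(rows)

table = comparison_table([
    {"approach": "Cache layer", "pros": ["fast"], "cons": ["stale data"], "fit": "High"},
])
print(table)
```

Conflicts and the recommendation still need prose; only the table lends itself to this kind of mechanical assembly.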