From empire-research
Produces side-by-side matrices comparing tools, libraries, frameworks, vendors, or architectural choices across user-defined dimensions, and recommends a winner from a known set of options.
npx claudepluginhub marcoskichel/empire --plugin empire-research

This skill uses the workspace's default tool permissions.
<section id="purpose-vs-explore">
Use compare when the user already has a known, finite set of options to evaluate. Use /empire-research:explore instead when the solution space is open and options need to be enumerated first; in that case, suggest /empire-research:explore and confirm before proceeding here.
</section>
<section id="default-dimensions">
When the user does not specify dimensions, suggest the relevant subset of:
User can add, remove, or reweight dimensions before dispatch.
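As an illustration of the rule above (the dimension names and weights here are hypothetical, not part of the skill), the dimension set can be treated as a name-to-weight map that the user edits before dispatch:

```python
# Hypothetical default dimensions, each starting at weight 1.0.
dimensions = {
    "performance": 1.0,
    "documentation": 1.0,
    "community": 1.0,
}

# The user removes one dimension, adds another, and reweights a third.
dimensions.pop("community")
dimensions["license"] = 1.0
dimensions["performance"] = 2.0  # performance matters twice as much

print(sorted(dimensions))  # ['documentation', 'license', 'performance']
```

Only after the user confirms this edited set does dispatch begin.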
Dispatch one research agent per option via the Agent tool's subagent_type parameter.
Present the dispatch plan (each option's subagent_type value) + one-line rationale BEFORE dispatch.
Send a single message with multiple Agent tool calls (one per option).
Each agent receives:
Required per-option output format:
Option: <name>
Summary: <2-3 sentences>
Per-dimension scoring:
| Dimension | Score (1-5) | Evidence | Notes |
|---|---|---|---|
Pros:
- <point>
Cons:
- <point>
Key citations:
- <source>
Cap each agent response under 400 words
After all option-agents return, produce a side-by-side matrix.
Required output structure:
## Comparison Matrix
| Dimension | Option A | Option B | Option C |
|---------------|----------|----------|----------|
| <dimension 1> | 4 — note | 3 — note | 5 — note |
...
| TOTAL | 22 | 19 | 27 |
Apply user-supplied weights to dimensions if provided; otherwise use an unweighted sum.
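The TOTAL row above follows this rule; a minimal sketch, with illustrative scores and weights (the dimension names and values are made up):

```python
# Per-dimension scores (1-5) for each option, as reported by the agents.
scores = {
    "Option A": {"performance": 4, "documentation": 3, "license": 5},
    "Option B": {"performance": 3, "documentation": 5, "license": 4},
}

def total(option_scores, weights=None):
    """Weighted sum if the user supplied weights; plain sum otherwise."""
    if weights is None:
        return sum(option_scores.values())
    return sum(weights[dim] * score for dim, score in option_scores.items())

print(total(scores["Option A"]))  # 12 (unweighted)
print(total(scores["Option A"], {"performance": 2, "documentation": 1, "license": 1}))  # 16
```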
Highlight cells where one option dominates or underperforms by a clear margin
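One way to make the highlighting rule concrete (the margin threshold and matrix values here are assumptions, not defined by the skill): flag dimensions where the leading option beats the runner-up by at least a fixed margin.

```python
def standouts(matrix, margin=2):
    """Dimensions where the best option beats the runner-up by >= margin."""
    flagged = {}
    for dim, row in matrix.items():
        ranked = sorted(row.items(), key=lambda kv: kv[1], reverse=True)
        if ranked[0][1] - ranked[1][1] >= margin:
            flagged[dim] = ranked[0][0]  # clear winner on this dimension
    return flagged

matrix = {
    "performance": {"A": 5, "B": 2, "C": 3},
    "documentation": {"A": 3, "B": 4, "C": 3},
}
print(standouts(matrix))  # {'performance': 'A'}
```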
## Conflicts section — where agents cite contradicting evidence; state each side
## Recommendation — winner + rationale + when the runner-up would be the better pick (decision criteria)
## Caveats — known unknowns, sources of uncertainty, freshness of data
MUST cite sources from agent reports
MUST present report then stop; ask user whether to proceed with chosen option
Label every claim [Confirmed], [Estimated], or [Inferred]; never present [Inferred] data as confirmed fact.