This skill should be used when researching best practices, evaluating technologies, comparing approaches, or whenever research, evaluation, or comparison are mentioned.
Conducts systematic technology research using multi-source discovery to provide evidence-based recommendations with citations.
Install: npx claudepluginhub outfitter-dev/outfitter
This skill inherits all available tools. When active, it can use any tool Claude has access to.
References:
- references/discovery-patterns.md
- references/source-hierarchy.md
- references/tool-selection.md
Systematic investigation → evidence-based analysis → authoritative recommendations.
Related: the report-findings skill handles synthesis.
<when_to_use>
NOT for: quick lookups, well-known patterns, or time-critical debugging that cannot wait for an investigation stage.
</when_to_use>
<stages>
Load the maintain-tasks skill for stage tracking. Stages advance only, never regress.
| Stage | Trigger | activeForm |
|---|---|---|
| Analyze Request | Session start | "Analyzing research request" |
| Discover Sources | Criteria defined | "Discovering sources" |
| Gather Information | Sources identified | "Gathering information" |
| Synthesize Findings | Information gathered | "Synthesizing findings" |
| Compile Report | Synthesis complete | "Compiling report" |
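The advance-only rule above can be sketched as a tiny tracker; this class and its method names are illustrative, not part of the maintain-tasks skill.

```python
# Illustrative sketch of advance-only stage tracking. Stage names come
# from the table above; the tracker itself is invented for this example.
STAGES = [
    "Analyze Request",
    "Discover Sources",
    "Gather Information",
    "Synthesize Findings",
    "Compile Report",
]

class StageTracker:
    def __init__(self) -> None:
        self.index = 0  # exactly one stage in_progress at a time

    @property
    def current(self) -> str:
        return STAGES[self.index]

    def advance(self) -> str:
        # Stages only move forward; there is deliberately no way to regress.
        if self.index < len(STAGES) - 1:
            self.index += 1
        return self.current

tracker = StageTracker()
tracker.advance()
print(tracker.current)  # "Discover Sources"
```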
Workflow: keep exactly one stage in_progress at a time; when a stage finishes, mark it completed and add the next as in_progress.
</stages>
<methodology>
Five-stage systematic approach:
1. Question Stage — Define scope
2. Discovery Stage — Multi-source retrieval
| Use Case | Primary | Secondary | Tertiary |
|---|---|---|---|
| Official docs | context7 | octocode | firecrawl |
| Troubleshooting | octocode issues | firecrawl community | context7 guides |
| Code examples | octocode repos | firecrawl tutorials | context7 examples |
| Technology eval | Parallel all | Cross-reference | Validate |
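The primary → secondary → tertiary fallback above can be encoded as a simple lookup; the tool names come from the matrix, but this table and function are illustrative, not a real MCP API.

```python
# Sketch only: encode the tool-selection matrix as an ordered fallback
# list per use case. Nothing here invokes an actual MCP server.
TOOL_MATRIX = {
    "official_docs": ["context7", "octocode", "firecrawl"],
    "troubleshooting": ["octocode", "firecrawl", "context7"],
    "code_examples": ["octocode", "firecrawl", "context7"],
}

def pick_tools(use_case: str) -> list[str]:
    """Return tools in primary -> secondary -> tertiary order;
    technology evaluations query all three in parallel instead."""
    if use_case == "technology_eval":
        return ["context7", "octocode", "firecrawl"]  # run in parallel
    return TOOL_MATRIX[use_case]

print(pick_tools("troubleshooting")[0])  # "octocode"
```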
3. Evaluation Stage — Analyze against criteria
| Criterion | Metrics |
|---|---|
| Performance | Benchmarks, latency, throughput, memory |
| Maintainability | Code complexity, docs quality, community activity |
| Security | CVEs, audits, compliance |
| Adoption | Downloads, production usage, industry patterns |
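One way to fold the criteria above into a comparable number is a weighted average; the weights and scores below are made-up inputs a researcher would choose per project, not values this skill prescribes.

```python
# Hypothetical sketch: combine per-criterion scores (0-10) into a
# single weighted total for the comparison stage.
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total

scores = {"performance": 8, "maintainability": 6, "security": 9, "adoption": 7}
weights = {"performance": 0.4, "maintainability": 0.2, "security": 0.3, "adoption": 0.1}
print(round(weighted_score(scores, weights), 2))  # 7.8
```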
4. Comparison Stage — Systematic tradeoff analysis
For each option: Strengths → Weaknesses → Best fit → Deal breakers
5. Recommendation Stage — Clear guidance with rationale
Primary recommendation → Alternatives → Implementation steps → Limitations
</methodology>
<tools>
Three MCP servers for multi-source research:
| Tool | Best For | Key Functions |
|---|---|---|
| context7 | Official docs, API refs | resolve-library-id, get-library-docs |
| octocode | Code examples, issues | packageSearch, githubSearchCode, githubSearchIssues |
| firecrawl | Tutorials, benchmarks | search, scrape, map |
Execution patterns: see tool-selection.md for detailed usage.
</tools>
<discovery_patterns>
Common research workflows:
| Scenario | Approach |
|---|---|
| Library Installation | Package search → Official docs → Installation guide |
| Error Resolution | Parse error → Search issues → Official troubleshooting → Community solutions |
| API Exploration | Documentation ID → API reference → Real usage examples |
| Technology Comparison | Parallel all sources → Cross-reference → Build matrix → Recommend |
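The "Error Resolution" row above can be sketched as a small pipeline; the three step functions are stubs standing in for real MCP calls (octocode issue search, context7 troubleshooting docs, firecrawl community search), and only the ordering is taken from the table.

```python
# Illustrative pipeline for the Error Resolution workflow.
def search_issues(key: str) -> str:
    return f"issues:{key}"      # stub for octocode githubSearchIssues

def official_troubleshooting(key: str) -> str:
    return f"docs:{key}"        # stub for context7 get-library-docs

def community_solutions(key: str) -> str:
    return f"community:{key}"   # stub for firecrawl search

def resolve_error(error_text: str) -> list[str]:
    key = error_text.splitlines()[0]  # parse: keep the error headline
    return [
        search_issues(key),            # issues first
        official_troubleshooting(key), # then official docs
        community_solutions(key),      # community as last resort
    ]

print(resolve_error("TypeError: x is not a function\n  at main.js:3"))
```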
See discovery-patterns.md for detailed workflows.
</discovery_patterns>
<findings_format>
Two output modes:
Evaluation Mode (recommendations):
Finding: { assertion }
Source: { authoritative source with link }
Confidence: High/Medium/Low — { rationale }
Discovery Mode (gathering):
Found: { what was discovered }
Source: { where from with link }
Notes: { context or caveats }
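The two modes above map naturally onto two record shapes; the field names mirror the templates, but the dataclasses themselves are invented for illustration.

```python
# Hypothetical data shapes for the two output modes.
from dataclasses import dataclass

@dataclass
class Finding:        # Evaluation Mode
    assertion: str
    source: str       # authoritative source with link
    confidence: str   # "High" | "Medium" | "Low"
    rationale: str

@dataclass
class Discovery:      # Discovery Mode
    found: str
    source: str       # where from, with link
    notes: str        # context or caveats

f = Finding(
    assertion="Library X supports streaming responses",
    source="https://example.org/docs/streaming",
    confidence="High",
    rationale="stated in official documentation",
)
print(f.confidence)  # "High"
```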
</findings_format>
<response_structure>
## Research Summary
Brief overview — what was investigated and which sources were consulted.
## Options Discovered
1. **Option A** — description
2. **Option B** — description
## Comparison Matrix
| Criterion | Option A | Option B |
| --------- | -------- | -------- |
## Recommendation
### Primary: [Option Name]
**Rationale**: reasoning + evidence
**Confidence**: level + explanation
### Alternatives
When to choose differently.
## Implementation Guidance
Next steps, common pitfalls, validation.
## Sources
- Official, benchmarks, case studies, community
</response_structure>
<quality>
Always include:
Always validate:
Proactively flag:
ALWAYS:
- Only one stage in_progress at a time
NEVER:
</quality>
Research vs Report-Findings:
- This skill (research) covers the full investigation workflow using MCP tools.
- The report-findings skill covers synthesis, source assessment, and presentation.
Use research for technology evaluation, documentation discovery, and best-practices research. Load report-findings during the synthesis stage for source-authority assessment and confidence calibration.