From fieldguides.

This skill should be used when researching best practices, evaluating technologies, comparing approaches, or when research, evaluation, or comparison is mentioned.

Install: `npx claudepluginhub outfitter-dev/outfitter --plugin fieldguides`

This skill uses the workspace's default tool permissions.
Systematic investigation → evidence-based analysis → authoritative recommendations.
Load the report-findings skill for synthesis.

<when_to_use>
NOT for: quick lookups, well-known patterns, time-critical debugging without investigation stage
</when_to_use>
Load the maintain-tasks skill for stage tracking. Stages advance only, never regress.
| Stage | Trigger | activeForm |
|---|---|---|
| Analyze Request | Session start | "Analyzing research request" |
| Discover Sources | Criteria defined | "Discovering sources" |
| Gather Information | Sources identified | "Gathering information" |
| Synthesize Findings | Information gathered | "Synthesizing findings" |
| Compile Report | Synthesis complete | "Compiling report" |
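The advance-only rule can be sketched as a guard over the stage list above (the `advance` helper is illustrative, not part of the maintain-tasks skill):

```python
# Stage names from the tracking table above, in order.
STAGES = [
    "Analyze Request",
    "Discover Sources",
    "Gather Information",
    "Synthesize Findings",
    "Compile Report",
]

def advance(current, target):
    """Allow only forward moves through the stage list; never regress."""
    if STAGES.index(target) <= STAGES.index(current):
        raise ValueError(f"cannot regress from {current!r} to {target!r}")
    return target
```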
Workflow: mark the current stage in_progress; when it completes, mark it completed and add the next stage as in_progress.

Five-stage systematic approach:
1. Question Stage — Define scope
2. Discovery Stage — Multi-source retrieval
| Use Case | Primary | Secondary | Tertiary |
|---|---|---|---|
| Official docs | context7 | octocode | firecrawl |
| Troubleshooting | octocode issues | firecrawl community | context7 guides |
| Code examples | octocode repos | firecrawl tutorials | context7 examples |
| Technology eval | Parallel all | Cross-reference | Validate |
3. Evaluation Stage — Analyze against criteria
| Criterion | Metrics |
|---|---|
| Performance | Benchmarks, latency, throughput, memory |
| Maintainability | Code complexity, docs quality, community activity |
| Security | CVEs, audits, compliance |
| Adoption | Downloads, production usage, industry patterns |
4. Comparison Stage — Systematic tradeoff analysis
For each option: Strengths → Weaknesses → Best fit → Deal breakers
5. Recommendation Stage — Clear guidance with rationale
Primary recommendation → Alternatives → Implementation steps → Limitations
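A rough sketch of the Evaluation and Comparison stages: per-criterion scores combine into a weighted matrix, then options are ranked. The weights, scores, and option names below are illustrative placeholders, not values prescribed by the skill:

```python
# Illustrative weights over the evaluation criteria above (sum to 1.0);
# real weights depend on the project's priorities.
CRITERIA_WEIGHTS = {
    "performance": 0.3,
    "maintainability": 0.3,
    "security": 0.2,
    "adoption": 0.2,
}

def weighted_score(scores):
    """Combine per-criterion scores (1-5 scale) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Hypothetical options with made-up scores, purely to show the mechanics.
matrix = {
    "Option A": {"performance": 4, "maintainability": 3, "security": 5, "adoption": 4},
    "Option B": {"performance": 5, "maintainability": 2, "security": 3, "adoption": 5},
}
ranked = sorted(matrix, key=lambda o: weighted_score(matrix[o]), reverse=True)
```

The ranking feeds the Recommendation stage; deal breakers (e.g., a failing security criterion) should still veto an option regardless of its total.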
Three MCP servers for multi-source research:
| Tool | Best For | Key Functions |
|---|---|---|
| context7 | Official docs, API refs | resolve-library-id, get-library-docs |
| octocode | Code examples, issues | packageSearch, githubSearchCode, githubSearchIssues |
| firecrawl | Tutorials, benchmarks | search, scrape, map |
Execution patterns:
See tool-selection.md for detailed usage.
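One way to sketch the parallel, multi-source pattern from the table above — the fetch functions are stand-ins for the actual context7/octocode/firecrawl MCP calls (get-library-docs, githubSearchCode, search, etc.), not real client code:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in fetchers; in practice each would invoke the corresponding
# MCP server and return its raw findings for the query.
def fetch_context7(query):
    return {"source": "context7", "query": query}

def fetch_octocode(query):
    return {"source": "octocode", "query": query}

def fetch_firecrawl(query):
    return {"source": "firecrawl", "query": query}

def gather_parallel(query):
    """Query all three sources concurrently; return findings keyed by source."""
    fetchers = [fetch_context7, fetch_octocode, fetch_firecrawl]
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        results = pool.map(lambda fetch: fetch(query), fetchers)
    return {r["source"]: r for r in results}
```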
<discovery_patterns>
Common research workflows:
| Scenario | Approach |
|---|---|
| Library Installation | Package search → Official docs → Installation guide |
| Error Resolution | Parse error → Search issues → Official troubleshooting → Community solutions |
| API Exploration | Documentation ID → API reference → Real usage examples |
| Technology Comparison | Parallel all sources → Cross-reference → Build matrix → Recommend |
See discovery-patterns.md for detailed workflows.
</discovery_patterns>
<findings_format>
Two output modes:
Evaluation Mode (recommendations):
Finding: { assertion }
Source: { authoritative source with link }
Confidence: High/Medium/Low — { rationale }
Discovery Mode (gathering):
Found: { what was discovered }
Source: { where from with link }
Notes: { context or caveats }
</findings_format>
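The two record shapes above can be sketched as simple dataclasses; field names mirror the templates, while the types and defaults are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class EvaluationFinding:
    assertion: str       # the claim being made
    source: str          # authoritative source, with link
    confidence: str      # "High" | "Medium" | "Low"
    rationale: str       # why that confidence level

@dataclass
class DiscoveryFinding:
    found: str           # what was discovered
    source: str          # where from, with link
    notes: str = ""      # context or caveats
```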
<response_structure>
## Research Summary
Brief overview — what investigated, sources consulted.
## Options Discovered
1. **Option A** — description
2. **Option B** — description
## Comparison Matrix
| Criterion | Option A | Option B |
| --------- | -------- | -------- |
## Recommendation
### Primary: [Option Name]
**Rationale**: reasoning + evidence
**Confidence**: level + explanation
### Alternatives
When to choose differently.
## Implementation Guidance
Next steps, common pitfalls, validation.
## Sources
- Official, benchmarks, case studies, community
</response_structure>
Always include:
Always validate:
Proactively flag:
ALWAYS:
- Keep only one stage in_progress at a time

NEVER:
Research vs Report-Findings:
- This skill (research) covers the full investigation workflow using MCP tools.
- The report-findings skill covers synthesis, source assessment, and presentation.

Use research for technology evaluation, documentation discovery, and best practices research. Load report-findings during the synthesis stage for source authority assessment and confidence calibration.