Use when comparing approaches, evaluating tools, building evidence-based decisions, or the user needs citations and industry backing. Triggers on /research, investigate, compare, evaluate.
From the shield plugin. Install: npx claudepluginhub infraspecdev/tesseract --plugin shield. This skill uses the workspace's default tool permissions.
Research a technical topic and produce a well-sourced document with direct quotes, industry references, and a clear recommendation.
Write the final document using the Write tool to exactly this path:
{output_dir}/{feature-name}-YYYYMMDD/research/{N}-{slug}/findings.md
Where:
- {output_dir} — read from the output_dir field in .shield.json (default: docs/shield)
- {feature-name}-YYYYMMDD — the feature folder (if there is no active feature context, ask the user: "No active feature context. What feature name should this go under?")
- {N} — run number (count existing folders in {feature}/research/ + 1)
- {slug} — slugified research topic (lowercase, hyphens only, max 50 chars)

Do NOT use any other path, filename, or directory. The Write tool creates directories automatically. After writing, update {output_dir}/manifest.json and regenerate {output_dir}/index.html.
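The path rules above can be sketched in Python. Reading .shield.json this way and counting existing folders for the run number are assumptions about one plausible implementation, not part of the skill contract:

```python
import json
import os
import re
from datetime import date

def build_findings_path(topic: str, feature_name: str) -> str:
    """Illustrative sketch: build the findings.md path from the rules above."""
    # Read output_dir from .shield.json, defaulting to docs/shield.
    try:
        with open(".shield.json") as f:
            output_dir = json.load(f).get("output_dir", "docs/shield")
    except FileNotFoundError:
        output_dir = "docs/shield"

    # Feature folder: {feature-name}-YYYYMMDD.
    feature = f"{feature_name}-{date.today():%Y%m%d}"

    # Slug: lowercase, hyphens only, max 50 chars.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")[:50]

    # Run number: existing research folders + 1.
    research_dir = os.path.join(output_dir, feature, "research")
    existing = len(os.listdir(research_dir)) if os.path.isdir(research_dir) else 0

    return os.path.join(research_dir, f"{existing + 1}-{slug}", "findings.md")
```

The slug and run-number rules are mechanical, so deriving the path in one place keeps every run consistent with the "no other path" rule.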
Works with shield:plan-docs and shield:review or shield:plan-review. At startup, call execute-steps to register these steps. Execute them in order, updating status after each.
| Step | Action | Condition | Mandatory |
|---|---|---|---|
| 1 | Clarify topic | skip if user provided context | No |
| 2 | PM framing | always | Yes |
| 3 | Parallel research agents (3) | always | Yes |
| 4 | Synthesize findings | always | Yes |
| 5 | PM review | always | Yes |
| 6 | Write document + update manifest | always | Yes |
Output: {output_dir}/{feature}/research/{N}-{slug}/findings.md — the Write tool creates directories automatically.

Skip if the user already provided enough context. Otherwise ask:
Dispatch the PM agent in research-framing mode with the research topic as input. Use the Agent tool with subagent_type: "shield:product-manager-reviewer".
The agent returns a structured brief with: stakeholders, decision(s) to make, success criteria, prioritized research questions, scope boundaries, and timeline constraints.
This output shapes the research agent prompts in the next step. If PM framing fails or times out, proceed with research without framing context — do not block the workflow.
Launch 3 parallel agents to maximize coverage, each with a distinct source type: official docs, industry voices, and community discussion.
Each agent's prompt includes the PM framing context (if available):
Research [topic] from [source type].
Context from product analysis:
- Stakeholders: [from PM framing]
- Key questions to answer: [from PM framing, prioritized]
- Scope: [from PM framing]
- Timeline: [from PM framing]
Return direct quotes with attribution, source URLs, key data points. Prioritize findings that address the key questions above.
If PM framing was skipped, use the original prompt: Research [topic] from [source type]. Return direct quotes with attribution, source URLs, key data points.
After synthesizing findings, dispatch the PM agent in research-review mode with the synthesized findings as input. Use the Agent tool with subagent_type: "shield:product-manager-reviewer".
The agent returns a hybrid output: narrative sections (User Impact Analysis, Scope Recommendation, Prioritization Framework, Stakeholder Summary) plus a graded scorecard (PM1-PM10).
Include this output as a ## Product Lens section in the final document, placed after ## Summary and before ## References.
Output: {output_dir}/{feature}/research/{N}-{slug}/findings.md
# [Decision Title]
**Status:** Proposed
**Date:** [date]
**Context:** [1-2 sentences on what prompted this]
## Decision
[Clear statement of the recommended approach]
## Why Not [Alternative]?
[Comparison table if applicable]
## What the Industry Recommends
### [Source Name] ([Role/Context])
> *"Direct quote"*
> — [Source with link]
[Repeat for 4-8 authoritative sources]
## How This Works in Practice
[Concrete examples relevant to the user's setup]
## Migration Path / Reversibility
[How to change course if the decision turns out wrong]
## Summary
[2-3 sentence wrap-up tying recommendation to evidence]
## Product Lens
[PM review output — narrative sections and scorecard from the PM agent's research-review mode]
## References
- [All source URLs as clickable links]
| Mistake | Fix |
|---|---|
| Skipping PM framing because "the topic is straightforward" | Always run PM framing — it prioritizes research questions and sets scope boundaries, preventing unfocused research |
| Paraphrasing sources instead of using direct quotes | Use the original wording with attribution — paraphrases lose credibility and make claims harder to verify |
| All 3 research agents returning the same sources | Each agent has a distinct source type (official docs, industry voices, community) — if they overlap, the prompts need sharper differentiation |
| Writing Product Lens as a separate document instead of a section | PM review output goes inside findings.md as ## Product Lens between Summary and References |
| Including recommendations without source backing | Every recommendation must trace to at least one cited source — unsupported opinions violate rule 1 |
| Omitting dissenting views when sources agree | If all sources agree, state that explicitly — the absence of disagreement is itself a finding worth noting |