Use when any workflow needs a product lens — user impact analysis, prioritization, scope discipline, or stakeholder framing. Triggers on product analysis, PM review, user impact, business value.
Install with `npx claudepluginhub infraspecdev/tesseract --plugin shield`. This skill uses the workspace's default tool permissions.
Related skills:

- Implements a structured self-debugging workflow for AI agent failures: capture errors, diagnose patterns such as loops or context overflow, apply contained recoveries, and generate introspection reports.
- Compares coding agents such as Claude Code and Aider on custom YAML-defined codebase tasks using git worktrees, measuring pass rate, cost, time, and consistency.
- Designs and optimizes AI agent action spaces, tool definitions, observation formats, error recovery, and context for higher task completion rates.
Thin orchestrator that dispatches the product-manager-reviewer agent with the right mode and context. The domain knowledge lives entirely in the agent — this skill only determines the mode, gathers input, and dispatches.
This skill is called by:

- the research skill, to frame questions before research or to review findings after synthesis
- the plan-review skill, to add a PM perspective on plans

For code review, use shield:review instead.

| Context | Mode |
|---|---|
| Called from research skill before research agents run | research-framing |
| Called from research skill after synthesis | research-review |
| Called from plan-review skill | plan-review |
| Called directly or from any other workflow | standalone (default) |
The calling skill passes the mode explicitly. Standalone is the default when no mode is specified.
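The mode-selection table above can be sketched as a small lookup. This is only an illustration, not the skill's actual implementation; the caller identifiers (`research:pre`, `research:post`, `plan-review`) are hypothetical labels, since in practice the calling skill simply passes the mode string it wants:

```python
from typing import Optional

def resolve_mode(caller: Optional[str], explicit_mode: Optional[str]) -> str:
    """Pick the dispatch mode per the context table; standalone is the default.

    The caller keys below are hypothetical labels for illustration only.
    """
    if explicit_mode:
        return explicit_mode  # the calling skill passes the mode explicitly
    return {
        "research:pre": "research-framing",   # research skill, before research agents run
        "research:post": "research-review",   # research skill, after synthesis
        "plan-review": "plan-review",         # plan-review skill
    }.get(caller or "", "standalone")         # any other context defaults to standalone
```

The explicit mode always wins, mirroring the rule that the calling skill passes the mode and standalone only applies when nothing is specified.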
Dispatch with `subagent_type: "shield:product-manager-reviewer"` (standalone is the default mode). The agent definition is loaded automatically by the Agent tool; do NOT manually read the agent file. The `prompt` parameter for the Agent tool should contain:
You are a Technical Product Manager reviewing [input type].
Mode: [research-framing|research-review|plan-review|standalone]
Input:
{the input material}
[For evaluative modes only — read and include scoring.md:]
Scoring rubric:
{content of ${CLAUDE_PLUGIN_ROOT}/skills/general/plan-review/scoring.md}
Operate in the specified mode. Follow the Review Process and Output Format for that mode exactly.
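Assembling that prompt can be sketched as follows. This is a hedged illustration, not the skill's implementation: `build_pm_prompt` is a hypothetical helper, and the rubric path assumes the `${CLAUDE_PLUGIN_ROOT}` layout used by this plugin:

```python
import os

def build_pm_prompt(mode: str, input_material: str, plugin_root: str) -> str:
    """Assemble the Agent tool prompt from the template (hypothetical helper)."""
    input_type = {
        "research-framing": "a research topic",
        "research-review": "research findings",
        "plan-review": "an implementation plan",
        "standalone": "the following input",
    }[mode]
    prompt = (
        f"You are a Technical Product Manager reviewing {input_type}.\n"
        f"Mode: {mode}\n\n"
        f"Input:\n{input_material}\n"
    )
    if mode != "research-framing":
        # Evaluative modes include the scoring rubric; framing omits it.
        rubric_path = os.path.join(plugin_root, "skills/general/plan-review/scoring.md")
        with open(rubric_path) as f:
            prompt += f"\nScoring rubric:\n{f.read()}\n"
    prompt += (
        "\nOperate in the specified mode. Follow the Review Process "
        "and Output Format for that mode exactly."
    )
    return prompt
```

Note that the rubric file is read only for the three evaluative modes, so a research-framing dispatch never touches scoring.md.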
Notes:
- Include the scoring rubric only in evaluative modes (research-review, plan-review, standalone). Omit it for research-framing.
- [input type] should reflect the mode: "a research topic" (framing), "research findings" (research-review), "an implementation plan" (plan-review), or "the following input" (standalone).
- Resolve scoring.md relative to the plugin root: ${CLAUDE_PLUGIN_ROOT}/skills/general/plan-review/scoring.md.

| Mistake | Fix |
|---|---|
| Manually reading the agent file before dispatching | The Agent tool with subagent_type loads the agent definition automatically — don't double-inject |
| Skipping PM framing in research because "the topic is clear enough" | Always run framing — it shapes research agent prompts, not just the user's understanding |
| Including scoring rubric in research-framing mode | Framing produces a structured brief, not a graded scorecard — omit the rubric |
| Treating PM review output as optional | The Product Lens section is a required part of the research document |
| Using relative paths for scoring.md | Use ${CLAUDE_PLUGIN_ROOT}/skills/general/plan-review/scoring.md |