Fresh adversarial code review with binary PASS/FAIL verdicts, evidence citations, and anchoring bias prevention via fresh reviewer spawning.
Install with:

```
npx claudepluginhub a5c-ai/babysitter
```

This skill is limited to using the following tools:
Independent adversarial code review that checks spec compliance. Verdicts are binary PASS/FAIL (not subjective feedback), and every finding requires a file:line evidence citation.
| Aspect | Collaborative | Adversarial |
|---|---|---|
| Goal | Help improve code | Verify spec compliance |
| Verdict | Suggestions | Binary PASS/FAIL |
| Evidence | Optional | Required (file:line) |
| Reviewer | Can be reused | Must be fresh |
| Context | Shared | Independent |
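As a sketch of what the adversarial column implies in practice, a verdict record could be modeled as follows. The names and schema here are hypothetical illustrations, not something the skill prescribes; the point is that a FAIL cannot exist without file:line evidence attached.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A required file:line citation backing a finding."""
    file: str
    line: int

    def __str__(self) -> str:
        return f"{self.file}:{self.line}"

@dataclass
class Verdict:
    """Binary PASS/FAIL verdict; a FAIL must carry evidence."""
    passed: bool
    findings: list[tuple[str, Evidence]] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Adversarial rule: a FAIL without file:line evidence is invalid.
        if not self.passed and not self.findings:
            raise ValueError("FAIL verdict requires file:line evidence")

v = Verdict(passed=False,
            findings=[("spec requires retries on timeout", Evidence("client.py", 42))])
print(v.findings[0][1])  # client.py:42
```

The constructor check is what distinguishes this from collaborative review: suggestions without evidence are rejected at the type level rather than left as optional commentary.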
On re-review after a FAIL, a NEW reviewer instance spawns with no memory of the previous review. This prevents anchoring bias, where a reviewer fixates on previously identified issues instead of evaluating the code afresh.
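The fresh-spawning rule can be sketched as a simple loop. This is a minimal illustration under assumed names (`Reviewer`, `review_until_pass` are not part of the skill's API): the only structural requirement shown is that a new reviewer object is constructed on every round, so no round inherits another round's findings.

```python
import itertools

class Reviewer:
    """Independent reviewer: starts with no memory of prior reviews."""
    _ids = itertools.count(1)

    def __init__(self) -> None:
        self.id = next(Reviewer._ids)
        self.prior_findings: list[str] = []  # always empty on spawn

def review_until_pass(run_review, max_rounds: int = 3) -> bool:
    """Spawn a brand-new Reviewer for every round; never reuse one,
    so no round can anchor on a previous round's findings."""
    for _ in range(max_rounds):
        reviewer = Reviewer()      # fresh instance, independent context
        if run_review(reviewer):
            return True            # PASS: stop re-reviewing
    return False                   # still FAIL after max_rounds

# Each round sees a different reviewer id and an empty findings list.
seen = []
passed = review_until_pass(lambda r: (seen.append(r.id), len(seen) == 2)[1])
print(seen)  # two distinct ids, e.g. [1, 2]
```

Reusing the same `reviewer` across iterations would be the anchoring failure mode the skill is designed to rule out.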
Invoke as part of: methodologies/metaswarm/metaswarm-execution-loop (Phase 3)
Activates when the user asks about AI prompts, needs prompt templates, wants to search for prompts, or mentions prompts.chat. Use for discovering, retrieving, and improving prompts.
Search, retrieve, and install Agent Skills from the prompts.chat registry using MCP tools. Use when the user asks to find skills, browse skill catalogs, install a skill for Claude, or extend Claude's capabilities with reusable AI agent components.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.
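The hook events named above are wired up through a JSON hooks configuration. As a minimal illustrative sketch only (the script path is hypothetical, and field names should be checked against the current Claude Code hooks documentation), a PreToolUse hook that screens Bash commands might look like:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/validate-command.sh"
          }
        ]
      }
    ]
  }
}
```

Here `${CLAUDE_PLUGIN_ROOT}` resolves to the plugin's install directory, which lets the hook reference bundled scripts portably.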