Quality gate for Claude Code
Quality gate plugin for Claude Code. Blocks Claude from stopping (via Claude Code's Stop hook) until work passes review by an independent agent. The independent agent uses consensus review when possible (asking Codex and/or Gemini for second opinions).
Be aware:
For best results, mix in Codex and/or Gemini: just install their CLIs and authenticate them, and alice will pick them up automatically. Mixing multiple agents into the review process noticeably improves the steering.
What this plugin doesn't solve: knowing what you want, or how you communicate it. Be clear about what you want before you turn it on.
curl -fsSL https://evil-mind-evil-sword.github.io/releases/alice/install.sh | sh
This installs:
- jwz - Agent messaging
- tissue - Issue tracking
- jq - JSON parsing (if needed)

jwz and tissue are small Zig programs that let Claude Code store issues and messages and retain state (all in JSONL + SQLite, like beads). alice uses them to track the state required to enforce the reviewer pattern, and they also give Claude Code a place to store issues, research notes, and so on. The plugin assumes these binaries are available and contains explicit instructions for how the agent should use them. The goal of the installer is to make setup easy -- meaning you shouldn't have to think about these tools at all.
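After running the installer, a quick sanity check that the binaries landed on your PATH can save debugging later. This sketch uses only standard shell built-ins and makes no assumptions about jwz's or tissue's own flags:

```shell
# Verify the gate's dependencies are installed (standard shell only;
# does not invoke jwz/tissue themselves, whose flags are not assumed here).
for bin in jwz tissue jq; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: ok"
  else
    echo "$bin: missing"
  fi
done
```

If any line reports `missing`, re-run the install script or check that its install directory is on your PATH.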
#alice <your prompt>
alice uses the UserPromptSubmit hook to inspect your prompt and check whether you've invoked #alice. If so, it uses jwz to set a session message, which enables the Stop hook.
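To make the mechanism concrete, here is a minimal, hypothetical sketch of the decision the hook makes once it has the prompt text. In the real hook the prompt would first be extracted from the hook's JSON input (e.g. with jq), and "armed" would be a jwz call rather than an echo; the function name and outputs here are illustrative assumptions, not the plugin's actual code.

```shell
# Hypothetical sketch: does the prompt opt in to review?
# (The real hook reads JSON from stdin and records state via jwz.)
check_alice() {
  case "$1" in
    "#alice"*) echo "armed" ;;   # would set session state via jwz here
    *)         echo "idle"  ;;   # Stop hook stays disabled
  esac
}

check_alice "#alice refactor the parser"   # prints "armed"
check_alice "a prompt without the tag"     # prints "idle"
```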
Review is opt-in per-prompt. After alice approves, the gate resets automatically.
LLMs struggle to reliably evaluate their own outputs (Huang et al., 2023). A model asked to verify its work tends to confirm rather than critique. This creates a gap in agentic coding workflows—agents can exit believing they've completed a task when issues remain.
Research on multi-agent debate suggests a path forward: models produce more accurate outputs when they critique each other (Du et al., 2023; Liang et al., 2023).
alice applies this idea: rather than prompting agents to review themselves, it blocks exit until an independent reviewer (alice, a subagent) explicitly approves.
Agent works → tries to exit → Stop hook → alice reviewed? → block/allow
#alice at start of prompt enables review (using session state stored via jwz)
COMPLETE allows exit; ISSUES keeps the agent working
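The allow/block step above can be sketched as a Stop-hook decision. Assumptions in this sketch: the reviewer's verdict arrives as a plain string (in the real plugin it would be read from jwz session state), and a Stop hook allows exit by returning silently while it blocks by printing a `{"decision":"block",...}` JSON object.

```shell
# Hypothetical sketch of the Stop-hook gate. Assumption: silence allows
# exit; a {"decision":"block",...} object keeps the agent working.
stop_gate() {
  verdict="$1"   # would come from jwz session state in the real plugin
  if [ "$verdict" = "COMPLETE" ]; then
    return 0     # alice approved: let Claude stop
  fi
  printf '{"decision":"block","reason":"alice has not approved yet"}\n'
}

stop_gate "ISSUES"     # prints the block decision
```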