From consensus-loop
Guides writing evidence packages for consensus-loop watch file in code reviews. Covers claims, changed files, executable tests, results, risks, tag lifecycles, and rejection fixes.
npx claudepluginhub berrzebb/claude-plugins --plugin consensus-loop

This skill uses the workspace's default tool permissions.
When submitting code changes for consensus review, write a properly structured evidence package in the watch file.
Read ${CLAUDE_PLUGIN_ROOT}/config.json first:
- consensus.watch_file → submission path
- consensus.trigger_tag / agree_tag / pending_tag → actual tag values
- plugin.respond_file → auditor verdict file
- plugin.locale → locale

Follow the format defined in ${CLAUDE_PLUGIN_ROOT}/templates/references/${locale}/evidence-format.md.
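A config.json exposing those keys might look like the sketch below. Every value here is a hypothetical placeholder; always read the real values from ${CLAUDE_PLUGIN_ROOT}/config.json rather than hard-coding them.

```json
{
  "consensus": {
    "watch_file": ".consensus/watch.md",
    "trigger_tag": "[REVIEW]",
    "agree_tag": "[AGREE]",
    "pending_tag": "[PENDING]"
  },
  "plugin": {
    "respond_file": ".consensus/respond.md",
    "locale": "en"
  }
}
```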
Required sections: Claim, Changed Files, Test Command, Test Result, Residual Risk.
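An evidence package covering those sections might look like the sketch below. The file path, test command, and results are hypothetical examples; the authoritative layout is defined in evidence-format.md.

```
Claim: Fix off-by-one error in the pagination offset calculation.

Changed Files:
- src/pagination.ts

Test Command:
npx vitest run test/pagination.test.ts

Test Result:
3 passed, 0 failed

Residual Risk:
None known; cursor-based pagination paths were not touched.
```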
Key rules (tag lifecycle):

[trigger_tag] → auditor reviews → [agree_tag] or [pending_tag]
                                                     ↓
                                fix issues, re-submit with [trigger_tag]
When the auditor returns [pending_tag]: identify the rejection code in the verdict (e.g. test-gap, claim-drift, scope-mismatch), fix the flagged issues, and re-submit with [trigger_tag] to trigger a new audit cycle.

Full rejection code reference: ${CLAUDE_PLUGIN_ROOT}/templates/references/${locale}/rejection-codes.md
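The re-submission step can be sketched as a couple of shell commands. The watch-file path and tag value below are hypothetical; read the actual values from config.json.

```shell
# Hypothetical watch-file path and trigger tag; the real values
# come from consensus.watch_file and consensus.trigger_tag in config.json.
WATCH=.consensus/watch.md
mkdir -p "$(dirname "$WATCH")"

# After fixing the flagged issue, append the trigger tag
# to the watch file to start a new audit cycle.
printf '\n[REVIEW]\n' >> "$WATCH"
```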