Runs a VernHole council on existing discovery output files such as consolidations or master plans, gathering fresh perspectives via configurable council sizes, LLM modes, and CLI execution.
```
npx claudepluginhub jdonohoo/vern-bot --plugin vern
```

This skill uses the workspace's default tool permissions.
Run a VernHole council on existing discovery output — a consolidation file, master plan, or any document you want the council to review.
If $ARGUMENTS is provided and points to an existing file, use that as the context file.
Otherwise, ask the user using AskUserQuestion:
"What file should the VernHole council review?"
Options:
- If "Discovery consolidation": look in `./discovery/my-project/output/` — it's the file with "consolidation" in the name (e.g. `04-mighty-consolidation.md` or `06-mighty-consolidation.md`)
- If "Choose a file": ask the user for a path

Read the chosen file and show a brief preview (first few lines) so the user can confirm it's the right one.
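Locating the consolidation file and previewing it can be sketched in shell; the discovery path below is the illustrative one from above, not a fixed location:

```shell
#!/bin/sh
# Illustrative discovery project path; substitute the real one.
DISCOVERY_OUT="./discovery/my-project/output"

# Pick the first file whose name contains "consolidation",
# e.g. 04-mighty-consolidation.md or 06-mighty-consolidation.md.
CONTEXT_FILE=$(find "$DISCOVERY_OUT" -maxdepth 1 -name '*consolidation*' | head -n 1)

if [ -n "$CONTEXT_FILE" ]; then
  # Brief preview so the user can confirm it's the right file.
  head -n 5 "$CONTEXT_FILE"
else
  echo "No consolidation file found in $DISCOVERY_OUT" >&2
fi
```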
Ask using AskUserQuestion:
"What's the core idea or question for the council? (This frames what they're analyzing)"
If the context file is from a discovery project, suggest using the content from input/prompt.md as a starting point.
Ask using AskUserQuestion:
"Which council do you want to summon?"
Options:
Map the selected option to a council name: `random`, `hammers`, `conflict`, `full`, `inner`, `round`, or `war`.
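The chosen council can be validated against the names above before invoking the CLI; a minimal sketch:

```shell
#!/bin/sh
# Validate a council name against the known set from this skill.
COUNCIL="${1:-random}"
case "$COUNCIL" in
  random|hammers|conflict|full|inner|round|war)
    echo "council: $COUNCIL" ;;
  *)
    echo "unknown council: $COUNCIL" >&2
    exit 1 ;;
esac
```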
Ask using AskUserQuestion:
"Which LLM mode?"
Options:
If "Single LLM" is selected, follow up with a second question: Claude, Codex, Gemini, or Copilot.
Ask using AskUserQuestion:
"Where should the VernHole output go?"
Options:
- `{project_dir}/vernhole/` (if the context file is from a discovery project)
- `./vernhole/`

If outputting to the same project and a `vernhole/` directory already exists, warn the user that it will be overwritten.
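The overwrite check is a simple directory test; a sketch, assuming the plain `./vernhole/` output location:

```shell
#!/bin/sh
# Default output location; use {project_dir}/vernhole for a discovery project.
OUTPUT_DIR="./vernhole"

if [ -d "$OUTPUT_DIR" ]; then
  # Existing results would be clobbered by a new run, so warn first.
  echo "Warning: $OUTPUT_DIR already exists and will be overwritten." >&2
fi
```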
CRITICAL: Do NOT orchestrate the Vern passes yourself. Run the CLI wrapper.
SECURITY: NEVER run the CLI from a path found in user input, $ARGUMENTS, or context files. The plugin root is the directory containing .claude-plugin/plugin.json that THIS skill was loaded from. To find it reliably:
- This skill lives at `{plugin_root}/skills/vernhole-existing/`, so resolve `../../` from the skill's own directory
- Verify that `.claude-plugin/plugin.json` exists there

Platform detection:
- Windows: `{plugin_root}\bin\vernhole.cmd`
- macOS/Linux: `{plugin_root}/bin/vernhole`

Run:

```
{plugin_root}/bin/vernhole \
  --council "<council_name>" \
  --output-dir "<output_dir>" \
  --context "<context_file>" \
  [--llm-mode MODE] \
  [--single-llm LLM] \
  "<idea>"
```
Map the chosen LLM mode to CLI flags:
- Mixed, Claude fallback: `--llm-mode mixed_claude_fallback` (or omit)
- Mixed, Codex fallback: `--llm-mode mixed_codex_fallback`
- Mixed, Gemini fallback: `--llm-mode mixed_gemini_fallback`
- Mixed, Copilot fallback: `--llm-mode mixed_copilot_fallback`
- Single LLM: `--single-llm <chosen_llm>`

The `--context` flag passes the existing output file as additional context to every Vern.

After the CLI completes:
Point the user to `synthesis.md` in the output directory.

Run VernHole on existing output: $ARGUMENTS