npx claudepluginhub harshilmathur/autodecision --plugin autodecision

# /autodecision
Run the full Auto-Decision Engine: 10 phases, 5-persona council, iterative refinement until convergence.
## ⚠️ MUST RUN INLINE — NOT AS A SUBAGENT

**This skill MUST execute in the main conversation, NOT inside a spawned agent.** If you are a subagent (spawned by another agent or orchestrator), you CANNOT run the full loop properly, because the Agent tool is unavailable in nested contexts. The 5-persona council requires spawning parallel subagents, which only works from the main conversation.
**Before proceeding, check:** Am I the main conversation or a subagent?

- If you are a subagent, read the "If Agent tool is unavailable" section of references/engine-protocol.md and ask the user before continuing.

Usage:

```
/autodecision "Should we cut pricing by 20%?"
/autodecision --template pricing "Should we cut pricing by 20%?"
```
Do NOT delegate the full loop to a single agent. Follow each phase step by step in THIS conversation. Spawn agents only for parallelizable tasks (personas, critique, adversary). If you spawn one agent to "run everything," that agent can't spawn grandchild agents and the personas collapse into sequential authoring — destroying the council's value.
The correct pattern:

- This skill's entry point is .claude/skills/autodecision/SKILL.md.
- Read references/engine-protocol.md for the full loop protocol.
- Read references/persona-council.md for the persona council. YOU spawn the personas directly.
- Write run outputs under ~/.autodecision/runs/{decision-slug}/.

Read these before you reach Phase 8. The writer has historically shipped structurally broken briefs by improvising here. Two hard rules:
Before writing ONE line of DECISION-BRIEF.md:

1. Read references/brief-schema.json.
2. Determine the mode (full for default /autodecision; medium for --iterations 1; quick for :quick; full for revise runs), then enumerate every header whose required_in contains that mode, plus all required subsections.

Missing a required header (## Methodology, ## Analysis Approach, ## Bottom Line, ## Summary, ## Adversary Findings) is a HARD_FAIL. Headers such as ## Context, ## Decision tilt, and ## The possibility map are NOT in the schema; do not invent them. The schema IS the readability contract.

The subsections **Action:**, **Confidence:**, **Confidence reasoning:**, **Depends on:**, **Monitor:**, **Pre-mortem:**, **Review trigger:** are required. Full protocol: references/phases/decide.md Step 4a.
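The enumeration in rule 2 can be sketched as follows. This is a minimal illustration: it assumes brief-schema.json stores each header with a required_in list, which may not match the real schema's layout, and `required_headers` is a hypothetical helper, not part of the skill.

```python
import json

def required_headers(schema_path: str, mode: str) -> list[str]:
    """Enumerate every header whose required_in contains the given mode.

    Assumed (illustrative) schema layout:
      {"headers": [{"header": "## Summary",
                    "required_in": ["full", "medium", "quick"]}, ...]}
    The real brief-schema.json may organize this differently.
    """
    with open(schema_path) as f:
        schema = json.load(f)
    # Keep only the headers whose required_in list mentions this mode.
    return [entry["header"]
            for entry in schema["headers"]
            if mode in entry["required_in"]]
```

The point of deriving the list up front is that the writer copies headers out of the schema rather than recalling them from memory.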
Phase 8.5 is:

```
python3 "{skill_dir}/scripts/validate-brief.py" \
  --run-dir "{run_dir}" \
  --schema "{skill_dir}/references/brief-schema.json" \
  --mode "{mode}"
```
If `python3 --version` fails, run the fallback self-check from phases/decide.md Step 5.5: re-read the brief, verify every header from the Step 4a checklist is present, and add the structural-self-check footer.

**Forbidden:** writing an inline Python script in a Bash heredoc that checks for headers you just authored and declaring "N/N passed." That is self-certification, not validation. A writer that invents its own headers and then writes a validator checking for those same invented headers will always pass. The validator exists precisely to catch the case where the writer drifted from the schema. If your inline script is checking for ## Context or Decision tilt or possibility map (none of which are in the schema), you built a validator for your own mistake.
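By contrast, a legitimate structural check takes its required-header list from the schema, never from the brief being graded. A minimal sketch, where `audit_headers` is a hypothetical illustration rather than the real validate-brief.py logic:

```python
def audit_headers(brief_text: str, required: list[str]) -> list[str]:
    """Return the required headers absent from the brief.

    `required` must come from brief-schema.json (the source of truth),
    never from headers scraped out of the brief itself; deriving it from
    the brief would be self-certification.
    """
    # Headers are assumed to sit on their own lines, e.g. "## Summary".
    present = {line.strip() for line in brief_text.splitlines()}
    return [h for h in required if h not in present]
```

A non-empty return value is exactly the drift the validator exists to catch: headers the schema demands but the writer never emitted.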
Full protocol: references/phases/validate-brief.md.
Read:

- references/effects-chain-spec.md before Phase 3 for JSON schemas
- references/persona-council.md before spawning subagents
- references/validation.md for output validation rules
- references/output-format.md before Phase 8
- references/brief-schema.json before Phase 8; it is the canonical structure