From galeharness-cli
Review requirements or plan documents using parallel persona agents that surface role-specific issues. Use when a requirements document or plan document exists and the user wants to improve it.
`npx claudepluginhub wangrenzhu-ola/galeharnesscodingcli --plugin galeharness-cli`

This skill uses the workspace's default tool permissions.
Review requirements or plan documents through multi-persona analysis. Dispatches specialized reviewer agents in parallel, auto-applies `safe_auto` fixes, and routes remaining findings through a four-option interaction (per-finding walk-through, LFG, Append-to-Open-Questions, Report-only) for user decision.
Config:
At the start of execution, use your native file-read tool to read .compound-engineering/config.local.yaml from the repository root. If the file is missing in the current worktree, check the main repository root (the parent of .git/worktrees). If the file is missing or unreadable, do not block the workflow — proceed silently with default settings.
If the config file contains language: en, write findings in English.
If the file is missing, contains language: zh-CN, or has no language key, write findings in Chinese (default).
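The config lookup and language default above can be sketched as follows. This is a minimal illustration, not the skill's real implementation: the function name `resolve_language` and the hand-rolled key scan are assumptions (a real YAML parser would work equally well).

```python
from pathlib import Path

def resolve_language(repo_root: str) -> str:
    """Return 'en' only when config.local.yaml explicitly sets language: en;
    otherwise default to 'zh-CN' (missing file, unreadable file, no key)."""
    config = Path(repo_root) / ".compound-engineering" / "config.local.yaml"
    try:
        text = config.read_text(encoding="utf-8")
    except OSError:
        return "zh-CN"  # missing or unreadable: proceed silently with defaults
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "language":
            return "en" if value.strip() == "en" else "zh-CN"
    return "zh-CN"  # no language key: Chinese default
```

A worktree-aware version would first try the current root, then the parent of `.git/worktrees`, as described above.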
AskUserQuestion is a deferred tool — its schema is not available at session start. At the start of Interactive-mode work (before the routing question, per-finding walk-through questions, bulk-preview Proceed/Cancel, and the Phase 5 terminal question), call ToolSearch with the query select:AskUserQuestion to load the schema. Load it once, eagerly, at the top of the Interactive flow — do not wait for the first question site. On Codex and Gemini this preload is not required.

Fall back to a plain-text question only when ToolSearch returns no match, the tool call explicitly fails, or the runtime mode does not expose it (e.g., Codex edit modes where request_user_input is unavailable). A pending schema load is not a fallback trigger; call ToolSearch first per the pre-load rule. In genuine-fallback cases, present the options as a numbered list and wait for the user's reply — never silently skip the question. Rendering a question as narrative text because the tool feels inconvenient, because the model is in report-formatting mode, or because the instruction was buried in a long skill is a bug. A question that calls for a user decision must either fire the tool or fall back loudly.

Check the skill arguments for mode:headless. Arguments may contain a document path, mode:headless, or both. Tokens starting with mode: are flags, not file paths — strip them from the arguments and use the remaining token (if any) as the document path for Phase 1.
If mode:headless is present, set headless mode for the rest of the workflow.
Headless mode changes the interaction model, not the classification boundaries. document-review still applies the same judgment about which tier each finding belongs in. The only difference is how non-safe_auto findings are delivered:
- safe_auto fixes are applied silently (same as interactive)
- gated_auto, manual, and FYI findings are returned as structured text for the caller to handle — no AskUserQuestion prompts, no interactive routing

The caller receives findings with their original classifications intact and decides what to do with them.
Callers invoke headless mode by including mode:headless in the skill arguments, e.g.:
Skill("document-review", "mode:headless docs/plans/my-plan.md")
If mode:headless is not present, the skill runs in its default interactive mode with the routing question, walk-through, and bulk-preview behaviors documented in references/walkthrough.md and references/bulk-preview.md.
If a document path is provided: Read it, then proceed.
If no document is specified (interactive mode): Ask which document to review, or find the most recent in docs/brainstorms/ or docs/plans/ using a file-search/glob tool (e.g., Glob in Claude Code).
If no document is specified (headless mode): Output "Review failed: headless mode requires a document path. Re-invoke with: Skill("document-review", "mode:headless <document-path>")" without dispatching agents.
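The flag-stripping rule above can be sketched as a small helper. `parse_args` is a hypothetical name and the tuple return shape is an assumption for illustration:

```python
def parse_args(raw: str):
    """Split skill arguments into (headless, document_path_or_None).
    Tokens starting with 'mode:' are flags, never file paths."""
    tokens = raw.split()
    flags = [t for t in tokens if t.startswith("mode:")]
    rest = [t for t in tokens if not t.startswith("mode:")]
    headless = "mode:headless" in flags
    path = rest[0] if rest else None  # None triggers the ask-or-fail branch
    return headless, path
```

With `Skill("document-review", "mode:headless docs/plans/my-plan.md")`, this yields headless mode with `docs/plans/my-plan.md` as the document path; a bare `mode:headless` yields a `None` path, which in headless mode produces the failure message above.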
After reading, classify the document:
- Requirements document: lives in docs/brainstorms/, focuses on what to build and why
- Plan document: lives in docs/plans/, focuses on how to build it with implementation details

Analyze the document content to determine which conditional personas to activate. Check for these signals:
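A minimal sketch of the path-based side of this classification. `classify_document` is an illustrative name, and returning `None` when neither directory matches is an assumption (the skill also analyzes content, which would decide such cases):

```python
def classify_document(path: str):
    """Map the document's location to its type for {document_type}."""
    if "docs/brainstorms/" in path:
        return "requirements"  # what to build and why
    if "docs/plans/" in path:
        return "plan"  # how to build it, with implementation details
    return None  # assumption: fall through to content-based analysis
```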
product-lens -- activate when the document makes challengeable claims about what to build and why, or when the proposed work carries strategic weight beyond the immediate problem. The system's users may be end users, developers, operators, maintainers, or any other audience -- the criteria are domain-agnostic. Check for either leg:
Leg 1 — Premise claims: The document stakes a position on what to build or why that a knowledgeable stakeholder could reasonably challenge -- not merely describing a task or restating known requirements:
Leg 2 — Strategic weight: The proposed work could affect system trajectory, user perception, or competitive positioning, even if the premise is sound:
design-lens -- activate when the document contains:
security-lens -- activate when the document contains:
scope-guardian -- activate when the document contains:
adversarial -- activate when the document contains:
Tell the user which personas will review and why. For conditional personas, include the justification:
Reviewing with:
- ce-coherence-reviewer (always-on)
- ce-feasibility-reviewer (always-on)
- ce-scope-guardian-reviewer -- plan has 12 requirements across 3 priority levels
- ce-security-lens-reviewer -- plan adds API endpoints with auth flow
Always include:
- ce-coherence-reviewer
- ce-feasibility-reviewer

Add activated conditional personas:
- ce-product-lens-reviewer
- ce-design-lens-reviewer
- ce-security-lens-reviewer
- ce-scope-guardian-reviewer
- ce-adversarial-document-reviewer

Dispatch all agents in parallel using the platform's task/agent tool (e.g., Agent tool in Claude Code, spawn in Codex). Omit the mode parameter so the user's configured permission settings apply. Each agent receives the prompt built from the subagent template included below with these variables filled:
| Variable | Value |
|---|---|
| {persona_file} | Full content of the agent's markdown file |
| {schema} | Content of the findings schema included below |
| {document_type} | "requirements" or "plan" from Phase 1 classification |
| {document_path} | Path to the document |
| {document_content} | Full text of the document |
| {decision_primer} | Cumulative prior-round decisions in the current session, or an empty <prior-decisions> block on round 1. See "Decision primer" below. |
Pass each agent the full document — do not split into sections.
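Filling the template variables amounts to verbatim placeholder substitution. `build_prompt` is a hypothetical helper sketched under that assumption:

```python
def build_prompt(template: str, variables: dict) -> str:
    """Replace each {name} placeholder in the subagent template
    with its value, verbatim -- no splitting, no truncation."""
    out = template
    for name, value in variables.items():
        out = out.replace("{" + name + "}", value)
    return out
```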
On round 1 (no prior decisions), set {decision_primer} to:
<prior-decisions>
Round 1 — no prior decisions.
</prior-decisions>
On round 2+ (after one or more prior rounds in the current interactive session), accumulate prior-round decisions and render them as:
<prior-decisions>
Round 1 — applied (N entries):
- {section}: "{title}" ({reviewer}, {confidence})
Evidence: "{evidence_snippet}"
Round 1 — rejected (M entries):
- {section}: "{title}" — Skipped because {reason}
Evidence: "{evidence_snippet}"
- {section}: "{title}" — Deferred to Open Questions because {reason or "no reason provided"}
Evidence: "{evidence_snippet}"
- {section}: "{title}" — Acknowledged without applying because {reason or "no suggested_fix — user acknowledged"}
Evidence: "{evidence_snippet}"
Round 2 — applied (N entries):
...
</prior-decisions>
Each entry carries an Evidence: line because synthesis R29 (rejected-finding suppression) and R30 (fix-landed verification) both use an evidence-substring overlap check as part of their matching predicate. Without the evidence snippet in the primer, the orchestrator cannot compute the >50% overlap test and has to fall back to fingerprint-only matching, which either re-surfaces rejected findings or suppresses too aggressively. The {evidence_snippet} is the first evidence quote from the finding, truncated to the first ~120 characters (preserving whole words at the boundary) and with internal quotes escaped. If a finding has multiple evidence entries, use the first one; the rest live in the run artifact and are not needed for the overlap check.
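The snippet rule (first ~120 characters, whole words preserved, internal quotes escaped) can be sketched as follows; `evidence_snippet` is an illustrative name, and escaping after the cut is an implementation choice, not something the skill mandates:

```python
def evidence_snippet(evidence: str, limit: int = 120) -> str:
    """Truncate the first evidence quote at a word boundary near
    `limit`, then escape internal double quotes for the primer."""
    text = evidence
    if len(text) > limit:
        cut = text.rfind(" ", 0, limit + 1)  # last space at or before limit
        if cut <= 0:
            cut = limit  # no word boundary found: hard cut
        text = text[:cut]
    return text.replace('"', '\\"')
```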
Accumulate across all rounds in the current session. Skip, Defer, and Acknowledge actions all count as "rejected" for suppression purposes — each signals the user decided the finding wasn't worth actioning this round (Acknowledge is the no-fix-guard variant: the user saw a finding with no suggested_fix, chose not to defer or skip explicitly, and recorded acknowledgement instead; for round-to-round suppression that is semantically equivalent to Skip). Applied findings stay on the applied list so round-N+1 personas can verify fixes landed (see R30 in references/synthesis-and-presentation.md).
Cross-session persistence is out of scope. A new invocation of document-review on the same document starts with a fresh round 1 and no carried primer, even if prior sessions deferred findings into the document's Open Questions section.
Error handling: If an agent fails or times out, proceed with findings from agents that completed. Note the failed agent in the Coverage section. Do not block the entire review on a single agent failure.
Dispatch limit: Even at maximum (7 agents), use parallel dispatch. These are document reviewers with bounded scope reading a single document -- parallel is safe and fast.
After all dispatched agents return, read references/synthesis-and-presentation.md for the synthesis pipeline (validate, anchor-based gate, dedup, cross-persona agreement promotion, resolve contradictions, auto-promotion, route by three tiers with FYI subsection), safe_auto fix application, headless-envelope output, and the handoff to the routing question.
For the four-option routing question and per-finding walk-through (interactive mode), read references/walkthrough.md. For the bulk-action preview used by LFG, Append-to-Open-Questions, and walk-through LFG-the-rest, read references/bulk-preview.md. Do not load these files before agent dispatch completes.
@./references/subagent-template.md
@./references/findings-schema.json