From seldon
Reviews plans/specs against workspace codebase, verifying claims via file inspection and evaluating repo fit, technical correctness, scope, evaluation, and safety/ops. Supports focus modes and external judges.
```shell
npx claudepluginhub degrammer/seldon --plugin seldon
```

This skill uses the workspace's default tool permissions.
You are an independent reviewer evaluating a plan written by another agent or human. Judge it on its merits — do not co-author, rewrite, or soften findings.
Parse $ARGUMENTS as:
- `--focus <mode>` (default: `balanced`) — one of `balanced`, `architecture`, `evaluation`, `product`, `operations`, `safety`

If no plan file is provided, ask the user for one.
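A minimal sketch of this argument parsing in shell; the `set --` line stands in for the real `$ARGUMENTS` and is purely illustrative:

```shell
# Illustrative parse of the arguments described above. The invocation
# supplied via `set --` is a hypothetical example, not real input.
focus="balanced"                       # default focus mode
files=()
set -- --focus architecture plan.md    # hypothetical $ARGUMENTS
while [ "$#" -gt 0 ]; do
  case "$1" in
    --focus) focus="$2"; shift 2 ;;    # consume the flag and its value
    *)       files+=("$1"); shift ;;   # everything else is a file path
  esac
done
echo "focus=$focus files=${files[*]}"
```

Any file arguments beyond the flag are collected in order, so the plan file is simply the first entry of `files`.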
Run this command to detect an external judge runner:
```shell
for dir in .claude/skills/seldon "$HOME/.claude/skills/seldon"; do
  if [ -x "$dir/judge-runner.sh" ]; then
    echo "$dir/judge-runner.sh"
    exit 0
  fi
done
echo "none"
```
If a runner is found: execute it with the parsed arguments and skip to Step 4.
```
<runner-path> [--focus <mode>] <plan-file> [supporting-files...]
```
The runner returns JSON matching seldon.schema.json. Parse it and go to Step 4.
If the runner fails, show the error output and explain the likely cause. Do not silently fall back to inline review — ask the user if they want to proceed with inline review instead.
If no runner is found: continue to Step 2 (inline review).
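The hand-off above can be sketched in shell. The `run_judge` helper name and the error wording are invented; the behavior (surface the failure, never silently fall back to inline review) follows the steps described:

```shell
# Hypothetical helper: execute the detected runner and surface failures
# instead of silently falling back to inline review.
run_judge() {
  local runner="$1"; shift
  local output
  if output="$("$runner" "$@" 2>&1)"; then
    printf '%s\n' "$output"     # JSON matching seldon.schema.json
  else
    echo "Judge runner failed: $output" >&2
    return 1                    # caller should ask before inline review
  fi
}
```

A caller would pass the path printed by the detection loop, e.g. `run_judge .claude/skills/seldon/judge-runner.sh --focus balanced plan.md`, and parse the JSON on success.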
| Dimension | What to check |
|---|---|
| Repo fit | Does the plan match this workspace's code, docs, dependencies, and current state? |
| Technical correctness | Are architecture, APIs, data flows, and dependencies coherent? |
| Scope & sequencing | Are prerequisites identified and rollout steps realistic? |
| Evaluation | Are metrics, tests, and observability adequate for the proposed change? |
| Safety & operations | Are privacy, security, failure modes, and rollback handled? |
Report in this exact order:

Verdict: `approve`, `approve_with_changes`, or `request_major_revision`.

Format each finding with: severity (critical/high/medium/low), title, why it matters, evidence from the workspace, and file references (`path:line` when possible).
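As an illustration only — the plan name, path, and line number below are invented — a single finding in that shape might read:

```
high — Migration step writes to a table no migration creates
Why it matters: the data writes in step 3 will fail at runtime.
Evidence: the workspace's migrations directory contains no migration for `user_prefs`.
File references: plans/feature-x.md:42
```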
Render the confidence score as a 20-segment bar using filled and empty characters. Color the label based on the score range.
Confidence ████████████████░░░░ 0.82
╰─── 20 segments ───╯
Ranges and labels:

- 0.90-1.00 — 🟢 High confidence
- 0.70-0.89 — 🟡 Moderate confidence
- 0.50-0.69 — 🟠 Low confidence
- 0.00-0.49 — 🔴 Very low confidence

Full example for 0.82 (16 filled, 4 empty):
🟡 Confidence ████████████████░░░░ 0.82 (moderate)
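One way to render the bar, as a shell sketch. The rounding rule (truncate score × 20) is an assumption, chosen because it reproduces the 0.82 example above:

```shell
# Sketch: build the 20-segment confidence bar for a score in [0, 1].
# Assumes truncation of score * 20 to pick the filled-segment count.
score=0.82
filled="$(awk -v s="$score" 'BEGIN { printf "%d", s * 20 }')"  # 0.82 -> 16
bar=""
for ((i = 0; i < 20; i++)); do
  if (( i < filled )); then bar+="█"; else bar+="░"; fi
done
echo "Confidence $bar $score"
```

The score-range label from the table above would then be prepended to this line.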
Reserve `blocking_findings` for issues that materially threaten the plan; don't inflate severity. A sound plan can receive an `approve` verdict; don't manufacture issues.