the-crucible
Multi-agent code quality evaluation — orchestrates specialized review agents across naming, complexity, tests, security, and documentation dimensions. Invoke whenever a task involves code quality: reviewing implementations, evaluating pull requests, or assessing code before commits.
npx claudepluginhub xobotyi/cc-foundry --plugin the-crucible

This skill uses the workspace's default tool permissions.
Orchestrate 8 specialized teammate agents for comprehensive code quality evaluation. Each agent is a plugin subagent — spawn them as teammates and aggregate their findings.
All agents are read-only — they analyze code and report findings via SendMessage.
- the-crucible:namer — Naming: misleading, vague, type-encoded, scope-mismatched identifiers
- the-crucible:complexity-reviewer — Complexity: nesting, flag arguments, duplication, premature abstraction
- the-crucible:comment-reviewer — Comments: noise, staleness, refactoring signals, commented-out code
- the-crucible:test-reviewer — Tests: false confidence, implementation coupling, flakiness, coverage gaps
- the-crucible:error-handling-reviewer — Errors: silent swallowing, context loss, resource leaks, async error loss
- the-crucible:security-reviewer — Security: injection, access control, secrets, crypto, data exposure
- the-crucible:observability-reviewer — Observability: logging, metrics, tracing, cardinality, context propagation
- the-crucible:docs-auditor — Documentation: missing API docs, stale docs, contract gaps

Create a review team and 8 tasks — one per agent:
TeamCreate(name="code-review")
For each agent listed above:
TaskCreate(team_name="code-review", title="{agent_name}", description="Review {target}")
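Concretely, the setup phase might look like this. This is a sketch, not a verbatim transcript: the target path `src/auth/` is the illustrative example used later in this document, and the remaining six TaskCreate calls follow the same pattern.

```
TeamCreate(name="code-review")

TaskCreate(team_name="code-review", title="namer",               description="Review src/auth/")
TaskCreate(team_name="code-review", title="complexity-reviewer", description="Review src/auth/")
# ... one TaskCreate per remaining agent, 8 tasks in total
```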
Spawn all 8 agents in parallel as teammates using their subagent_type. The agent files define each reviewer's
expertise, patterns, and constraints — the prompt parameter only needs to specify the target and team context:
For each agent:
Agent(
subagent_type="{subagent_type}",
prompt="Review all code in {target}. Send your findings to the leader via SendMessage.",
team_name="code-review",
task_id="{task_id}"
)
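For instance, a single spawn call for the security reviewer might look like the following. The `task_id` value is hypothetical — use whatever identifier the corresponding TaskCreate call returned — and `src/auth/` is the illustrative target from the example below.

```
Agent(
  subagent_type="the-crucible:security-reviewer",
  prompt="Review all code in src/auth/. Send your findings to the leader via SendMessage.",
  team_name="code-review",
  task_id="task-6"  # hypothetical id returned by TaskCreate
)
```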
As teammates send messages, aggregate their findings into a single report for the user.
Address findings based on user direction — the reviewers are read-only, so any fixes are applied outside the review agents themselves.
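One possible shape for the aggregated report, with findings grouped by severity and attributed to the reporting agent. The findings shown here are hypothetical illustrations; the plugin does not mandate a specific format.

```
## Code Review: src/auth/ (8/8 agents reported)

Critical
- [security-reviewer] login query built by string concatenation (injection risk)

Warning
- [namer] vague identifier that does not describe its contents
- [test-reviewer] tests assert on mock internals, coupling them to the implementation
```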
Example interaction flow:

Run code quality evaluation on src/auth/
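The steps above, end to end — a sketch with tool calls abbreviated and agent responses hypothetical:

```
User:   Run code quality evaluation on src/auth/
Leader: TeamCreate(name="code-review")
Leader: TaskCreate(...) x 8        # one task per reviewer
Leader: Agent(...) x 8             # spawn all reviewers in parallel
Agents: SendMessage(...)           # findings stream back to the leader
Leader: aggregated report -> user
User:   directs which findings to address
```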