Multi-agent code quality evaluation — orchestrates specialized review agents across naming, complexity, tests, security, and documentation dimensions. Invoke whenever a task involves code quality — reviewing implementations, evaluating pull requests, or assessing code before commits.
Install via:

```shell
npx claudepluginhub xobotyi/cc-foundry
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Orchestrate 8 specialized agents for comprehensive code quality evaluation. All agents report findings — you apply fixes.
All agents are recommendation-only — they read code and report, never modify.
Evaluation agents (run in parallel, background):
| Agent | Focus |
|---|---|
| namer | Naming issues: vague, misleading, type-focused identifiers |
| code-simplifier | Complexity: duplication, deep nesting, verbose patterns |
| comment-cleaner | Comment noise, redundancy, missing documentation comments |
| test-reviewer | Test quality, strategy, leanness, locality |
| error-handling-reviewer | Error flow: creation, propagation, handling, silent swallowing |
| security-reviewer | Secrets, injection, input validation, crypto, auth |
| observability-reviewer | Logging, metrics, tracing, context propagation |
Documentation agent (runs last, after fixes):
| Agent | Focus |
|---|---|
| documenter | Missing, outdated, or insufficient API documentation |
Never call the TaskOutput tool during this workflow.
How to wait for background agents: launch each agent with `run_in_background=true` and wait for its completion notification. Polling with `TaskOutput` destroys context efficiency for the entire workflow.
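If a manual check is ever needed, the wait can be sketched as a polling helper — a sketch only, assuming a POSIX shell and that each agent writes `{agent}.md` into the reports directory, as the launch prompts in this workflow specify:

```shell
# wait_for_reports DIR AGENT...
# Returns 0 once every AGENT's report file exists in DIR,
# polling every 5 seconds and giving up after 60 attempts.
# The interval and attempt cap are arbitrary choices.
wait_for_reports() {
  dir=$1; shift
  tries=0
  while :; do
    missing=0
    for a in "$@"; do
      [ -f "$dir/$a.md" ] || missing=1
    done
    [ "$missing" -eq 0 ] && return 0
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && return 1
    sleep 5
  done
}
```

Called after the launch step as `wait_for_reports "$reports_dir" namer code-simplifier ...` with the full agent list.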
Create reports directory inside the target:
```shell
mkdir -p {target}/.reviews/{timestamp}
```
Reports directory must be inside target — agents can only write within their review scope.
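As a sketch, assuming `{target}` is a filesystem path and using a sortable timestamp format (the workflow itself does not prescribe one):

```shell
# make_reports_dir TARGET
# Creates TARGET/.reviews/<timestamp> and prints its path.
make_reports_dir() {
  ts=$(date +%Y%m%d-%H%M%S)        # sortable; one directory per run
  dir="$1/.reviews/$ts"
  mkdir -p "$dir" && printf '%s\n' "$dir"
}
```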
Launch all 7 evaluation agents in parallel:

```
For each agent in [namer, code-simplifier, comment-cleaner,
                   test-reviewer, error-handling-reviewer,
                   security-reviewer, observability-reviewer]:
    Task(agent, prompt="""
        Evaluate {target} and report findings.
        Write findings to: {reports_dir}/{agent}.md
        Do NOT modify any code — only report what should change.
        Do NOT include file contents in the report — only findings.
    """, run_in_background=true)
```
<agent-assumptions>
Communicate these to every agent:
- All tools are functional. Do not make exploratory calls.
- Only call a tool if required to complete the task.
- Focus on high-signal findings. False positives erode trust.
</agent-assumptions>
After all evaluation agents complete, read the reports in {reports_dir}/ and address the findings based on user direction.
This phase is iterative — fix, verify, repeat.
Only after code fixes are complete, run documenter:
```
Task(documenter, prompt="""
    Evaluate API documentation in {target}.
    Write findings to: {reports_dir}/documenter.md
    Do NOT modify any code — only report what documentation is missing.
    Do NOT include file contents in the report — only findings.
""", run_in_background=true)
```
Documentation reviews the fixed code, not the original state. Running it earlier wastes effort on outdated code.
After all work complete:
```shell
rm -rf {reports_dir}
```
Remove the .reviews/ directory — reports are temporary artifacts.
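Because `rm -rf` on an interpolated path is risky, the cleanup can be guarded — a sketch, where the `.reviews/` path check is an assumption matching the directory layout above:

```shell
# remove_reports DIR
# Deletes DIR only when it sits under a .reviews/ directory,
# refusing anything else so a bad interpolation can't wipe the target.
remove_reports() {
  case $1 in
    */.reviews/*) rm -rf "$1" ;;
    *) echo "refusing to delete '$1': not under .reviews/" >&2; return 1 ;;
  esac
}
```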
Run code quality evaluation on [path]