From arxitect
Orchestrates parallel reviews of object-oriented design, clean architecture, and API design across identified code files. Delivers unified verdict and structured findings.
npx claudepluginhub andonimichael/arxitect --plugin arxitect

This skill uses the workspace's default tool permissions.
You are orchestrating a read-only architecture review. Three independent reviewers evaluate code for object-oriented design, clean architecture, and API design quality.
The user's request: $ARGUMENTS
Follow these steps exactly. Do not skip or combine steps.
Determine the files to review from the user's request. If the user specified files or directories, use those. If the request is vague (e.g., "review the auth module"), use Glob and Grep to identify the relevant source files. Exclude test files, configuration files, and generated files unless the user explicitly asks to review them.
Build a file list: one file path per line.
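A minimal sketch of this file-discovery step using Python's standard library. The exclusion markers below are illustrative assumptions, not the skill's exact rules; adapt them to the project's conventions:

```python
from pathlib import Path

# Illustrative markers for files to skip; not the skill's exact rules.
EXCLUDED_MARKERS = ("test", "spec", "config", "generated")

def build_file_list(root: str, pattern: str = "**/*.py") -> str:
    """Return one source-file path per line, skipping files whose
    names suggest tests, configuration, or generated code."""
    paths = []
    for p in sorted(Path(root).glob(pattern)):
        name = p.name.lower()
        if any(marker in name for marker in EXCLUDED_MARKERS):
            continue
        paths.append(str(p))
    return "\n".join(paths)
```

The one-path-per-line format keeps the list trivially easy to paste into each reviewer's prompt.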
Spawn all three reviewers as sub-agents using the Agent tool. Each reviewer must have read-only access (Read, Glob, Grep only) — it must not modify any files.
If the environment supports parallel agents, dispatch all three in a single message so they run concurrently. If not, dispatch them one at a time in any order.
We delegate these tasks to specialized sub-agents because each operates with isolated context and precise instructions, letting it review the code strictly through its own lens.
If the environment supports named agent definitions (e.g., Claude Code subagent_type), use the dedicated reviewer agents below. They are pre-configured with read-only tools and pre-loaded review skills. Pass only the file list and any additional context in the prompt.

If not, spawn generic agents and tell each to read its agent-prompt.md for instructions and reference material. Restrict tool access to Read, Glob, and Grep if the environment supports tool restrictions on agent spawning.
- Object-oriented design — named agent: subagent_type: "oo-design-reviewer". Fallback prompt: tell the agent to read skills/oo-design-review/agent-prompt.md.
- Clean architecture — named agent: subagent_type: "clean-architecture-reviewer". Fallback prompt: tell the agent to read skills/clean-architecture-review/agent-prompt.md.
- API design — named agent: subagent_type: "api-design-reviewer". Fallback prompt: tell the agent to read skills/api-design-review/agent-prompt.md.
Wait for all reviewers to complete. Parse each reviewer's output for its VERDICT and all structured findings. Present a unified report:
## Architecture Review Results
### Overall Verdict: [APPROVED | CHANGES_REQUESTED]
APPROVED only if all three reviewers returned APPROVED.
### Object Oriented Design — [VERDICT]
[Findings in structured format]
### Clean Architecture — [VERDICT]
[Findings in structured format]
### API Design — [VERDICT]
[Findings in structured format]
### Priority Actions
[All CRITICAL findings first, then WARNINGs that appeared across multiple reviewers or compound across domains.]
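The aggregation rule can be sketched as follows. The exact "VERDICT: ..." line format is an assumption about the structured output defined in review-output-format.md; a missing or unparseable verdict fails safe to CHANGES_REQUESTED:

```python
import re

def parse_verdict(output: str) -> str:
    """Extract a reviewer's verdict, assuming a 'VERDICT: <value>'
    line appears somewhere in its structured output."""
    m = re.search(r"VERDICT:\s*(APPROVED|CHANGES_REQUESTED)", output)
    return m.group(1) if m else "CHANGES_REQUESTED"  # fail safe

def overall_verdict(outputs: dict) -> str:
    # APPROVED only when every single reviewer approved.
    verdicts = [parse_verdict(o) for o in outputs.values()]
    ok = all(v == "APPROVED" for v in verdicts)
    return "APPROVED" if ok else "CHANGES_REQUESTED"
```

One CHANGES_REQUESTED from any reviewer is enough to block the overall verdict, which keeps the three review domains independent vetoes rather than a majority vote.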
Orchestration files (read by this skill):
skills/architect/review-output-format.md — structured output format all reviewers must follow