From deep-thought
Deep architectural review of a platform or product — cross-references code against claims, maps failure modes, evaluates scaling bottlenecks, and produces a decision-grade handoff document with ADRs. Use when: reviewing an existing system for scaling readiness, performing a CTO handoff, evaluating platform architecture for enterprise readiness, or auditing a codebase before a major migration. Triggers: architecture review, scaling review, platform review, CTO handoff, system audit, scaling analysis, architecture assessment, production readiness.
Install: `npx claudepluginhub ondrej-svec/heart-of-gold-toolkit --plugin deep-thought`

This skill runs with a restricted tool set; its permissions are spelled out below.
Deep, evidence-based architectural review that produces a decision-grade document. Not a checklist pass — a systematic investigation that cross-references every claim against actual code, maps failure modes through the full lifecycle, and evaluates whether the architecture can support its intended scale.
Delivers systematic architecture reviews across 7 dimensions with scored reports, highlighting risks and recommendations.
Analyzes any codebase's architecture with 6 specialist agents (perf/scale, reliability, security, ops/DX, data/deps, plus Codex cross-review). The agents debate risks, fragile spots, and candidate improvements for audits and refactors.
Reviews architecture of written plans: scores data flow, failure modes, edge cases, test matrix, rollback safety (0-10 each) with citations; produces ranked fixes.
This skill MAY: read code, analyze architecture, research platforms, write review documents, ask architectural decision questions via AskUserQuestion. This skill MAY NOT: modify application code, deploy changes, or make architectural changes. It produces a review document — the team decides what to act on.
| Shortcut | Why It Fails | The Cost |
|---|---|---|
| "Just summarize the issue/checklist" | Restating what the team already knows adds no value | CTO handoff that tells the team nothing new |
| "The README says X, so X is true" | Code drifts from docs constantly | False claims in the review become false decisions |
| "Recommend best practices without checking constraints" | A 3-person vibe-coding team can't run Kubernetes | Recommendations that are technically correct but impossible to execute |
| "Skip the edge cases — focus on happy path" | Production failures happen on the unhappy path | Architecture that works in demos but breaks under real load |
| "Trust the security audit's line numbers" | Code changes daily; audits are snapshots | Wrong file:line references destroy reviewer credibility |
Entry: User invokes /architecture-review with a target (repo path, issue URL, or description).
| Input | Action |
|---|---|
| GitHub issue URL | Fetch issue body via gh issue view |
| Repo path | Explore codebase structure |
| Description | Clarify scope via AskUserQuestion |
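For illustration, the intake dispatch could look like the minimal sketch below. The function and type names (`classifyTarget`, `ReviewTarget`) are hypothetical, not part of the skill itself.

```typescript
// Hypothetical sketch: classify what the user passed to /architecture-review
// and decide how the review should start.
import { existsSync } from "node:fs";

type ReviewTarget =
  | { kind: "issue"; url: string }          // fetch the body via `gh issue view`
  | { kind: "repo"; path: string }          // explore the codebase structure
  | { kind: "description"; text: string };  // clarify scope with the user first

function classifyTarget(input: string): ReviewTarget {
  if (/^https:\/\/github\.com\/[^/]+\/[^/]+\/issues\/\d+/.test(input)) {
    return { kind: "issue", url: input };
  }
  if (existsSync(input)) {
    return { kind: "repo", path: input };
  }
  // Anything else is treated as a free-form description that needs
  // scope clarification via AskUserQuestion before work starts.
  return { kind: "description", text: input };
}
```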
Launch these concurrently:
- **Codebase explorer** (subagent, sonnet): Map the tech stack, directory structure, API routes, database schema, external dependencies, and deployment config. Read package.json/requirements.txt, Dockerfile, CI configs, README.md, CLAUDE.md.
- **Documentation explorer** (subagent, sonnet): Find existing architecture docs, security audits, ADRs, brainstorms, and plans. Check sibling repos for shared context (GDPR docs, design system, infrastructure).
- **Issue/checklist analyzer** (if an issue is provided): Parse the issue into discrete concerns. Note what's covered vs. what's missing.
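A rough sketch of how the concurrent launch could be wired is below. `launchSubagent` is a hypothetical helper standing in for however subagents are actually spawned in your environment.

```typescript
// Hypothetical helper: spawn a named subagent with a prompt, get back its report.
declare function launchSubagent(
  name: string,
  opts: { model?: string; prompt: string },
): Promise<string>;

async function exploreTarget(repoPath: string, issueUrl?: string) {
  const tasks = [
    launchSubagent("codebase-explorer", {
      model: "sonnet",
      prompt: `Map tech stack, routes, schema, dependencies, deploy config in ${repoPath}`,
    }),
    launchSubagent("documentation-explorer", {
      model: "sonnet",
      prompt: `Find architecture docs, audits, ADRs, and plans related to ${repoPath}`,
    }),
  ];
  if (issueUrl) {
    tasks.push(
      launchSubagent("issue-analyzer", {
        prompt: `Parse ${issueUrl} into discrete concerns and note what's missing`,
      }),
    );
  }
  // Run all explorers concurrently and collect their reports.
  return Promise.all(tasks);
}
```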
Use AskUserQuestion to clarify anything the exploration leaves open: scope, intended scale, and the team's operating constraints.
Exit: Target identified, codebase mapped, team constraints understood.
Run the investigation tracks as parallel subagents to maximize depth without exhausting the context window.
Map the complete user lifecycle end to end, not just the happy path; this is the most critical and most often skipped part of the investigation.
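One way to keep the lifecycle mapping honest is to record it as data. The shape below is purely illustrative; the field names and the sample entry are assumptions, not output of the skill.

```typescript
// Illustrative shape for the lifecycle map: each stage records what can fail
// and what the user experiences when it does.
interface LifecycleStage {
  stage: string;            // e.g. "signup", "first import", "renewal"
  dependsOn: string[];      // services or jobs this stage relies on
  failureModes: string[];   // what breaks under real load or partial outages
  userImpact: string;       // what the user sees when it breaks
  dropOffRisk: "low" | "medium" | "high";
}

const lifecycle: LifecycleStage[] = [
  {
    stage: "signup",
    dependsOn: ["auth provider", "transactional email"],
    failureModes: ["email provider outage", "webhook retry storm"],
    userImpact: "account created but never activated",
    dropOffRisk: "high",
  },
  // ...one entry per stage, through to account deletion
];
```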
Exit: Findings from all tracks collected.
For each major architectural choice, present an ADR to the user via AskUserQuestion. Format each question as a short set of options, with one marked (Recommended) and a clear rationale tied to the team's constraints (a sketch of one such prompt follows the table below).

Common ADR categories (adapt to the specific platform):
| ADR | Typical Question |
|---|---|
| Hosting | Where should this run? (serverless vs containers vs hybrid) |
| Database | Managed vs self-hosted? Connection pooling strategy? |
| External API resilience | Fallback strategy for critical third-party dependencies |
| Background processing | Job queue vs synchronous with resilience patterns |
| Observability | What's the minimum viable monitoring stack? |
| Admin tooling | API-first? MCP? CLI? Web UI? |
| Compliance | GDPR docs, DPAs, data deletion — build vs adapt? |
| i18n | When and how to add language support |
| Security priority | Fix before or during migration? |
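As a sketch of the question format described above, a single ADR prompt could be shaped like this. This is not the actual AskUserQuestion schema, and the hosting options and rationales are made-up examples.

```typescript
// Illustrative shape of one ADR prompt: options, rationales, one recommendation.
interface AdrOption {
  label: string;
  rationale: string;       // tied to the team's constraints
  recommended?: boolean;   // exactly one option carries "(Recommended)"
}

const hostingOptions: AdrOption[] = [
  {
    label: "Serverless (stay on the current provider)",
    rationale: "Zero ops for a small team; cold starts acceptable at this scale",
    recommended: true,
  },
  {
    label: "Containers on managed Kubernetes",
    rationale: "More control, but nobody on the team can operate it",
  },
  {
    label: "Hybrid",
    rationale: "Defers the decision and doubles the deployment paths to maintain",
  },
];

const hostingAdr = {
  id: "ADR-001",
  question: "Where should this run?",
  options: hostingOptions,
};
```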
Principle: fewer components is better. For each recommendation, apply the test: "Can this team set this up with AI tools in under an hour, and will they understand it 3 months later?"
Exit: All ADRs decided by the user.
Write the review document with this structure:
# [Platform Name] — Architecture Review
**Date / Author / Repo / Audience**
## Architecture Decision Records
(Table summarizing all ADRs from Phase 3)
## Executive Summary
(5 key findings, core recommendation table)
## Current Architecture
(Tech stack, data flow, API routes — verified against code)
## Security
(Audit status, unfixed findings, new findings in post-audit code)
## Architecture Bottlenecks
(Each bottleneck with compute profile and recommended fix)
## Edge Cases and Failure Modes
(Drop-off table, critical gap, recommended resilience pattern)
## [Topic-specific sections]
(GDPR, observability, admin, support, i18n — as relevant)
## Execution Order
(Phased plan with task numbers, effort estimates, dependencies)
## Component Summary
(Before/after table, what was deliberately not added)
## Appendices
(Env var inventory, post-audit routes, key file references)
Exit: Document written.
The review document must be verified before shipping. Two approaches (use one or both):
Send the same review prompt to two different models (e.g., Codex + Opus, or any two available) and compare the findings; a rough sketch of the comparison follows.
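A minimal sketch of the cross-model pass, assuming a hypothetical `runReview` wrapper around whatever API actually runs the prompt against a model and returns its findings as strings:

```typescript
// `runReview` is a hypothetical wrapper: send the review prompt to a model,
// get back its findings as a list of strings.
declare function runReview(model: string, prompt: string): Promise<string[]>;

async function crossModelCheck(prompt: string) {
  const [first, second] = await Promise.all([
    runReview("codex", prompt),
    runReview("opus", prompt),
  ]);
  const agreed = first.filter((f) => second.includes(f));    // higher confidence
  const disputed = [
    ...first.filter((f) => !second.includes(f)),             // re-verify against code
    ...second.filter((f) => !first.includes(f)),
  ];
  return { agreed, disputed };
}
```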
Re-read the document with this checklist:
- Every API route the document cites exists as an actual route.ts export
- Every quoted line count matches the file (wc -l)
- The env var inventory matches real usage (grep for process.env across the codebase)

Exit: Document verified, fixes applied.
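Some of these checks can be mechanized. A rough sketch, assuming a Node/TypeScript codebase with its application code under src/ (the paths and regexes are assumptions to adapt per repo):

```typescript
// Sketch of mechanical claim checks: do the routes cited in the review exist,
// and does the env var inventory match actual process.env usage?
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Recursively collect every file under a directory.
function walk(dir: string, out: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) walk(full, out);
    else out.push(full);
  }
  return out;
}

const files = walk("src"); // assumes the app code lives under src/

// API routes that actually exist on disk.
const actualRoutes = files.filter((f) => f.endsWith("route.ts"));

// Environment variables actually referenced in code.
const envRefs = new Set(
  files
    .filter((f) => f.endsWith(".ts"))
    .flatMap((f) => readFileSync(f, "utf8").match(/process\.env\.[A-Z0-9_]+/g) ?? []),
);

console.log(`${actualRoutes.length} route files, ${envRefs.size} distinct env var references`);
// Compare these counts and names against what the review document claims.
```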
Deliver the finished document to the repo's docs/ directory.

| Anti-Pattern | What to Do Instead |
|---|---|
| Restating the issue checklist with filler | Add value — find what the checklist misses |
| Recommending tools the team can't operate | Match recommendations to team capability |
| Listing 15 monitoring services | Pick the minimum stack that covers the gaps |
| Writing "best practice" without checking the code | Verify every claim against the actual implementation |
| Ignoring failure modes | Map the full lifecycle — production breaks on the unhappy path |
| Single-provider fallback (same vendor, different model) | Cross-provider fallback (different vendors entirely) |
| Treating the security audit as current truth | Audits are snapshots — code changes daily, verify everything |
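To make the cross-provider fallback row above concrete, here is a minimal sketch. `primary` and `secondary` stand in for hypothetical client wrappers from two different vendors; the real SDK calls will differ.

```typescript
// Hypothetical clients from two different vendors; the point is that the
// fallback crosses providers, not just models within one provider.
declare const primary: { complete(prompt: string): Promise<string> };   // vendor A
declare const secondary: { complete(prompt: string): Promise<string> }; // vendor B

async function completeWithFallback(prompt: string): Promise<string> {
  try {
    return await primary.complete(prompt);
  } catch {
    // A same-vendor fallback tends to be down for the same reason;
    // switching vendors covers provider-wide outages.
    return secondary.complete(prompt);
  }
}
```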