From deep-thought
Focused review of code, documents, or architecture — one deep pass with evidence-based findings and a clear verdict. Auto-detects what you're reviewing: branch diff, PR, file path, plan, brainstorm, or spec. One reviewer that reads carefully beats nine that skim. Triggers: review, code review, review PR, review diff, review plan, review brainstorm, review spec, review document, evaluate, check.
Install: npx claudepluginhub ondrej-svec/heart-of-gold-toolkit --plugin deep-thought
One focused review. Not nine shallow passes — one deep one that reads carefully, evaluates with evidence, and gives a clear verdict.
This skill MAY: read code, analyze diffs, read documents, present findings. This skill MAY NOT: edit code, fix issues, create PRs, push changes, modify any files.
This is a review, not a fix. Present findings — the user decides what to do.
| Shortcut | Why It Fails | The Cost |
|---|---|---|
| "Skim the diff — I'll catch the important stuff" | The one line you skip is the one that matters | Bug in production that a careful read would have caught |
| "No security concerns here" | Auth checks, input validation, and secrets leak through seemingly innocent code | Vulnerability shipped because nobody looked |
| "The tests pass, so the logic is correct" | Tests verify what the author THOUGHT, not edge cases they missed | False confidence → undetected regression |
| "Just style nits — not worth mentioning" | Mixing nits with real findings buries critical issues | Author reads 10 nits, misses the 1 security bug |
Entry: User invoked /review with input (or no input = current branch diff).
Auto-detect based on input:
| Input | Review Type | How |
|---|---|---|
| No arguments | Code — current branch diff | git diff $(git merge-base HEAD main)..HEAD |
| File path to .md in docs/plans/ or docs/brainstorms/ | Document | Read and evaluate against document criteria |
| File path to code files | Code — specific files | Read and review those files |
| PR URL or number | Code — PR diff | gh pr diff <number> |
| Directory path | Architecture — structural review | Analyze patterns, conventions, dependencies |
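As a rough sketch, the detection above might resolve to shell like this (assumes main is the default branch and that a bare number refers to a PR; these commands are illustrative, not the skill's actual implementation):

```bash
#!/usr/bin/env bash
# Sketch only: resolve the /review input to something reviewable.
input="${1:-}"

if [ -z "$input" ]; then
  # No arguments: diff the current branch against its merge base with main.
  git diff "$(git merge-base HEAD main)"..HEAD
elif [[ "$input" =~ ^[0-9]+$ ]]; then
  # A bare number: fetch that PR's diff via the GitHub CLI.
  gh pr diff "$input"
elif [ -d "$input" ]; then
  # A directory: list its files for a structural (architecture) review.
  find "$input" -type f
else
  # A file path (code or .md document): read it directly.
  cat "$input"
fi
```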
If ambiguous: Use AskUserQuestion (header: "Review type", question: "What are we reviewing?") with options: "Code changes" (description: "Review the diff or specific code files") and "Document" (description: "Evaluate the plan, brainstorm, or spec itself").
Exit: Review type determined — code, document, or architecture.
Entry: Review type known.
Auto-load relevant knowledge based on file types in the diff:
- .py files → Read ../knowledge/python-fastapi-patterns.md
- .ts/.tsx files → Read ../knowledge/typescript-nextjs-patterns.md
- .tf/.hcl files → Read ../knowledge/infrastructure-ops.md
- .yaml in k8s/helm paths → Read ../knowledge/infrastructure-ops.md
- .github/ or Dockerfile → Read ../knowledge/ci-cd-patterns.md
- Security-sensitive changes (auth, secrets, validation) → Read ../knowledge/security-review.md

Load the knowledge BEFORE starting the review.
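A minimal sketch of that mapping in shell, assuming the diff is the current branch against main:

```bash
# Sketch: pick knowledge files from the extensions present in the diff.
changed=$(git diff --name-only "$(git merge-base HEAD main)"..HEAD)

echo "$changed" | grep -q  '\.py$'                 && echo "../knowledge/python-fastapi-patterns.md"
echo "$changed" | grep -qE '\.tsx?$'               && echo "../knowledge/typescript-nextjs-patterns.md"
echo "$changed" | grep -qE '\.(tf|hcl)$'           && echo "../knowledge/infrastructure-ops.md"
echo "$changed" | grep -qE '(k8s|helm).*\.ya?ml$'  && echo "../knowledge/infrastructure-ops.md"
echo "$changed" | grep -qE '^\.github/|Dockerfile' && echo "../knowledge/ci-cd-patterns.md"
```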
Then gather project context:
Exit: Conventions loaded, risk areas identified, related context read.
Before starting the review, check docs/solutions/ for recurring issues in the same component or domain. If this is a known problem area, note it.
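For example, a quick scan of prior solution notes could look like this ("payments" is a placeholder for the component under review):

```bash
# Sketch: check whether docs/solutions/ already records issues for this component.
grep -ril "payments" docs/solutions/ 2>/dev/null
```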
During the review, apply CoVe to each Critical finding before reporting it:
After the review, check: if the same issue has appeared 3+ times across reviews, suggest adding it to the project's CLAUDE.md or docs/solutions/.
In autonomous mode (e.g., review triggered as part of a workflow): complete the full review, present all findings as a structured artifact without intermediate check-ins. Append a decision log if any judgment calls were made.
See ../knowledge/socratic-patterns.md for CoVe technique details.
Entry: Context gathered, diff/document available.
Auto-route to specialized reviewer based on diff content:
| Diff Composition | Reviewer Agent | Why |
|---|---|---|
| >70% .py files | Task python-reviewer(diff + conventions) | Python-specific patterns |
| >70% .ts/.tsx files | Task typescript-reviewer(diff + conventions) | TypeScript-specific patterns |
| Security-sensitive (auth, secrets, validation) | Task security-reviewer(diff + conventions) | OWASP, threat modeling — in addition to stack reviewer |
| Infrastructure (.tf, .yaml, Helm, k8s) | Task infra-reviewer(diff + conventions) | IaC-specific checks |
| Performance-tagged or query-heavy | Task performance-reviewer(diff + conventions) | N+1, complexity, scaling |
| Mixed or unclear | Task strategic-reviewer(diff + conventions) | Generalist — the default |
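A sketch of how the >70% thresholds above could be computed (assumes a diff against main; the security, infrastructure, and performance routes are left out):

```bash
# Sketch: route to a stack reviewer when one language dominates the diff.
files=$(git diff --name-only "$(git merge-base HEAD main)"..HEAD)
total=$(echo "$files" | grep -c '.')
py=$(echo "$files" | grep -c '\.py$')
ts=$(echo "$files" | grep -cE '\.tsx?$')

if   [ "$total" -gt 0 ] && [ $((py * 100 / total)) -gt 70 ]; then echo "python-reviewer"
elif [ "$total" -gt 0 ] && [ $((ts * 100 / total)) -gt 70 ]; then echo "typescript-reviewer"
else echo "strategic-reviewer"  # the generalist default
fi
```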
If unsure which reviewer: Use strategic-reviewer. One good review beats a wrong specialist.
If the diff is large (>500 lines): Focus on the most impactful files first. Read all, but prioritize findings from core logic over boilerplate or generated code.
Read the full document — don't skim. Evaluate against eight criteria:
Analyze the directory/codebase structure:
Exit: Findings documented with evidence and severity.
Entry: Review complete with findings.
## Review: [scope summary]
### Critical Issues (must fix)
- **[CRIT-1]** [file:line] — Description. Evidence. Failure scenario. Fix suggestion.
### Suggestions (consider)
- **[SUG-1]** [file:line] — Description. Tradeoff if ignored.
### Observations (FYI)
- **[OBS-1]** Description.
### Verdict: APPROVE / APPROVE WITH NOTES / REQUEST CHANGES
Every finding needs: [evidence] + [failure scenario]. No evidence = no finding. "This looks off" is not a finding. "This will fail when X because Y" is.
## Document Review: [filename]
### Strengths
- [What's well done — specific, not generic praise]
### Gaps
- [GAP-1] What's missing — why this matters for implementation.
### Suggestions
- [SUG-1] Specific improvement — how this makes the doc more actionable.
### Verdict: READY / NEEDS REFINEMENT
[1-2 sentence summary. If NEEDS REFINEMENT, name the top 1-3 things to fix.]
If no issues found: Say so clearly. Don't invent problems.
Exit: Findings presented in structured format with clear verdict.
Entry: Findings presented.
Use AskUserQuestion with:
If user selects "Discuss a finding": Discuss, then return to this choice.
If user selects "Document insights": Suggest /compound with the specific insight.
After code review, verify:
After document review, verify:
See also:
- ../knowledge/critical-evaluation.md — Evidence-based evaluation, uncertainty flagging
- ../agents/strategic-reviewer.md — The default code review agent
- ../knowledge/socratic-patterns.md — CoVe technique for verifying findings
- ../knowledge/active-memory-integration.md — Memory read/write patterns