# deep-thought
Detective-style investigation that follows evidence trails to find root causes, bugs, inconsistencies, and hidden problems. Works on code, performance, architecture, data, and systems. Three investigative lenses: Sherlock (deduction), Poirot (psychology/intent), Columbo (what's missing). Triggers: investigate, debug, detective, find bug, root cause, what's wrong, diagnose, trace, why is this broken, what happened.
Install:

```bash
npx claudepluginhub ondrej-svec/heart-of-gold-toolkit --plugin deep-thought
```
Detective-style investigation. Follow evidence trails, notice what's missing, connect what others overlook.
This skill MAY: read code, run diagnostic commands (read-only), trace evidence, present findings. This skill MAY NOT: edit code, fix issues, create PRs, deploy. The only Bash allowed is read-only diagnostics (git log, curl for status, kubectl get, etc.).
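A sketch of the kind of Bash this permits: purely observational commands. The paths, URL, and namespace below are placeholders, not part of the skill.

```bash
# Recent history around the suspect area (read-only)
git log --oneline -15 -- src/payments/

# Service status via a plain GET (no mutation)
curl -fsS https://status.example.com/healthz

# Cluster state, list-only
kubectl get pods -n payments
```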
This is an investigation, not a fix. Present the case — the user decides what to do.
Three investigative lenses, each a reasoning framework that unlocks a different class of problems:

- **Sherlock Holmes** (deductive elimination): What MUST be true? Verify each premise. Catches logic errors, broken invariants, impossible states, type mismatches.
- **Hercule Poirot** (psychological method): Study intent and mental model, the gap between what someone THOUGHT the system does and what it ACTUALLY does. Catches misunderstood APIs, wrong assumptions, subtle misreads.
- **Columbo** (persistent nagging): Something doesn't sit right. "Just one more thing..." Catches what's MISSING: error handlers, edge cases, tests, cleanup.
## No Shortcuts

| Shortcut | Why It Fails | The Cost |
|---|---|---|
| "Jump to Phase 5 — I already know the answer" | Skipping evidence gathering confirms biases, not bugs | Wrong diagnosis → wrong fix → problem persists |
| "This code looks fine, move on" | Dangerous bugs LOOK correct | The bug you didn't investigate ships |
| "Not my scope — skip it" | Evidence trails cross boundaries | Surface symptom fixed, root cause remains |
| "The tests pass, so it's correct" | Tests test what the author THOUGHT, not what it ACTUALLY does | False confidence |
## Phase 0: Detect the Polarity

Entry: User described a problem, pointed at code, or said "something's wrong."
Auto-detect polarity from context:
| Signals | Polarity | Focus |
|---|---|---|
| Diff, file path, error in code | Code | Bugs, logic errors, type mismatches |
| "Slow", latency, timeouts | Performance | Queries, memory, bottlenecks |
| "Is this the right structure" | Architecture | Patterns, coupling, design drift |
| "Data doesn't look right" | Data | Integrity, schema, migration risks |
| "Something is broken in prod" | System | Infrastructure, networking, runtime |
If unclear: use AskUserQuestion to have the user pick the polarity (Code, Performance, Architecture, Data, or System).
Auto-load relevant knowledge:

- `.py` files → Read `../knowledge/python-fastapi-patterns.md`
- `.ts`/`.tsx` files → Read `../knowledge/typescript-nextjs-patterns.md`
- System polarity → `../knowledge/infrastructure-ops.md` + `../knowledge/observability.md`
- Performance polarity → `../knowledge/observability.md` + relevant stack knowledge
- Security concerns → `../knowledge/security-review.md`

Also: search the project's docs/operators/runbooks/ for matching runbooks when investigating system issues.
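A minimal sketch of that runbook lookup; the directory comes from the skill, the search term is illustrative:

```bash
# Read-only: find runbooks whose filename or body matches the symptom
ls docs/operators/runbooks/
grep -ril "connection pool exhausted" docs/operators/runbooks/
```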
Exit: Polarity determined, knowledge loaded, evidence available.
"I do not leap to the conclusions. First, I observe."
Entry: Evidence available (diff, files, error logs, system description).
Understand the full picture before analyzing.
Before investigating, search docs/solutions/ for matching symptoms or component names:
>> Known pattern: docs/solutions/{domain}/{topic}.md (high match) — verify if it applies here

In autonomous mode: follow the evidence chain to its conclusion without intermediate check-ins. Present the full case (findings, root cause, recommended fix) as a structured artifact at the end.
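The docs/solutions search above can be as simple as this sketch; the symptom and component strings are placeholders:

```bash
# Match reported symptoms or component names against known solutions (read-only)
grep -rilE "checkout|504 gateway" docs/solutions/
```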
See ../knowledge/active-memory-integration.md for retrieval patterns.
Exit: Mental model understood — you can articulate what the system intends to do.
"It is a capital mistake to theorize before one has data."
Entry: Mental model from Phase 1.
Apply deductive reasoning: list what MUST be true for the system to behave as intended, then verify each premise against the evidence.

Follow the trail. When something catches your eye, trace it through callers, history, and related changes until it resolves or dead-ends (a sketch follows below).
Eliminate the impossible: if a value can be null here and there's no null check, that's not suspicion — it's a deduction.
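A minimal sketch of trail-following with the read-only git diagnostics this skill allows; the search string, file path, and commit hash are placeholders:

```bash
# When did this suspicious token appear, and in which commits?
git log -S "retryCount" --oneline -- src/api/client.ts

# Who last touched these lines, and in what change?
git blame -L 40,60 src/api/client.ts

# Inspect one of those commits in full context
git show abc1234
```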
Exit: All premises listed and verified; trails followed to resolution or dead end.
"Every witness tells you something — even when they lie."
Entry: Trails identified from Phase 2.
Study surrounding context: the tests, the type signatures, and the callers of the suspect code (a sketch for locating them follows).
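One way to round up the witnesses, assuming a TypeScript layout under src/; the function name is hypothetical:

```bash
# Who calls the suspect function?
grep -rn "applyDiscount(" --include="*.ts" src/

# Which tests even mention it?
grep -rln "applyDiscount" --include="*.test.ts" src/
```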
Exit: Tests, types, and callers reviewed — story is consistent or contradictions documented.
"Oh, I'm sorry to bother you again, but there's just one more thing..."
Entry: Core analysis complete.
The most important phase. Look for what's MISSING: error handlers, edge cases, tests, cleanup paths, the code that should exist but doesn't.
Keep nagging. The thing that seems minor is often the whole case.
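Absences can also be hunted mechanically. A sketch for one class of missing thing (network calls with no error handling anywhere in the file); the call and directory are illustrative:

```bash
# Files that call fetch( but never mention catch: candidates for missing error handling
grep -rl "fetch(" --include="*.ts" src/ | xargs -r grep -L "catch"
```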
Exit: "Missing things" catalog complete — every absence documented with a failure scenario.
"The game is afoot."
Entry: All phases 1-4 complete.
Synthesize: problems hide at intersections, where a finding from one lens explains or compounds a finding from another.
Each finding must survive the Holmes test: given the evidence, is there any other explanation?
Exit: Findings synthesized with evidence.
## The Case Report

Entry: Findings synthesized. Report format:

```markdown
## Case: [Brief title]
### Scene Assessment
[2-3 sentences: what's happening, the mental model, initial impression]
### Findings
#### [CONCLUSIVE] Finding title
**Evidence:** [location] — [quote the evidence]
**Deduction:** [Why this IS a problem, with concrete failure scenario]
**Impact:** [What happens when this fails]
#### [SUSPICIOUS] Finding title
**Evidence:** [location] — [quote the evidence]
**Deduction:** [Why this looks wrong, what could go wrong]
**Recommendation:** [What to verify or fix]
#### [INVESTIGATE] Finding title
**Evidence:** [location] — [quote the evidence]
**Concern:** [What might be wrong but can't be proven from here]
**Question:** [What should be verified]
### Just One More Thing...
[Columbo's parting observations — what SHOULD be here but isn't.
Each with a concrete scenario of what goes wrong.]
### Case Summary
**Verdict:** CLEAN / MINOR CONCERNS / BUGS FOUND / CRITICAL ISSUES
**Confidence:** [How thoroughly investigated given the scope]
[1-2 sentence overall assessment]
```

Exit: Case report presented.
## Follow-up

Entry: Case report presented.
Use AskUserQuestion to offer next steps, for example "Dig deeper" into a finding or close the case.
If user selects "Dig deeper": Use AskUserQuestion (header: "Finding", question: "Which finding to dig into?") with each finding as an option. Then return to Phase 2 focused on that trail.
Before delivering the case report, verify: every finding quotes its evidence with a location; each CONCLUSIVE finding survives the Holmes test (no other explanation fits the evidence); every item in the missing-things catalog has a concrete failure scenario; and the verdict and confidence are stated.
Reference knowledge:

- ../knowledge/critical-evaluation.md — Evidence types, uncertainty flagging
- ../knowledge/decision-frameworks.md — Prioritizing investigation depth