From deep-thought
Strategic planning with auto-calibrated detail, decision rationale, and dependency ordering. Use when starting a new feature, bug fix, refactor, or any non-trivial work. Produces a plan document with tasks, reasoning, and acceptance criteria. Triggers: plan, planning, create plan, implementation plan, feature plan, work plan.
npx claudepluginhub ondrej-svec/heart-of-gold-toolkit --plugin deep-thought

This skill is limited to using the following tools:
Strategic planning that fits the problem. Answers **HOW** to build what was decided in `/brainstorm` (or from scratch for clear requirements).
Creates structured plans for multi-step tasks including software features, implementations, research, or projects. Deepens plans via interactive sub-agent reviews.
Creates detailed implementation plans through interactive collaboration for features, refactoring, or tasks. Auto-activates when planning work items.
This skill MAY: research (read-only), analyze codebase patterns, write the plan document. This skill MAY NOT: edit code, create files beyond the plan document, run tests, deploy, implement anything.
NEVER write code during this skill. Research and plan only.
| Shortcut | Why It Fails | The Cost |
|---|---|---|
| "Skip research — I already know the codebase" | You know YOUR mental model. The codebase may have changed. | Plan conflicts with existing code → rework |
| "Skip decision rationale — the approach is obvious" | Obvious to you, now. Not to the person executing in 2 weeks. | Decisions get questioned, re-litigated, or silently reversed |
| "Make every task detailed — more detail is better" | Over-specified plans are brittle. They break on first contact with reality. | Plan becomes a constraint instead of a guide |
| "Skip risk analysis — it's low risk" | The risks you don't name are the ones that surprise you. | Unmitigated risk → emergency debugging |
Entry: User has a topic, brainstorm path, or feature description.
If the user provided a brainstorm path:
If the user provided a topic but no brainstorm:
- Check docs/brainstorms/ (or the project override path) for a recent match (last 14 days, semantic match on filename/frontmatter)
- None found — proceed without brainstorm context

Exit: Context understood — brainstorm consumed (if one exists), scope clear enough to research.
Entry: No brainstorm exists. User provided a topic or description.
Ask clarifying questions one at a time, not a questionnaire. Prefer the harness's structured question UI when available; otherwise ask plainly in text and wait for the answer before continuing:
Prefer multiple-choice questions when natural options exist. Continue until the scope is clear OR user says "proceed."
Exit: Scope understood well enough to research.
Entry: Context detected (Phase 0) or idea refined (Phase 1).
Launch research in parallel:
Surface past solutions:
>> Known pattern: docs/solutions/auth/jwt-refresh-fix.md (high match)
>> Existing code: services/scoring-engine/composite.py (similar pattern)
Research decision for external sources:
If external research is needed: Announce the decision and proceed: "This involves payment processing — researching current best practices before planning."
Search past plans for similar features. Surface proven patterns and past risks:
>> Prior plan: docs/plans/YYYY-MM-DD-feat-x.md (similar scope)

See ../knowledge/active-memory-integration.md for retrieval patterns.
If the task is design-heavy, copy-heavy, or boundary-sensitive, also search for:
Surface both positive models and anti-models. Strong autonomous planning needs references and anti-references, not just similar code.
Exit: Codebase patterns known, past solutions surfaced, constraints identified.
After calibrating detail level, decide whether to proceed autonomously or interactively:
See ../knowledge/autonomy-modes.md for detection heuristics.
Entry: Research complete.
Auto-calibrate based on complexity. Don't ask the user — assess it, then tell them.
CONCISE — when:
- Scope is clear and bounded
- One person, one day or less
- Low risk (no auth, scoring, data, money)
- Clear precedent in codebase
STANDARD — when:
- Multi-day work but clear approach
- Some risk or new patterns needed
- Decision rationale adds value
DETAILED — when:
- Multi-repo or multi-team
- Unclear approach, multiple valid options
- High risk (auth, scoring, data, money, migrations)
- Significant architectural decisions
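The calibration criteria above can be approximated mechanically. This is a hedged sketch: the signal names (`multi_repo`, `risk_domains`, and so on) are illustrative inputs, not part of the skill's actual interface:

```python
# Domains this spec flags as high risk for DETAILED plans.
HIGH_RISK = {"auth", "scoring", "data", "money", "migrations"}

def calibrate(scope_clear: bool, days: float, risk_domains: set[str],
              has_precedent: bool, multi_repo: bool = False,
              options_unclear: bool = False) -> str:
    """Map complexity signals to a plan detail level."""
    if multi_repo or options_unclear or risk_domains & HIGH_RISK:
        return "DETAILED"
    if scope_clear and days <= 1 and not risk_domains and has_precedent:
        return "CONCISE"
    return "STANDARD"
```

The ordering matters: any DETAILED trigger wins, CONCISE requires every low-complexity signal at once, and everything else falls through to STANDARD.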
Tell the user: "This is a [level] plan — [reason]. Let me know if you want more or less detail."
Exit: Detail level chosen and communicated.
Entry: Detail level set, research findings available.
Check the project's CLAUDE.md for a "Toolkit Output Paths" table. Use those paths if present, otherwise use defaults.
Output path: {plans_path}/YYYY-MM-DD-{type}-{kebab-topic}-plan.md
(Default plans_path: docs/plans/)
Types: feat, fix, refactor, chore, docs
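A sketch of building the output path from the convention above. The overall shape and the five types come from this spec; the exact slugify rule is an assumption:

```python
import re
from datetime import date

TYPES = {"feat", "fix", "refactor", "chore", "docs"}

def plan_path(type_: str, topic: str, plans_path: str = "docs/plans") -> str:
    """Build {plans_path}/YYYY-MM-DD-{type}-{kebab-topic}-plan.md."""
    if type_ not in TYPES:
        raise ValueError(f"unknown plan type: {type_}")
    # Collapse anything that isn't a lowercase letter or digit into hyphens.
    kebab = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"{plans_path}/{date.today():%Y-%m-%d}-{type_}-{kebab}-plan.md"
```

For example, `plan_path("feat", "JWT Refresh!")` yields something like docs/plans/2025-01-15-feat-jwt-refresh-plan.md (the date portion depends on the current day).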
YAML frontmatter:
---
title: "{type}: {description}"
type: plan
date: YYYY-MM-DD
status: approved
brainstorm: {path if exists}
confidence: high | medium | low
---
All plans include:
/work.

Standard and detailed plans also include:

8. Decision Rationale — Why this approach? Alternatives considered? Tradeoffs?
9. Constraints and Boundaries — Architectural, editorial, release, privacy, or operating rules that stay fixed
10. Assumptions — What must be true for this plan to work? (see Assumption Audit below)
11. Risk Analysis — What could go wrong? How do we mitigate it?
Detailed plans also include:

12. Phased Implementation — Phases with exit criteria per phase
13. References — Links to brainstorm, relevant code, past solutions
For subjective or boundary-sensitive plans, also include:
work starts

Confidence calibration (stated in frontmatter and body):
/brainstorm first

Before finalizing, identify the assumptions the plan depends on and run the Recursive Why loop on each.
Process:
Extract assumptions from the proposed solution and task list. Look for:
For each assumption, ask "Why do we believe this?" → loop the answer:
Assumption: "The scoring engine can handle async batch processing"
→ Why? "Because it's stateless"
→ Why does statelessness guarantee batch support? "Because... it doesn't. We'd need to verify queue handling."
→ STOP: Unverified.
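The loop above can be sketched as a depth-limited check. Here `ask_why` is a placeholder for the model's own reasoning step, and the 3-level depth cap comes from this spec:

```python
from typing import Callable

def audit(assumption: str, ask_why: Callable[[str], tuple[str, bool]],
          max_depth: int = 3) -> str:
    """Run the Recursive Why loop: keep asking 'why do we believe this?'
    until an answer is grounded in evidence, bottoms out, or depth runs out.
    ask_why returns (answer, grounded_in_evidence)."""
    claim = assumption
    for _ in range(max_depth):
        answer, grounded = ask_why(claim)
        if grounded:
            return "Verified"
        if not answer:            # reasoning bottomed out with no evidence
            return "Unverified"
        claim = answer            # loop: now ask why we believe the answer
    return "Unverified"           # depth cap hit without reaching evidence
```

The scoring-engine example above follows the "bottoms out" branch: the second "why" produces no grounded answer, so the assumption lands in the table as Unverified.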
Write the Assumptions section in the plan:
## Assumptions
| Assumption | Status | Evidence |
|------------|--------|----------|
| PostgreSQL handles 10k concurrent reads | Verified | Load test from Q1 (docs/solutions/perf/db-load-test.md) |
| Users provide email during onboarding | Verified | Required field in registration flow |
| Scoring engine supports async batch | Unverified | Needs investigation before Phase 2 |
| Feature flag service handles gradual rollout | Verified | Used in 3 prior features (flagd config) |
Unverified assumptions automatically become either:
Depth: 2-3 levels per assumption. If the brainstorm already ran an Assumption Audit, inherit its findings — don't repeat the work, just verify nothing changed.
See ../knowledge/discovery-patterns.md → "Recursive Why" for the loop technique.
If the task materially changes UI, copy, information architecture, facilitation flow, or a trust boundary judged partly by human review, the plan must additionally include:
For design-heavy tasks, autonomous work should not start until at least one preview artifact exists. Acceptable preview forms include an HTML/static mockup, a terminal-friendly structural preview, or another concrete representation that makes drift obvious before implementation. The plan should also name who reviews the preview and which failures send the work back to planning.
Exit: Plan document written.
Entry: Plan written and saved.
Ask the user what to do next.
If user selects "Start /work": Suggest running /work {plan-path}.
If user selects "Visualize / Share": Run /babel-fish:visualize {plan-path} and try the shareable HTML flow first when share-html is configured. Otherwise render the terminal mind map. After rendering or sharing, return to this handoff with the remaining options.
If user selects "Review and refine": Accept feedback, update the plan, then present these options again.
If user selects "Done for now": Confirm the path.
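The handoff behaves like a small dispatch loop: terminal choices exit, while visualize and refine act and then return to the menu. The function shape and the `ask`/`act` callbacks here are illustrative, not part of the skill's interface:

```python
from typing import Callable

OPTIONS = ["Start /work", "Visualize / Share", "Review and refine", "Done for now"]

def handoff(plan_path: str, ask: Callable[[list[str]], str],
            act: Callable[[str], None]) -> str:
    """Loop the handoff menu until a terminal choice is made."""
    while True:
        choice = ask(OPTIONS)
        if choice == "Start /work":
            return f"/work {plan_path}"   # terminal: hand off to execution
        if choice == "Done for now":
            return plan_path              # terminal: confirm the path
        act(choice)                       # visualize or refine, then re-ask
```

The key property is that only "Start /work" and "Done for now" break the loop; the other two options always fall back to presenting the options again.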
Before delivering the plan, verify:
- /work from this plan without clarifying questions
- /work. Clear enough that execution requires no guessing.

../knowledge/decision-frameworks.md — How to evaluate tradeoffs
../knowledge/strategic-decomposition.md — How to break work into dependency-ordered steps