Orchestrates multi-pass coaching sessions for hard conversations, decisions, or tensions; runs iterative panel feedback with corrections, files accountability records, and tracks patterns over time in Obsidian vaults.
```shell
npx claudepluginhub adelaidasofia/ai-brain-starter
```

This skill uses the workspace's default tool permissions.
A skill that turns a one-off hard moment into a tracked accountability arc. Runs panel passes that update with the user's corrections, files a synthesized record with a re-eval date, and updates the vault's rolling pattern tracker so growth can be measured over time.
Daily journals capture one moment per day. Panel reactions inside /journal are one-shot. Most real coaching, real therapy, real advisor relationships are NOT one-shot — they're multi-turn, they update when new evidence comes in, and they track whether the same blind spot keeps surfacing across months.
The vault has all the pieces (panel rules in ⚙️ Meta/rules/advisory-panel.md, daily journals, decision logs) but no skill that orchestrates the multi-pass coaching arc and files it for tracking. This skill fills that gap.
This skill produces files at three different timescales:
- **Verbatim raw (immediate)** — 📋 Strategy/Coaching Sessions/Processing Notes - YYYY-MM-DD - <topic>.md. The user's exact words during the session. Per the "save exact words" rule: no annotation, no synthesis. Available for re-read forever.
- **Synthesized accountability record (per session)** — 🏠 Home/Coaching Sessions/YYYY-MM-DD - <topic>.md. What surfaced, the commitments named, and a re-eval date one month out. This is the file /weekly and /monthly look at to ask "did the pattern repeat? did the commitments land?"
- **Rolling pattern aggregator (across sessions)** — 🏠 Home/Panel Feedback Log.md. The Patterns table at the top tracks mention counts. Single mention = watch. 2+ mentions across different contexts = promote to acute action item. The aggregator is what tells you "this is a real recurring pattern" vs "this was a one-off."
Use /coaching when:
Stay in /journal when:
Ask the user, in their language:
If a transcript already exists in the vault, ask for the path. If not, the user talks through what happened. Either way, capture is the next step.
Decide where the raw goes:
- 📋 Strategy/Co-founder Syncs/Processing Notes - YYYY-MM-DD - <topic>.md
- 📋 Strategy/Decision Reviews/Processing Notes - YYYY-MM-DD - <topic>.md
- 📋 Strategy/Personal Coaching/Processing Notes - YYYY-MM-DD - <topic>.md
- 📋 Strategy/Coaching Sessions/Processing Notes - YYYY-MM-DD - <topic>.md

If the user already has a transcript, link to it. If not, capture what they say in their exact words, organized chronologically or by topic. The verbatim file gets numbered sections (1, 2, 3...) with section headers naming the topic and quoted bodies that are the user's exact words.
Critical rule (carry from the daily-journal skill): Verbatim section bodies are the user's voice only. Panel takes do NOT live in this file. Annotations, synthesis, and commentary live in the synthesized record (Step 6) or the aggregator (Step 7), never inline in the verbatim file.
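The naming convention above (`Processing Notes - YYYY-MM-DD - <topic>.md`) can be sketched as a small helper. The slug rule here is an assumption — the skill doesn't specify how `<topic>` is sanitized for filenames:

```python
import re
from datetime import date
from typing import Optional

def verbatim_path(folder: str, topic: str, session_date: Optional[date] = None) -> str:
    """Build the verbatim raw file path for a given routing folder and topic.
    The character-stripping slug rule is illustrative, not specified by the skill.
    """
    d = (session_date or date.today()).isoformat()  # YYYY-MM-DD
    slug = re.sub(r"[^\w\- ]", "", topic).strip()   # drop filename-unsafe chars
    return f"{folder}/Processing Notes - {d} - {slug}.md"
```

For example, a session about co-founder equity routed to the Coaching Sessions folder yields `📋 Strategy/Coaching Sessions/Processing Notes - 2025-01-15 - cofounder equity.md`.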
Read ⚙️ Meta/rules/advisory-panel.md. Convene 3-5 voices most relevant to the triggering event. Apply the rules:
- If ${CLAUDE_EFFORT} is set, scale the panel accordingly.

Deliver the panel's read with a clear lead sentence answering the triggering question. Don't bury the answer.
The user will do one or more of:
This is the move that distinguishes coaching from journal. Whatever the user provides, the next pass UPDATES the takes transparently, not silently.
For each new piece of info the user provides:
Keep iterating until the user signals they're ready to synthesize. Common signals: "OK that's enough," "let's wrap this up," "save this," or just shifting to operational mode ("let's track this").
Create 🏠 Home/Coaching Sessions/YYYY-MM-DD - <topic>.md with this structure:
```markdown
---
creationDate: YYYY-MM-DD
type: coaching-session
session_format: panel-with-claude
duration_approx: <one-pass | multi-pass | multi-pass-Nhr>
triggering_event: "<one-line description>"
themes: [theme1, theme2, ...]
panelists_seated:
- <Name 1>
- <Name 2>
- ...
related: [<wikilinks to verbatim file, source transcripts, decision logs>]
status: open
re_eval_date: YYYY-MM-DD (one month from session)
---
*Brief one-paragraph framing of what this session covered.*

## Triggering event
<2-3 sentences describing what triggered the session, with wikilinks to source material.>

## What surfaced (synthesized across all passes)
### Pattern 1: <Theme name>
**The observation**: <what the panel saw, with citations>
**User's correction (if any)**: <what they pushed back with, what shifted>
**Status**: <1 mention in tracker / demoted / promoted / contextual fact (not a watch pattern)>
### Pattern 2: <next theme>
...

## User's commitments coming out of the session
1. <Concrete behavior commitment, not vague>
2. <Another concrete behavior commitment>
...

## Codified outcomes (rules / files changed this session)
- <Any vault rules added>
- <Any files moved or scrubbed>
- <Any structural changes>

## Re-eval signals (check by <re_eval_date>)
- Did <Pattern 1> show up at least once in real time?
- Did the user act on <commitment 1>?
- Did <Pattern 2> surface in any other context, or stay <person/situation>-specific?
...

## Cross-references
- Verbatim source: <wikilink>
- Source transcripts (if any): <wikilinks>
- Decisions: <wikilinks>
- Pattern tracking: [[Panel Feedback Log]]
```
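The `re_eval_date` field is "one month from session." Calendar months vary in length, so one reasonable interpretation — an assumption, since the skill doesn't pin this down — is to clamp the day to the target month's last valid day:

```python
import calendar
from datetime import date

def re_eval_date(session: date) -> date:
    """Return the re-eval date one month after the session date.
    Clamps the day (e.g. Jan 31 -> Feb 28) so the result is always valid.
    """
    year = session.year + session.month // 12
    month = session.month % 12 + 1
    day = min(session.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```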
Open 🏠 Home/Panel Feedback Log.md. Two updates:
Patterns table at the top — for each NEW pattern this session, add a row with:
For patterns that surfaced this session AND already existed in the table, increment the mention count. If count hits 2+ across different contexts, change the Action to a concrete commitment (promote to acute).
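The watch-then-promote rule can be sketched as follows. The tracker shape (`mentions`, `contexts`, `action`) is an illustrative assumption; the actual Patterns table lives in markdown:

```python
def update_pattern(tracker: dict, pattern: str, context: str) -> dict:
    """Increment a pattern's mention count and record the context it surfaced in.
    Promotion rule from the Patterns table: 1 mention = watch;
    2+ mentions across different contexts = promote to an acute action item.
    """
    entry = tracker.setdefault(
        pattern, {"mentions": 0, "contexts": set(), "action": "watch"}
    )
    entry["mentions"] += 1
    entry["contexts"].add(context)
    if entry["mentions"] >= 2 and len(entry["contexts"]) >= 2:
        entry["action"] = "acute"  # replace with a concrete commitment on promotion
    return tracker
```

Note that two mentions in the *same* context don't promote — the threshold requires different contexts, which is what separates a recurring pattern from a person- or situation-specific fact.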
Synthetic Panel Reactions section — append a new entry:
```markdown
### YYYY-MM-DD — <topic> coaching session
⚠️ **Synthetic panel reactions across one Claude session, NOT real investor feedback. Real-human content lives in "By Meeting" section above.**
**Context:** <one-paragraph framing>
**Panelists:** <list>
**Pass 1:** <what surfaced>
**Pass 2 (if multi-pass):** <how takes updated with user corrections>
...
**Convergence (across all passes):** <high-confidence signals>
**Clash:** <where panelists disagreed>
**Subject-match weighting:** <which panelist's voice carries most weight>
**Collective blind spot (chairman):** <what all panelists missed>
**User's commitments:** <numbered list>
**Re-eval:** YYYY-MM-DD (one month). <Re-eval signals.>
**Entry:** <wikilinks to coaching session record + verbatim>
```
Tell the user, in plain language:
Then close. Do not pile on additional panel takes. The session is logged. The system now exists to remember.
- `/journal` — one-shot daily check-in with inline panel reaction. If the user starts a journal entry that turns into a multi-pass conversation, suggest switching to `/coaching` so the work gets tracked.
- `/weekly` and `/monthly` — natural surface for re-eval. The weekly review skill should read open Coaching Sessions, surface ones whose `re_eval_date` has passed, and ask "did the pattern repeat? did the commitments land?"
- `/patterns` — pattern detection across MANY sessions and journals. The `/patterns` skill reads the Panel Feedback Log Patterns table to confirm whether a pattern hits 2+ mentions in different contexts (the promote threshold).
- `/deconstruct` — if a coaching session surfaces a `stakes: high` decision, auto-offer `/deconstruct` for first-principles analysis before the user commits.

After running this skill end-to-end, the vault has:

- The verbatim raw file (the user's exact words)
- A synthesized accountability record with `status: open` and a re-eval date
- An updated Panel Feedback Log (Patterns table plus session entry)
The user should be able to re-read these in any combination and reconstruct what happened, what the panel said, what corrections updated the takes, what they committed to, and when to check back.