Analyse a session transcript for learnings — corrections, reversals, approach changes, and successes. Runs automatically on SessionStart for the previous session. Invoke manually to analyse the current session or review accumulated learnings.
```bash
npx claudepluginhub hpsgd/turtlestack --plugin thinking
```

This skill is limited to using the following tools:
Analyse conversation transcripts to extract learnings. Use `$ARGUMENTS` to control what to do:

- `current` — analyse the current session's transcript (Steps 1–4)
- `latest` — analyse the most recent completed session (Steps 1–4)
- `summary` — show accumulated metrics and trends (runs `generate-metrics.py`)
- `patterns` — detect recurring patterns across all sessions (runs `detect-patterns.py`, then Step 5)
- `signals` — classify queued ambiguous signals and evolve regex patterns (Step 3 only)
- `full` — run everything: analyse latest session, classify signals, detect patterns, generate metrics (Steps 1–6)
- `{session-id}` — analyse a specific past session (Steps 1–4)

This skill is also triggered automatically by the SessionStart hook, which runs the analysis scripts on the previous session. Use this skill manually when you want to run analysis mid-session, review metrics, or trigger pattern detection and rule proposals.
Claude stores transcripts at ~/.claude/projects/-{PATH_HASH}/ where PATH_HASH replaces all non-alphanumeric characters with -. Worktree sessions get a different hash than the main project, so you need to check all of them, including worktrees that have been removed.
```bash
# Hash: strip leading /, replace non-alphanumeric with -
path_to_hash() { echo "$1" | sed 's|^/||; s|[^a-zA-Z0-9]|-|g'; }

PROJECTS_DIR="$HOME/.claude/projects"
MAIN_HASH_DIR="$PROJECTS_DIR/-$(path_to_hash "$PWD")"

# Write a breadcrumb so future sessions can find this transcript dir
# even after the worktree is removed
[ -d "$MAIN_HASH_DIR" ] && echo "$PWD" > "$MAIN_HASH_DIR/parent-project"

# 1. Active worktrees (git worktree list)
ALL_DIRS=("$MAIN_HASH_DIR")
for wt in $(git worktree list --porcelain 2>/dev/null | sed -n 's/^worktree //p'); do
  wt_dir="$PROJECTS_DIR/-$(path_to_hash "$wt")"
  [ "$wt_dir" != "$MAIN_HASH_DIR" ] && ALL_DIRS+=("$wt_dir")
done

# 2. Removed worktrees (breadcrumb scan)
RESOLVED_PWD=$(pwd -P)
for bc in "$PROJECTS_DIR"/*/parent-project; do
  [ -f "$bc" ] || continue
  bc_dir=$(dirname "$bc")
  bc_project=$(cat "$bc")
  resolved_bc=$(cd "$bc_project" 2>/dev/null && pwd -P || echo "$bc_project")
  [ "$resolved_bc" = "$RESOLVED_PWD" ] && ALL_DIRS+=("$bc_dir")
done

# Filter to existing dirs, deduplicate
TRANSCRIPT_DIRS=()
for d in $(printf '%s\n' "${ALL_DIRS[@]}" | sort -u); do
  [ -d "$d" ] && TRANSCRIPT_DIRS+=("$d")
done
```
For `current`: find the most recently modified `.jsonl` file across all `$TRANSCRIPT_DIRS`.
For a specific session: search all `$TRANSCRIPT_DIRS` for `{session-id}.jsonl`.
For `summary` or `patterns`: skip to Step 3 or Step 4.
Output: Path to the transcript file, or list of available sessions.
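As a sketch, the `current` resolution above can be expressed in Python (the directory list comes from the shell snippet earlier; passing the dirs as command-line arguments is an illustrative assumption):

```python
# Sketch: pick the most recently modified transcript across candidate dirs.
import sys
from pathlib import Path

def latest_transcript(dirs):
    """Return the newest .jsonl file across all transcript dirs, or None."""
    candidates = [p for d in dirs for p in Path(d).glob("*.jsonl")]
    return max(candidates, key=lambda p: p.stat().st_mtime, default=None)

if __name__ == "__main__":
    path = latest_transcript(sys.argv[1:])
    print(path if path else "no sessions found")
```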
Run the analysis script:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/analyse-session.py <transcript.jsonl> \
  --project-dir .claude/learnings \
  --global-dir ~/.claude/learnings \
  --json
```
The script outputs structured JSON. Read the output and present the results.
Output: Metrics summary table and event list.
The UserPromptSubmit hook captures every user message and classifies obvious cases via regex. Ambiguous messages are queued with `"type": "unclassified"` in `.claude/learnings/signals/pending.jsonl`.
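A minimal sketch of how such a hook classifier might work (the seed patterns and field names here are illustrative assumptions, not the actual `classify-message.py`):

```python
# Sketch: match seed regexes; queue anything ambiguous as unclassified.
import json
import re

SEED_PATTERNS = {  # hypothetical seed set, for illustration only
    "correction": [r"\bthat's wrong\b", r"\bno, I meant\b"],
    "praise": [r"\bperfect\b", r"\bexactly right\b"],
}

def classify(message: str) -> dict:
    for signal_type, patterns in SEED_PATTERNS.items():
        if any(re.search(p, message, re.IGNORECASE) for p in patterns):
            return {"type": signal_type, "confidence": "regex"}
    # Ambiguous: queue for Claude to classify later
    return {"type": "unclassified", "prompt_preview": message[:100]}

print(json.dumps(classify("No, I meant the other file")))
```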
Read the pending signals file:
```bash
cat .claude/learnings/signals/pending.jsonl
```
For each unclassified signal, read the `prompt_preview` and classify it yourself.
Update the signal's `type` and `rating` fields, set `confidence` to `"claude"`, and write the updated signals back.
This is why the system doesn't need an external AI API — you ARE the AI doing the classification.
For each message you classified that the regex missed, extract a pattern that would catch similar messages in the future and add it to .claude/learnings/signals/patterns.json.
The patterns file has this structure:
```json
{
  "correction": ["\\bfeels (arbitrary|wrong|off)\\b", "\\bunderestimating\\b"],
  "praise": [],
  "approach_change": ["\\bi think (we|you) should\\b"],
  "not_correction": []
}
```
For each unclassified message you classified:
Read the existing patterns.json (create if missing), append new patterns, and write it back. The classify-message.py script loads this file on every run, so new patterns take effect immediately.
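That read-append-write cycle can be sketched as (assuming the four-key structure shown above; the helper name is hypothetical):

```python
# Sketch: merge newly extracted regexes into patterns.json, with dedupe.
import json
from pathlib import Path

def add_patterns(patterns_file: Path, new: dict) -> dict:
    keys = ["correction", "praise", "approach_change", "not_correction"]
    data = {k: [] for k in keys}
    if patterns_file.exists():
        data.update(json.loads(patterns_file.read_text()))
    for signal_type, regexes in new.items():
        for rx in regexes:
            if rx not in data.setdefault(signal_type, []):  # skip duplicates
                data[signal_type].append(rx)
    patterns_file.write_text(json.dumps(data, indent=2))
    return data
```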
Rules for pattern generation:
- `\bfeels (arbitrary|wrong|off|unnecessary)\b` is good — catches a class of pushback
- `\bfeels\b` alone is too broad — would match "it feels responsive"
- Use `\b` word boundaries to prevent partial matches

Output: Classified signal count by type + number of new patterns added.
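Candidate regexes can be vetted with a quick self-test before being added (the helper and example messages are illustrative):

```python
# Sketch: check a candidate regex against positive and negative examples,
# so overly broad patterns are caught before they reach patterns.json.
import re

def vet_pattern(rx, should_match, should_not_match):
    hits = all(re.search(rx, m) for m in should_match)
    misses = not any(re.search(rx, m) for m in should_not_match)
    return hits and misses

good = r"\bfeels (arbitrary|wrong|off|unnecessary)\b"
broad = r"\bfeels\b"
positives = ["this feels wrong", "that limit feels arbitrary"]
negatives = ["the UI feels responsive"]

print(vet_pattern(good, positives, negatives))   # True: scoped to pushback
print(vet_pattern(broad, positives, negatives))  # False: matches praise too
```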
For each event extracted by the script AND each classified signal, interpret it into a learning:
Ask: What did the assistant do wrong, and what should it do differently?
### Learning: [short title]
**Type:** immediate_correction
**What happened:** [assistant did X]
**What was wrong:** [why X was incorrect — the user's actual intent was Y]
**Rule:** [imperative statement — "Always X" or "Never Y"]
**Scope:** [universal — applies to all projects | project-specific — only this codebase]
Ask: Was this a genuine mistake that was corrected, or normal iterative refinement?
Files touched 3+ times are flagged, but not all are problems. A file edited 5 times during a planned multi-pass refactor is normal. A file written then immediately rewritten with a different approach is a reversal.
Check git log for the file to distinguish iterative refinement from reversals.
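A sketch of that check (assumes `git` is on PATH and the working directory is the repository; reading one commit despite many edits as a same-session rewrite is an interpretation, not part of the script):

```python
# Sketch: list commits touching a file. Edits spread across several commits
# suggest planned passes; many edits within one commit suggest a rewrite.
import subprocess

def commits_touching(path: str) -> list:
    """Return abbreviated hashes of commits that touched `path`."""
    out = subprocess.run(
        ["git", "log", "--follow", "--format=%h", "--", path],
        capture_output=True, text=True, check=False,
    )
    return out.stdout.split()
```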
Ask: What did the assistant do right that should be reinforced?
Only record successes that are non-obvious — approaches that worked but might not be the default choice. "Wrote code that compiled" is not worth recording. "Used a single bundled PR instead of splitting, and the user confirmed that was the right call" is.
For every HIGH severity correction and every recurring pattern, write a learned rule to .claude/rules/ so it takes effect immediately (next session, or even mid-session if Claude Code hot-reloads rules).
```bash
# Check existing learned rules
ls .claude/rules/learned--*.md 2>/dev/null
```
Write the rule file:
```markdown
---
description: "[one-line description of what this rule prevents]"
alwaysApply: true
---

# Learned: [short title]

[imperative rule statement]

**Why:** [what went wrong without this rule — the specific incident]

**Evidence:** [session ID, date, correction summary]
```
File naming: `.claude/rules/learned--{kebab-case-topic}.md`
Examples:

- `.claude/rules/learned--verify-before-declaring-complete.md`
- `.claude/rules/learned--route-multi-agent-through-coordinator.md`
- `.claude/rules/learned--check-installed-plugins-not-cache.md`

Scope determines location:

- `.claude/rules/learned--{topic}.md` (project root)
- `~/.claude/rules/learned--{topic}.md` (global, applies to all projects)

Rules for writing learned rules:
Output: Classified learnings with rules and scope + list of learned rule files written.
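The write step above can be sketched as (the scope-to-location routing follows the locations this skill describes; the helper names and frontmatter layout are illustrative):

```python
# Sketch: route a learned rule file to the project or global rules dir
# based on the learning's scope, then write it with frontmatter.
from pathlib import Path

def rule_path(topic: str, scope: str) -> Path:
    base = Path.home() / ".claude" if scope == "universal" else Path(".claude")
    return base / "rules" / f"learned--{topic}.md"

def write_rule(topic, scope, description, body):
    path = rule_path(topic, scope)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f'---\ndescription: "{description}"\nalwaysApply: true\n---\n\n'
        f"# Learned: {topic}\n\n{body}\n"
    )
    return path
```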
Step 5 (`patterns` mode): Read all session analysis files from `.claude/learnings/sessions/` and `~/.claude/learnings/sessions/`.
```bash
# List session analysis files across both scopes
find .claude/learnings/sessions ~/.claude/learnings/sessions -name '*.json' 2>/dev/null
```
For each learning type, count occurrences across sessions. When the same kind of correction appears 3+ times:
### Pattern detected: [name]
**Instances:** [count] across [N] sessions
**First seen:** [date]
**Last seen:** [date]
**Common thread:** [what these corrections have in common]
**Proposed rule:** [imperative statement that would prevent recurrence]
**Target:** [which rule file or agent to update]
**Status:** pending_review
Save the pattern to .claude/learnings/patterns/{pattern-id}.json.
Output: Pattern table with proposed changes.
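Counting across sessions can be sketched as (the `learnings`/`type` schema of the session JSON files is an assumption for illustration):

```python
# Sketch: count learnings by type across saved session analyses and
# surface the ones that recur 3+ times.
import json
from collections import Counter
from pathlib import Path

def recurring_patterns(session_dirs, threshold=3):
    counts = Counter()
    for d in session_dirs:
        for f in Path(d).glob("*.json"):
            for learning in json.loads(f.read_text()).get("learnings", []):
                counts[learning["type"]] += 1
    return {t: n for t, n in counts.items() if n >= threshold}
```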
When a pattern has 5+ instances, OR the user approves a 3+ instance pattern, OR a learned rule contradicts a marketplace rule — propose a PR against the marketplace repo to share the improvement.
Determine what needs to change in the marketplace:
| Learning type | Marketplace change |
|---|---|
| Repeated correction about approach | New rule in the relevant plugin's rules/ directory |
| Pattern in a specific agent's domain | Update to the agent definition or skill |
| Regex pattern that catches a new class of correction | Update to scripts/classify-message.py seed patterns |
| Learned rule that should be shared | Move from .claude/rules/learned--*.md to the marketplace plugin's rules/ |
### Proposed upstream change
**Pattern:** [name] (observed [N] times across [M] sessions)
**Local rule:** `.claude/rules/learned--{topic}.md` (already active locally)
**Change type:** [new marketplace rule | skill update | agent update | script update]
**Target file in marketplace:** [path relative to marketplace repo root]
**Evidence:**
- Session [id1] ([date]): [correction summary]
- Session [id2] ([date]): [correction summary]
- Session [id3] ([date]): [correction summary]
**Proposed content:**
[the actual change — full file content or diff]
Submit as PR? (Y/n)
If approved:
Determine the marketplace repo location. Check:

- The `source` field in `~/.claude/settings.json` under `extraKnownMarketplaces`

Create a branch and apply the change:
```bash
cd <marketplace-repo>
git checkout -b learning/learned--{topic}
# Write or edit the target file
git add <target-file>
git commit -m "feat: learned rule — {topic}

Pattern observed {N} times across {M} sessions.
Evidence: {session IDs}

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>"
```
Push and create the PR:
```bash
git push -u origin learning/learned--{topic}
gh pr create --title "Learning: {topic}" --body "$(cat <<'PREOF'
## Summary

This change was proposed by the learning system based on observed patterns.

**Pattern:** {description}
**Instances:** {N} across {M} sessions (first seen: {date}, last seen: {date})
**Local rule:** Has been active locally as `.claude/rules/learned--{topic}.md`

## Evidence

{table of session IDs, dates, and correction summaries}

## Change

{description of what changed and why}

---
Generated by `/thinking:retrospective` — the marketplace learning system.
PREOF
)"
```
Update the pattern status in `.claude/learnings/patterns/{pattern-id}.json` to `"status": "pr_submitted"`. Optionally remove the local learned rule once the PR is merged (the marketplace rule supersedes it).
Output: PR URL + updated pattern status.
The script (`analyse-session.py`) handles JSONL parsing and pattern matching. This skill reads the structured output and applies judgment — classifying scope, filtering noise, detecting patterns.

- `.editorconfig` is a config issue, not a learning. Only record corrections that reveal a gap in understanding or process.
- "`src/config.ts` not `config/index.ts`" is project-specific. Classify correctly — universal learnings go to `~/.claude/learnings/`, project-specific to `.claude/learnings/`.

## Retrospective: [session-id]
### Metrics
| Metric | Value |
|---|---|
| Duration | [N] minutes |
| Turns | [N] user / [N] assistant |
| Corrections | [N] immediate, [N] approach changes, [N] reversals |
| Successes | [N] |
| Correction rate | [N]% |
### Learnings
| # | Type | Severity | Rule | Scope |
|---|---|---|---|---|
| 1 | [type] | [sev] | [rule] | [scope] |
### Patterns
[Any patterns detected from accumulated data]
## Learning Summary
### Totals (across [N] sessions)
| Metric | Value |
|---|---|
| Sessions analysed | [N] |
| Total corrections | [N] |
| Total successes | [N] |
| Active patterns | [N] |
| Pending proposals | [N] |
### Top patterns
| Pattern | Count | Proposed change | Status |
|---|---|---|---|
- `/thinking:learning` — capture individual learnings in the moment. Retrospective analyses transcripts after the fact.
- `/thinking:wisdom` — when patterns crystallise (85%+ confidence), promote them to wisdom frames.
- `/thinking:health-check` — audits the learning system's coverage as part of project health.