Mine past session transcripts for uncaptured patterns, corrections, and knowledge. Cross-references with existing brain content. Triggers: /flux:ruminate, "ruminate", "mine my history".
From flux: `npx claudepluginhub nairon-ai/flux --plugin flux`. This skill uses the workspace's default tool permissions.
Bundled script: `scripts/extract-conversations.py`
Mine conversation history for brain-worthy knowledge that was never captured. Complements /flux:reflect (current session) and /flux:meditate (brain vault audit) by looking at the full archive of past conversations.
Adapted from brainmaxxing by @poteto.
On entry, set the session phase:
```sh
PLUGIN_ROOT="${DROID_PLUGIN_ROOT:-${CLAUDE_PLUGIN_ROOT:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}}"
[ ! -d "$PLUGIN_ROOT/scripts" ] && PLUGIN_ROOT=$(ls -td ~/.claude/plugins/cache/nairon-flux/flux/*/ 2>/dev/null | head -1)
FLUXCTL="${PLUGIN_ROOT}/scripts/fluxctl"
$FLUXCTL session-phase set ruminate
```
On completion, reset:
```sh
$FLUXCTL session-phase set idle
```
Build a brain snapshot:
```sh
PLUGIN_ROOT="${DROID_PLUGIN_ROOT:-${CLAUDE_PLUGIN_ROOT:-$(git rev-parse --show-toplevel 2>/dev/null || pwd)}}"
[ ! -d "$PLUGIN_ROOT/scripts" ] && PLUGIN_ROOT=$(ls -td ~/.claude/plugins/cache/nairon-flux/flux/*/ 2>/dev/null | head -1)
sh "$PLUGIN_ROOT/skills/flux-meditate/scripts/snapshot.sh" .flux/brain/ /tmp/brain-snapshot-ruminate.md
```
Pass the snapshot path to each analysis agent. This avoids loading the full brain into the ruminate orchestrator's context.
Find the project conversation directory. Flux currently imports legacy Claude transcripts here:
```
~/.claude/projects/-<cwd-with-dashes-replacing-slashes>/
```
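As a sketch, the directory can be derived from the current working directory by replacing each `/` with `-` (the leading slash becomes the leading dash). The exact encoding here is an assumption based on the pattern above:

```sh
# Hypothetical derivation of the conversation directory; assumes the
# dash-encoding shown above (each "/" in $PWD becomes "-").
CONV_DIR="$HOME/.claude/projects/$(pwd | tr '/' '-')"
echo "$CONV_DIR"
```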
Run the extraction script to parse JSONL conversation files into readable text and split into batches:
```sh
python3 "$PLUGIN_ROOT/skills/flux-ruminate/scripts/extract-conversations.py" "$CONV_DIR" "$OUT_DIR" --batches N
```
Choose N based on the number of conversations found: ~1 batch per 20 conversations, minimum 2, maximum 10.
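The heuristic above can be sketched as a small helper (the function name is illustrative, not part of the skill):

```sh
# ~1 batch per 20 conversations, clamped to a minimum of 2 and a maximum of 10.
batch_count() {
  count=$1
  n=$(( (count + 19) / 20 ))   # ceiling division by 20
  [ "$n" -lt 2 ] && n=2
  [ "$n" -gt 10 ] && n=10
  echo "$n"
}

COUNT=$(ls "$CONV_DIR"/*.jsonl 2>/dev/null | wc -l)
N=$(batch_count "$COUNT")
```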
Create an agent team of N agents (one per batch), each with `subagent_type: general`, and run all N in parallel.
Each agent's prompt should include:
- The batch file to read (`$OUT_DIR/batches/batch_N.txt`)
- The findings file to write (`$OUT_DIR/findings_N.md`)

Agents write structured findings to their output files.
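As an illustration (the wording is hypothetical, not a fixed template), an agent prompt might look like:

```
You are ruminate analysis agent N of 5.
1. Read the brain snapshot at /tmp/brain-snapshot-ruminate.md so you know
   what is already captured.
2. Read your batch of conversations at $OUT_DIR/batches/batch_N.txt.
3. Write structured findings (pattern, frequency, evidence, suggested brain
   note) to $OUT_DIR/findings_N.md. Skip anything already in the snapshot.
```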
After all agents complete, read all findings files. Cross-reference with existing brain content. Deduplicate across batches.
Filter by frequency and impact. Most findings won't be worth adding. Apply these filters before presenting:
Discard aggressively. It's better to present 3 high-signal findings than 9 that include noise.
Present findings to the user in a table with columns: finding, frequency/evidence, and proposed action. Be honest about which findings are one-offs vs. recurring patterns — let the user decide what's worth adding.
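For illustration only (the findings and counts below are invented), the table might look like:

```markdown
| Finding                                     | Frequency / evidence    | Proposed action         |
| ------------------------------------------- | ----------------------- | ----------------------- |
| User repeatedly corrects the test command   | 5 sessions over 2 weeks | Add note to conventions |
| One-off workaround for a flaky CI cache key | 1 session               | Likely skip (one-off)   |
```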
Route skill-specific learnings. Check if any findings are about how a specific skill should work — its process, prompts, edge cases, or troubleshooting. Update the skill's SKILL.md directly. Read the skill first to avoid duplicating or contradicting existing content.
Apply only the changes the user approves. Follow brain writing conventions:
- Use [[wikilinks]] to connect related notes
- Update `.flux/brain/index.md` after all changes

Remove the temporary extraction directory:
```sh
rm -rf "$OUT_DIR"
```