Reflect on session corrections and update CLAUDE.md (with human review)
Syncs learnings from session history and queue to CLAUDE.md/AGENTS.md with project-aware filtering and deduplication. Use this after sessions to capture corrections, or with --scan-history for first-time setup.
Installation:

/plugin marketplace add BayramAnnakov/claude-reflect
/plugin install bayramannakov-claude-reflect@BayramAnnakov/claude-reflect

Flags:

- --dry-run: Preview all changes without prompting or writing.
- --scan-history: Scan ALL past sessions for corrections (useful for first-time setup or cold start).
- --days N: Limit history scan to the last N days (default: 30). Only used with --scan-history.
- --targets: Show detected AI assistant config files and exit.
- --review: Show learnings with stale/decayed entries for review.
- --dedupe: Scan CLAUDE.md for similar entries and propose consolidations.
- --include-tool-errors: Include project-specific tool execution errors in the scan (auto-enabled with --scan-history).

Context commands:

cat ~/.claude/learnings-queue.json 2>/dev/null || echo "[]"
pwd

Claude-reflect syncs learnings to CLAUDE.md and AGENTS.md (the emerging cross-tool standard).
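For example, a typical first-time run might combine several of these flags (hypothetical invocation):

/reflect --scan-history --days 60 --dry-run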
Supported Targets:
| Target | File Path | Format | Notes |
|---|---|---|---|
| Claude Code | ~/.claude/CLAUDE.md, ./CLAUDE.md | Markdown | Always enabled |
| AGENTS.md | ./AGENTS.md | Markdown | Industry standard (Codex, Cursor, Aider, Jules, Zed, Factory) |
Detection Logic:
# Always enabled
~/.claude/CLAUDE.md
./CLAUDE.md (if exists)
# Only if file exists
test -f AGENTS.md && echo "AGENTS.md"
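For reference, a minimal Python sketch of the same detection logic (the bash checks above are authoritative):

from pathlib import Path

def detect_targets(project_dir: str = ".") -> list[str]:
    """Mirror the detection rules above: global CLAUDE.md always, others only if present."""
    targets = [str(Path.home() / ".claude" / "CLAUDE.md")]   # always enabled
    project = Path(project_dir)
    if (project / "CLAUDE.md").exists():                     # project CLAUDE.md, only if it exists
        targets.append(str(project / "CLAUDE.md"))
    if (project / "AGENTS.md").exists():                     # AGENTS.md only if the file exists
        targets.append(str(project / "AGENTS.md"))
    return targets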
Note on Confidence & Decay:
See /reflect review for the confidence and decay status of queued learnings.

BEFORE starting any work, use TodoWrite to create a task list for the entire workflow. This ensures no steps are skipped and provides visibility into progress.
Why this is critical:
Initialize with this task list (adjust based on arguments):
TodoWrite tasks for /reflect:
1. "Parse arguments and check flags" (--dry-run, --scan-history, etc.)
2. "Load learnings queue from ~/.claude/learnings-queue.json"
3. "Scan historical sessions" (if --scan-history)
4. "Validate learnings with semantic analysis"
5. "Filter by project context (global vs project-specific)"
6. "Deduplicate similar learnings"
7. "Check for duplicates in existing CLAUDE.md"
8. "Present summary and get user decision"
9. "Apply changes to CLAUDE.md/AGENTS.md"
10. "Clear queue and confirm completion"
Workflow rules:
Example TodoWrite call at start:
{
"todos": [
{"content": "Parse arguments (--scan-history detected)", "status": "in_progress", "activeForm": "Parsing command arguments"},
{"content": "Load learnings queue", "status": "pending", "activeForm": "Loading queue from ~/.claude/learnings-queue.json"},
{"content": "Scan historical sessions", "status": "pending", "activeForm": "Scanning past sessions for corrections"},
{"content": "Validate with semantic analysis", "status": "pending", "activeForm": "Validating learnings semantically"},
{"content": "Filter by project context", "status": "pending", "activeForm": "Filtering global vs project learnings"},
{"content": "Deduplicate similar learnings", "status": "pending", "activeForm": "Removing duplicate learnings"},
{"content": "Check existing CLAUDE.md for duplicates", "status": "pending", "activeForm": "Checking for existing entries"},
{"content": "Present summary to user", "status": "pending", "activeForm": "Presenting learnings summary"},
{"content": "Apply changes to target files", "status": "pending", "activeForm": "Writing to CLAUDE.md/AGENTS.md"},
{"content": "Clear queue and confirm", "status": "pending", "activeForm": "Finalizing and clearing queue"}
]
}
DO NOT PROCEED to the next section until you have initialized the task list with TodoWrite.
If user passed --targets:
Detect and display all AI assistant config files in the current project:
echo "=== Detected AI Assistant Configs ==="
echo ""
echo "✓ ~/.claude/CLAUDE.md (Claude Code - always enabled)"
test -f CLAUDE.md && echo "✓ ./CLAUDE.md (Project)" || echo "✗ ./CLAUDE.md (not found)"
test -f AGENTS.md && echo "✓ AGENTS.md (Codex, Cursor, Aider, Jules, Zed)" || echo "✗ AGENTS.md (not found)"
Then display summary:
═══════════════════════════════════════════════════════════
DETECTED TARGETS
═══════════════════════════════════════════════════════════
✓ ~/.claude/CLAUDE.md (Claude Code - always enabled)
✓ ./CLAUDE.md (Project)
✗ AGENTS.md (not found)
To enable AGENTS.md (syncs to Codex, Cursor, Aider, Jules, Zed, Factory):
touch AGENTS.md
═══════════════════════════════════════════════════════════
Exit after showing targets (don't process learnings).
If user passed --review:
Show learnings with their confidence and decay status:
cat ~/.claude/learnings-queue.json | jq -r '.[] | "\(.timestamp) | conf:\(.confidence // 0.5) | decay:\(.decay_days // 90)d | \(.message | .[0:60])"'
Display table of learnings with decay status:
═══════════════════════════════════════════════════════════
LEARNINGS REVIEW — Confidence & Decay Status
═══════════════════════════════════════════════════════════
┌────┬──────────┬────────┬────────────────────────────────┐
│ # │ Conf. │ Decay │ Learning │
├────┼──────────┼────────┼────────────────────────────────┤
│ 1 │ 0.90 ✓ │ 120d │ Use gpt-5.1 for reasoning │
│ 2 │ 0.60 │ 60d ⚠ │ Enable flag X for API calls │
│ 3 │ 0.40 ⚠ │ 30d ⚠ │ Consider using batch mode │
└────┴──────────┴────────┴────────────────────────────────┘
Legend: ✓ High confidence ⚠ Low confidence/Near decay
═══════════════════════════════════════════════════════════
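A minimal Python sketch of how the confidence and decay flags above could be derived (field names match the jq command; thresholds are illustrative assumptions):

import json
import datetime
from pathlib import Path

queue = Path.home() / ".claude" / "learnings-queue.json"
items = json.loads(queue.read_text()) if queue.exists() else []
now = datetime.datetime.now(datetime.timezone.utc)

for i, item in enumerate(items, 1):
    conf = item.get("confidence", 0.5)
    decay_days = item.get("decay_days", 90)
    # Assumes ISO-8601 timestamps; adjust parsing if the queue stores another format
    ts = datetime.datetime.fromisoformat(item["timestamp"].replace("Z", "+00:00"))
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=datetime.timezone.utc)
    days_left = decay_days - (now - ts).days
    conf_flag = "✓" if conf >= 0.8 else ("⚠" if conf < 0.5 else " ")   # illustrative thresholds
    decay_flag = "⚠" if days_left <= 7 else " "                        # "near decay" cutoff is a guess
    print(f"{i:>2} | conf:{conf:.2f} {conf_flag} | {days_left}d {decay_flag} | {item['message'][:40]}")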
Exit after showing review (don't process learnings).
If user passed --dedupe:
Scan existing CLAUDE.md files for similar entries that could be consolidated.
1. Read both CLAUDE.md files:
cat ~/.claude/CLAUDE.md
cat CLAUDE.md 2>/dev/null
2. Extract all bullet points:
Look for lines starting with - under section headers.
3. Analyze for semantic similarity: group entries that address the same topic or give overlapping guidance (see the sketch after this procedure).
4. Present consolidation proposals:
═══════════════════════════════════════════════════════════
CLAUDE.MD DEDUPLICATION SCAN
═══════════════════════════════════════════════════════════
Found 2 groups of similar entries:
Group 1 (Global CLAUDE.md):
Line 45: "- Use gpt-5.1 for complex tasks"
Line 52: "- Prefer gpt-5.1 for reasoning"
→ Proposed: "- Use gpt-5.1 for complex reasoning tasks"
Group 2 (Project CLAUDE.md):
Line 12: "- Always use venv"
Line 28: "- Create virtual environment for Python"
→ Proposed: "- Use venv for Python projects"
No duplicates: 23 entries are unique
═══════════════════════════════════════════════════════════
5. Use AskUserQuestion:
{
"questions": [{
"question": "Apply deduplication to CLAUDE.md files?",
"header": "Dedupe",
"multiSelect": false,
"options": [
{"label": "Apply all consolidations", "description": "Merge 2 groups, remove 4 redundant lines"},
{"label": "Review each group", "description": "Decide per group"},
{"label": "Cancel", "description": "Keep files unchanged"}
]
}]
}
6. Apply changes:
Exit after deduplication (don't process queue).
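A rough sketch of the bullet extraction and a naive keyword-overlap grouping referenced in step 3 above (the real grouping should be semantic; this heuristic is only illustrative):

import re
from pathlib import Path
from itertools import combinations

def bullets(path):
    """Collect '- ' bullet lines with their line numbers from a CLAUDE.md file."""
    p = Path(path).expanduser()
    if not p.exists():
        return []
    return [(n, line.strip()) for n, line in enumerate(p.read_text().splitlines(), 1)
            if line.lstrip().startswith("- ")]

def similar(a, b, threshold=0.5):
    """Crude keyword-overlap score; a stand-in for the semantic comparison."""
    wa = set(re.findall(r"[a-z0-9.-]+", a.lower()))
    wb = set(re.findall(r"[a-z0-9.-]+", b.lower()))
    return len(wa & wb) / max(len(wa | wb), 1) >= threshold

entries = bullets("~/.claude/CLAUDE.md") + bullets("CLAUDE.md")
candidate_pairs = [(x, y) for x, y in combinations(entries, 2) if similar(x[1], y[1])]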
Check if /reflect has been run in THIS project before. Run these commands separately:
WARNING: Do NOT combine these into a single compound command with $(...). Claude Code's bash executor mangles subshell syntax. Run each command individually and manually substitute the result.
ls ~/.claude/projects/ | grep -i "$(basename "$(pwd)")"
test -f ~/.claude/projects/PROJECT_FOLDER/.reflect-initialized && echo "initialized" || echo "first-run"
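If the two-step bash approach is inconvenient, a hypothetical Python equivalent of the same check (assumes the ~/.claude/projects layout described here):

from pathlib import Path

projects_dir = Path.home() / ".claude" / "projects"
project_name = Path.cwd().name.lower()
# Case-insensitive match of the encoded project folder, mirroring the grep above
matches = [d for d in projects_dir.iterdir() if d.is_dir() and project_name in d.name.lower()] \
    if projects_dir.exists() else []
if matches and (matches[0] / ".reflect-initialized").exists():
    print("initialized")
else:
    print("first-run")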
If "first-run" for this project AND user did NOT pass --scan-history:
Use AskUserQuestion to recommend historical scan:
{
"questions": [{
"question": "First time running /reflect in this project. Scan past sessions for learnings?",
"header": "First run",
"multiSelect": false,
"options": [
{"label": "Yes, scan history (Recommended)", "description": "Find corrections from past sessions in this project"},
{"label": "No, just process queue", "description": "Only process learnings captured by hooks"}
]
}]
}
If user chooses "Yes, scan history", proceed as if --scan-history was passed.
If user passed --dry-run:
If user passed --scan-history:
Scan past sessions for corrections missed by hooks. Useful for:
0.5a. Find ALL session files for this project:
First, list project folders to find the correct path pattern:
ls ~/.claude/projects/ | grep -i "$(basename $(pwd))"
Handle underscores vs hyphens: Directory names may use underscores (darwin_new) but encoded paths use hyphens (darwin-new). If first grep fails, try replacing underscores:
# If no match, try with hyphens instead of underscores
ls ~/.claude/projects/ | grep -i "$(basename $(pwd) | tr '_' '-')"
Then list ALL session files in that folder:
ls ~/.claude/projects/[PROJECT_FOLDER]/*.jsonl
Note: Project paths have / replaced with -. For /Users/bob/code/myapp, look for -Users-bob-code-myapp.
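A small Python sketch of this path encoding, including the underscore-to-hyphen fallback mentioned above (illustrative; verify against the actual folder names):

from pathlib import Path

def candidate_folders(project_path: str) -> list[str]:
    """Encode a project path the way session folders are named: '/' becomes '-'.
    Also try an underscore-to-hyphen variant, since folder names may differ."""
    encoded = project_path.replace("/", "-")   # /Users/bob/code/myapp -> -Users-bob-code-myapp
    return [encoded, encoded.replace("_", "-")]

print(candidate_folders(str(Path.cwd())))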
IMPORTANT: With --scan-history, process ALL session files (not just recent ones). This includes:
- Main session files (e.g., fa5ae539-d170-4fa8-a8d2-bf50b3ec2861.jsonl)
- Agent session files (agent-*.jsonl) - these may contain corrections too
- Apply the --days N filter by checking file modification times if specified

0.5b. Extract corrections from session files:
Session files are JSONL. Use jq to extract user messages, then grep for patterns.
CRITICAL: Filter out command expansion messages using isMeta != true. Command expansions (like /reflect itself) are stored with isMeta: true and contain documentation text that would cause false positives.
DYNAMIC PATTERN SELECTION: Before running grep, sample a few user messages to detect the conversation language. If non-English, adapt the patterns accordingly:
| Language | Example patterns to add |
|---|---|
| Russian | нет,? используй|не используй|на самом деле|запомни:|лучше|предпочитаю |
| Spanish | no,? usa|no uses|en realidad|recuerda:|prefiero|siempre usa |
| German | nein,? verwende|nicht verwenden|eigentlich|merke:|bevorzuge|immer |
Generate appropriate patterns for the detected language and combine with English patterns.
Default English patterns: remember:, no, use, don't use, actually, stop using, never use, that's wrong, I meant, use X not Y
For each .jsonl file in the project folder, extract user messages that match correction patterns. Use your judgment on the best extraction method - you can use Read, Grep, Bash with jq, or any combination that works.
What to extract:
type: "user" entries with isMeta != true)toolUseResult fields containing "user said:" followed by feedback text
Key file structure:
- Location: ~/.claude/projects/[PROJECT_FOLDER]/*.jsonl
- User messages: {"type": "user", "message": {"content": [{"type": "text", "text": "..."}]}}
- Tool rejections: {"toolUseResult": "The user doesn't want to proceed\nuser said:\n[feedback]"}

0.5b-extra. Tool rejections are HIGH confidence:
When a user stops a tool and provides feedback, this is a strong correction signal. The feedback appears after "user said:" (may be on the next line in the JSON).
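A minimal Python sketch of this extraction, covering both correction-pattern matches and tool-rejection feedback (field handling simplified; assumes the JSONL structure shown above):

import json
import re
from pathlib import Path

# English defaults from 0.5b; extend with language-specific patterns as needed
PATTERNS = re.compile(r"remember:|don't use|stop using|never use|that's wrong|i meant|actually", re.I)

def extract_corrections(session_file: Path):
    found = []
    for line in session_file.read_text().splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        # High-confidence signal: tool rejection feedback after "user said:"
        result = entry.get("toolUseResult")
        if isinstance(result, str) and "user said:" in result:
            found.append(("tool-rejection", result.split("user said:", 1)[1].strip()))
            continue
        # User messages, skipping command expansions (isMeta: true)
        if entry.get("type") == "user" and not entry.get("isMeta"):
            message = entry.get("message") or {}
            content = message.get("content", []) if isinstance(message, dict) else []
            for block in content if isinstance(content, list) else []:
                text = block.get("text", "") if isinstance(block, dict) else ""
                if text and PATTERNS.search(text):
                    found.append(("session-scan", text))
    return found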
CRITICAL: Tool rejections MUST be shown to user:
0.5c. Apply date filter if --days N specified:
0.5d. Semantic Validation (Enhanced LLM Filter):
For each extracted correction, use semantic analysis to determine if it's a REUSABLE learning.
Preferred: Use semantic detector for accuracy:
# scripts/lib/semantic_detector.py
from lib.semantic_detector import semantic_analyze
result = semantic_analyze(message)
# Returns: is_learning, type, confidence, reasoning, extracted_learning
Benefits of semantic analysis:
- Produces a clean extracted_learning statement for CLAUDE.md

If semantic analysis is unavailable (timeout, CLI error), fall back to heuristic rules:
CRITICAL RULES:
- NEVER filter remember: items - these are explicit user requests, always present them

REJECT ONLY if clearly:
ACCEPT if it mentions:
TRUST USER CORRECTIONS: For model names, API versions, tool availability, and flag/parameter values - the user has more current knowledge than Claude's training data. Do NOT try to validate whether something "exists" or is "correct". Accept user corrections as authoritative.
BORDERLINE → Get context first: If a correction seems context-specific (like "please enable that flag"), search for surrounding messages to understand WHAT flag/parameter. Often these ARE reusable learnings about API parameters.
# Get context around a correction (find line number, then show surrounding)
grep -n "enable that flag" "$SESSION_FILE" | head -1
For each ACCEPTED correction, create:
0.5e. Deduplicate:
0.5f. Build working list:
SANITY CHECK before proceeding:
MANDATORY PRESENTATION RULE: If your extraction (grep, search, jq) found ANY matches:
Format for presenting raw matches:
═══════════════════════════════════════════════════════════
RAW MATCHES FOUND — [N] items need review
═══════════════════════════════════════════════════════════
#1 [source: session-scan | tool-rejection]
"[raw text from extraction]"
→ Proposed: [actionable learning] | Scope: [global/project]
#2 ...
═══════════════════════════════════════════════════════════
Then use AskUserQuestion to let user select which to keep.
NEVER conclude "0 learnings found" if:
0.5g. Extract Tool Execution Errors (Project-Specific Only):
Scan session files for REPEATED tool execution errors that reveal project-specific context.
What to extract:
Tool execution errors (is_error: true) that indicate project-specific issues:
What to EXCLUDE:
Use the extraction script:
python scripts/extract_tool_errors.py --project "$(pwd)" --min-count 2 --json
Or use the utility functions directly:
from lib.reflect_utils import extract_tool_errors, aggregate_tool_errors
# Extract from session files
errors = []
for session_file in session_files:
errors.extend(extract_tool_errors(session_file, project_specific_only=True))
# Aggregate by pattern (only keep 2+ occurrences)
aggregated = aggregate_tool_errors(errors, min_occurrences=2)
Semantic validation (optional):
from lib.semantic_detector import validate_tool_errors
validated = validate_tool_errors(aggregated)
Add validated error patterns to working list:
- Use suggested_guideline or refined_guideline as the proposed entry

Example output:
═══════════════════════════════════════════════════════════
TOOL ERROR PATTERNS — [N] project-specific issues found
═══════════════════════════════════════════════════════════
#1 [connection_refused] — 5 occurrences
Sample: "Connection refused to localhost:5432"
→ Proposed: "Check DATABASE_URL in .env for PostgreSQL connection"
#2 [env_undefined] — 3 occurrences
Sample: "SUPABASE_URL is not defined"
→ Proposed: "Load .env file before accessing environment variables"
═══════════════════════════════════════════════════════════
If tool error patterns are found, add them to the working list alongside user corrections.
- Read the queue file: ~/.claude/learnings-queue.json
- --scan-history will add items from the historical scan

After loading queue items, validate them using AI-powered semantic analysis. This provides:
1.5a. Run semantic analysis on each queue item:
Use the semantic detector (via claude -p) to analyze each queued message:
# scripts/lib/semantic_detector.py provides:
from lib.semantic_detector import validate_queue_items
# For each item, semantic analysis returns:
# {
# "is_learning": bool, # Should this be kept?
# "type": "correction"|"positive"|"explicit"|null,
# "confidence": 0.0-1.0, # AI's confidence score
# "reasoning": str, # Why it was classified this way
# "extracted_learning": str # Clean, actionable statement
# }
1.5b. Filter and enhance queue items:
For each queue item:
item["message"]is_learning == false → remove from working list (not a reusable learning)is_learning == true:
max(original_confidence, semantic_confidence)extracted_learning for cleaner CLAUDE.md entriessemantic_type to preserve classification1.5c. Fallback behavior:
If semantic analysis fails (timeout, Claude CLI not available, API error):
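A minimal sketch combining the 1.5b loop with this fallback, assuming the semantic_analyze interface from 1.5a (the proposed_entry field name is illustrative):

import json
from pathlib import Path
from lib.semantic_detector import semantic_analyze  # provided by scripts/lib, per 1.5a

queue_path = Path.home() / ".claude" / "learnings-queue.json"
queue_items = json.loads(queue_path.read_text()) if queue_path.exists() else []

kept = []
for item in queue_items:
    if item["message"].lower().startswith("remember:"):
        kept.append(item)                                  # explicit requests are never filtered
        continue
    try:
        result = semantic_analyze(item["message"])
    except Exception:
        kept.append(item)                                  # fallback: keep for heuristic review
        continue
    if not result["is_learning"]:
        continue                                           # drop non-reusable items
    item["confidence"] = max(item.get("confidence", 0.5), result["confidence"])
    item["proposed_entry"] = result["extracted_learning"]  # illustrative field name
    item["semantic_type"] = result["type"]
    kept.append(item)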
1.5d. Report filtering results:
Show what was filtered:
═══════════════════════════════════════════════════════════
SEMANTIC VALIDATION — [N] items analyzed
═══════════════════════════════════════════════════════════
✓ [M] confirmed as learnings
✗ [K] filtered (not reusable learnings)
Filtered items (skipped):
- "Hello, how are you?" — [greeting, not a learning]
- "Can you help me?" — [question, not a learning]
═══════════════════════════════════════════════════════════
NOTE: remember: items are ALWAYS kept regardless of semantic analysis — explicit user requests are never filtered.
Note: This step is for analyzing the CURRENT session only (when NOT using --scan-history).
If --scan-history was passed, skip to Step 3 with results from Step 0.5.
Analyze the current session for corrections missed by real-time hooks:
2a. Find current session file:
List session files for this project (most recent first):
ls -lt ~/.claude/projects/ | grep -i "$(basename $(pwd))"
Then list files in that folder and pick the most recent non-agent file:
ls -lt ~/.claude/projects/[PROJECT_FOLDER]/*.jsonl | head -5
Agent files (agent-*.jsonl) are sub-conversations; focus on main session files for current session analysis.
2b. Extract tool rejections (HIGH confidence corrections):
Search the current session file for toolUseResult fields containing "user said:" followed by feedback. These are high-confidence corrections.
2c. Extract user messages with correction patterns:
Search the current session file for user messages matching correction patterns. Use the same patterns from Step 0.5b. Remember:
- Skip isMeta: true entries (command expansions like /reflect itself)

2d. Also reflect on conversation context:
2e. Semantic Validation (LLM Filter):
If there are extracted corrections from 2b or 2c, use semantic analysis for accurate classification:
from lib.semantic_detector import semantic_analyze
result = semantic_analyze(message)
# Use result["is_learning"], result["extracted_learning"], result["confidence"]
If semantic analysis is unavailable, fall back to heuristic evaluation:
2f. Add findings to working list: For each ACCEPTED learning:
Get current project path. For each queue item, compare item.project with current project:
CASE A: Same project
CASE B: Different project, looks GLOBAL (message contains: gpt-, claude-, model names, general patterns like "always/never")
CASE C: Different project, looks PROJECT-SPECIFIC (message contains: specific DB names, file paths, project-specific tools)
Heuristics:
- Contains gpt-[0-9] or claude- → GLOBAL (model name)
- Contains always|never|don't + generic verb → GLOBAL (general rule)

Before checking against CLAUDE.md, consolidate similar learnings within the current batch.
3.5a. Group by semantic similarity:
Analyze all learnings in the working list. Look for entries that:
Example - Before consolidation:
1. "Use gpt-5.1 for complex tasks"
2. "Prefer gpt-5.1 over gpt-5 for reasoning"
3. "gpt-5.1 is better for hard problems"
Example - After consolidation:
1. "Use gpt-5.1 for complex reasoning (replaces gpt-5)"
3.5b. Present consolidation proposals:
If similar learnings are detected, show:
═══════════════════════════════════════════════════════════
SIMILAR LEARNINGS DETECTED
═══════════════════════════════════════════════════════════
These 3 learnings appear related:
#2: "Use gpt-5.1 for complex tasks"
#5: "Prefer gpt-5.1 over gpt-5 for reasoning"
#7: "gpt-5.1 is better for hard problems"
Proposed consolidation:
→ "Use gpt-5.1 for complex reasoning tasks (replaces gpt-5)"
═══════════════════════════════════════════════════════════
3.5c. Use AskUserQuestion for consolidation:
{
"questions": [{
"question": "Consolidate these 3 similar learnings into one?",
"header": "Dedupe",
"multiSelect": false,
"options": [
{"label": "Yes, consolidate", "description": "Merge into: 'Use gpt-5.1 for complex reasoning tasks'"},
{"label": "Keep separate", "description": "Add all 3 as individual entries"},
{"label": "Edit consolidation", "description": "Let me modify the merged text"}
]
}]
}
3.5d. Consolidation rules:
3.5e. Skip if no duplicates:
For each learning kept after filtering, search BOTH CLAUDE.md files:
grep -n -i "keyword" ~/.claude/CLAUDE.md
grep -n -i "keyword" CLAUDE.md
If duplicate found:
5a. Display condensed summary table:
Show all learnings in a compact table format:
════════════════════════════════════════════════════════════
LEARNINGS SUMMARY — [N] items found
════════════════════════════════════════════════════════════
┌────┬─────────────────────────────────────────┬──────────┬────────┐
│ # │ Learning │ Scope │ Status │
├────┼─────────────────────────────────────────┼──────────┼────────┤
│ 1 │ Use DB for persistent storage │ project │ ✓ new │
│ 2 │ Backoff on actual errors only │ global │ ✓ new │
│ ...│ ... │ ... │ ... │
└────┴─────────────────────────────────────────┴──────────┴────────┘
Destinations: [N] → Global, [M] → Project
Duplicates: [K] items will be merged with existing entries
5b. Use AskUserQuestion for strategy:
Use the AskUserQuestion tool:
{
"questions": [{
"question": "How would you like to process these [N] learnings?",
"header": "Action",
"multiSelect": false,
"options": [
{"label": "Apply all (Recommended)", "description": "Add [X] new entries, merge [K] duplicates with recommended scopes"},
{"label": "Select which to apply", "description": "Choose specific learnings from grouped lists"},
{"label": "Review details first", "description": "Show full details for each learning before deciding"},
{"label": "Skip all", "description": "Don't apply any learnings, clear the queue"}
]
}]
}
5c. Handle user selection:
Full learning card format (for "Review details first"):
════════════════════════════════════════════════════════════
LEARNING [N] of [TOTAL] — [source: queued/session-scan/tool-rejection]
════════════════════════════════════════════════════════════
Original message:
"[the user's original text]"
Proposed addition:
┌──────────────────────────────────────────────────────────┐
│ ## [Section Name] │
│ - [Exact bullet point that will be added] │
└──────────────────────────────────────────────────────────┘
Duplicate check:
✓ None found
OR
⚠️ SIMILAR in [global/project] CLAUDE.md:
Line [N]: "[existing content]"
════════════════════════════════════════════════════════════
Group learnings by destination and use AskUserQuestion with multiSelect.
Rules:
Example for GLOBAL learnings:
{
"questions": [
{
"question": "Select GLOBAL learnings to apply:",
"header": "Global",
"multiSelect": true,
"options": [
{"label": "#2 Backoff errors", "description": "Implement backoff only on actual errors, not artificial delays"},
{"label": "#3 DB cache", "description": "Use local database cache to minimize data fetching"},
{"label": "#4 Batch+delays", "description": "Use batching with stochastic delays for API rate limits"},
{"label": "#5 Use venv", "description": "Always use virtual environments for Python projects"}
]
}
]
}
If >4 global items: Add second question with header "Global+"
Example for PROJECT learnings:
{
"questions": [
{
"question": "Select PROJECT learnings to apply:",
"header": "Project",
"multiSelect": true,
"options": [
{"label": "#1 DB storage", "description": "Use database for persistent tracking data"},
{"label": "#6 DB ports", "description": "Assign unique ports per database instance"}
]
}
]
}
Selection rules:
6a. Show summary of changes:
════════════════════════════════════════════════════════════
SUMMARY: [N] changes ready to apply
════════════════════════════════════════════════════════════
Project CLAUDE.md ([path]):
Line [N]: UPDATE "[old]" → "[new]"
After line [N]: ADD "[new entry]"
Global CLAUDE.md (~/.claude/CLAUDE.md):
Line [N]: REPLACE "[old]" → "[new]"
After line [N]: ADD "[new entry]"
Skipped: [N] learnings (including [M] from other projects)
════════════════════════════════════════════════════════════
6b. Use AskUserQuestion for confirmation:
{
"questions": [{
"question": "Apply [N] learnings to CLAUDE.md files?",
"header": "Confirm",
"multiSelect": false,
"options": [
{"label": "Yes, apply all", "description": "[X] to Global, [Y] to Project CLAUDE.md"},
{"label": "Go back", "description": "Return to selection to adjust"},
{"label": "Cancel", "description": "Don't apply anything, keep queue"}
]
}]
}
6c. Handle response:
Only after final confirmation:
7a. Apply to CLAUDE.md (Primary Targets):
7b. Apply to AGENTS.md (if exists):
Check if AGENTS.md exists:
test -f AGENTS.md && echo "AGENTS.md found"
If AGENTS.md exists, apply the SAME learnings using this format:
## Claude-Reflect Learnings
<!-- Auto-generated by claude-reflect. Do not edit this section manually. -->
### Model Preferences
- Use gpt-5.1 for reasoning tasks
### Tool Usage
- Use local database cache to minimize API calls
<!-- End claude-reflect section -->
Update Strategy:
- If the managed section exists (look for the <!-- Auto-generated by claude-reflect marker), update the content between it and <!-- End claude-reflect section -->; otherwise append the section.

Then clear the queue:

echo "[]" > ~/.claude/learnings-queue.json
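A minimal Python sketch of this marker-based update (illustrative; markers match the format shown above):

from pathlib import Path

START = "<!-- Auto-generated by claude-reflect. Do not edit this section manually. -->"
END = "<!-- End claude-reflect section -->"

def update_agents_md(path: Path, new_section: str) -> None:
    """Rewrite the managed block between the markers, or append it if missing."""
    if not path.exists():
        return                                   # AGENTS.md is only updated when it already exists
    text = path.read_text()
    if START in text and END in text:
        before, rest = text.split(START, 1)
        _, after = rest.split(END, 1)
        text = before + START + "\n" + new_section + "\n" + END + after
    else:
        text += "\n## Claude-Reflect Learnings\n" + START + "\n" + new_section + "\n" + END + "\n"
    path.write_text(text)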
════════════════════════════════════════════════════════════
DONE: Applied [N] learnings
════════════════════════════════════════════════════════════
✓ ~/.claude/CLAUDE.md [N] entries
✓ ./CLAUDE.md [N] entries
✓ AGENTS.md [N] entries (if exists)
Skipped: [N]
════════════════════════════════════════════════════════════
Create marker file for THIS project so first-run detection won't trigger again. Use the PROJECT_FOLDER you found in First-Run Detection:
touch ~/.claude/projects/PROJECT_FOLDER/.reflect-initialized
Replace PROJECT_FOLDER with the actual folder name (e.g., -Users-bob-myproject).
Use the exact model names and versions the user gave (e.g., gpt-5.2 not gpt-5.1).

Use these standard headers:
- ## LLM Model Recommendations — model names, versions
- ## Tool Usage — MCP, APIs, which tool for what
- ## Project Conventions — coding style, patterns
- ## Common Errors to Avoid — gotchas, mistakes
- ## Environment Setup — venv, configs, paths

If CLAUDE.md exceeds 150 lines, warn:
Note: CLAUDE.md is [N] lines. Consider consolidating entries.
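For reference, a trivial check (illustrative):

from pathlib import Path

path = Path("CLAUDE.md")
if path.exists():
    lines = len(path.read_text().splitlines())
    if lines > 150:
        print(f"Note: CLAUDE.md is {lines} lines. Consider consolidating entries.")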
/reflect: Reflect on the previous response and output, based on a self-refinement framework for iterative improvement with complexity triage and verification.